This post was co-written by Michael Wirig, Software Engineering Manager at Grōv Technologies.
A substantial percentage of the world’s habitable land is used for livestock farming for dairy and meat production. The dairy industry has leveraged technology to gain insights that have led to drastic improvements, and the pace continues to accelerate. A gallon of milk produced in 2017 involved 30% less water, 21% less land, a 19% smaller carbon footprint, and 20% less manure than it did in 2007 (US Dairy, 2019). By focusing on smarter water usage and sustainable land use, livestock farming can grow to provide sustainable, nutrient-dense food for consumers and livestock alike.
Grōv Technologies (Grōv) has pioneered the Olympus Tower Farm, a fully automated Controlled Environment Agriculture (CEA) system. Unique amongst vertical farming startups, Grōv is growing cattle feed to improve the sustainable use of land for livestock farming while increasing the economic margins for dairy and beef producers.
The challenges of CEA
The set of growing conditions for a CEA is called a “recipe,” which is a combination of ingredients like temperature, humidity, light, carbon dioxide levels, and water. The optimal recipe is dynamic and is sensitive to its ingredients. Crops must be monitored in near-real time, and CEAs should be able to self-correct in order to maintain the recipe. To build a system with these capabilities requires answers to the following questions:
Which parameters need to be measured for indoor cattle feed production?
Which sensors offer the right trade-off between accuracy and price at scale?
Where do you place the sensors to ensure a consistent crop?
How do you correlate the data from sensors to the nutrient value?
To progress from a passively monitored system to a self-correcting, autonomous one, the CEA platform also needs to address:
How to maintain optimum crop conditions
How the system can learn and adapt to new seed varieties
How to communicate key business drivers such as yield and dry matter percentage
Grōv partnered with AWS Professional Services (AWS ProServe) to build a digital CEA platform addressing the challenges posed above.
Tower automation and edge platform
The Olympus Tower is instrumented for measuring recipe ingredients by combining the mechanical, electrical, and domain expertise of the Grōv team with the IoT edge and sensor expertise of the AWS ProServe team. The teams identified a primary set of features such as height, weight, and evenness of the growth to be measured at multiple stages within the Tower. Sensors were also added to measure secondary features such as water level, water pH, temperature, humidity, and carbon dioxide.
The teams designed and developed a purpose-built modular and industrial sensor station. Each sensor station has sensors for direct measurement of the features identified. The sensor stations are extended to support indirect measurement of features using a combination of Computer Vision and Machine Learning (CV/ML).
The trays with the growing cattle feed circulate through the Olympus Tower. A growth cycle starts on a tray with seeding, circulates through the tower over the cycle, and returns to the starting position to be harvested. The sensor station at the seeding location on the Olympus Tower tags each new growth cycle in a tray with a unique “Grow ID.” As trays pass by, each sensor station in the Tower collects the feature data. The firmware, jointly developed for the sensor stations, uses the AWS IoT SDK to stream the sensor data, along with the Grow ID and metadata specific to each sensor station, to an on-site edge gateway powered by AWS IoT Greengrass every five minutes. Dedicated AWS Lambda functions manage the lifecycle of the Grow IDs and the sensor data processing at the edge.
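To make that flow concrete, here is a minimal sketch of what such a publish loop could look like using the AWS IoT Device SDK for Python. The topic name, station ID, payload fields, and sensor helper functions are illustrative assumptions, not Grōv’s actual firmware.

import json
import time
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

def read_height_mm():
    return 120  # placeholder for the real sensor driver

def read_weight_kg():
    return 3.4  # placeholder for the real sensor driver

# Illustrative endpoint and credential paths; the real values are deployment-specific.
client = AWSIoTMQTTClient("sensor-station-12")
client.configureEndpoint("<greengrass-or-iot-endpoint>", 8883)
client.configureCredentials("rootCA.pem", "private.key", "certificate.pem")
client.connect()

while True:
    reading = {
        "growId": "GROW-2020-0042",       # assigned at seeding
        "stationId": "sensor-station-12",
        "timestamp": int(time.time()),
        "heightMm": read_height_mm(),
        "weightKg": read_weight_kg(),
    }
    client.publish("tower/sensors/station-12", json.dumps(reading), 1)
    time.sleep(300)  # stream every five minutes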
The Grōv team developed AWS IoT Greengrass Lambda functions that run at the edge to ingest critical metrics from the operational automation software running the Olympus Towers. This information not only enables monitoring of operational efficiency, but also provides the hooks to control the feedback loop.
These two sources of data were augmented with a third: sensor stations installed at the building or site level capture environmental data such as weather and the energy consumption of the Towers.
All three sources of data are streamed to AWS IoT Greengrass and processed by AWS Lambda functions. The edge software also fuses and correlates all categories of data together. This enables two major capabilities for the Grōv team: real-time operational control at the edge and enriched data streamed into the cloud.
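As a rough illustration of that fusion step, the sketch below shows a Greengrass Lambda function that joins incoming readings by Grow ID and republishes an enriched record. The topic layout, payload fields, and in-memory join are illustrative assumptions, not Grōv’s actual implementation.

import json
import greengrasssdk  # available inside AWS IoT Greengrass cores

iot = greengrasssdk.client("iot-data")
readings_by_grow_id = {}  # sketch only; real state would be persisted or windowed

def function_handler(event, context):
    # Payload shape is an assumption for illustration.
    message = json.loads(event) if isinstance(event, (str, bytes)) else event
    grow_id = message.get("growId", "unknown")
    readings_by_grow_id.setdefault(grow_id, []).append(message)
    fused = {
        "growId": grow_id,
        "samples": len(readings_by_grow_id[grow_id]),
        "latest": message,
    }
    # Act locally in real time, and forward the enriched record to the cloud.
    iot.publish(topic="tower/fused/" + grow_id, payload=json.dumps(fused))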
Cloud pipeline/platform: analytics and visualization
A ReactJS-based dashboard application, powered by Amazon API Gateway and AWS Lambda functions, reports relevant metrics such as daily yield and machine uptime.
A data pipeline is deployed to analyze data using Amazon QuickSight. AWS Glue is used to create a dataset from the data stored in Amazon S3, and Amazon Athena is used to query the dataset and make it available to Amazon QuickSight. This gives the extended Grōv team of research scientists the ability to perform a series of what-if analyses on the data coming in from the Tower systems, beyond what is available in the React-based dashboard.
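As a sketch of that analytics path, a research scientist could run an ad hoc Athena query against the Glue-cataloged data with boto3. The database, table, column names, and results bucket below are hypothetical.

import time
import boto3

athena = boto3.client("athena")

# Hypothetical database and table created by AWS Glue.
query = "SELECT growId, avg(heightMm) AS avg_height FROM tower_sensor_data GROUP BY growId"
execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "grov_cea"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then fetch the results.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"][1:]:
    print([col.get("VarCharValue") for col in row["Data"]])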
Completing the data-driven loop
Now that the data has been collected from all sources and stored in a data lake architecture, the Grōv CEA platform has a strong foundation for harnessing insights and delivering customer outcomes using machine learning.
The integrated and fused data from the edge (sourced from the Olympus Tower instrumentation, Olympus automation software, and site-level sensors) is correlated with the lab analysis performed by the Grōv Research Center (GRC). Harvest samples are routinely collected and sent to the lab, which performs wet chemistry and microbiological analysis. The lab results for sampled trays are associated with the corresponding sensor data through their Grow IDs. This serves as a mechanism for labeling and correlating the recipe data with the parameters used by dairy and beef producers: dry matter percentage, micro- and macronutrients, and the presence of mycotoxins.
Grōv has chosen Amazon SageMaker to build a machine learning pipeline on its comprehensive dataset, which will enable fine-tuning the growing protocols in near-real time. The historical data collection also unlocks future machine learning use cases such as detecting anomalous sensor readings and monitoring sensor health.
Because the solution is flexible, the Grōv team plans to integrate data from studies on animal health and feed efficiency into the CEA platform. Machine learning on that data will enhance the tuning of recipe ingredients that impact the animals’ health. This will give farmers an unprecedented view of the impact of feed nutrition on the end product and the consumer.
Conclusion
Grōv Technologies and AWS ProServe have built a strong foundation for an extensible and scalable CEA platform architecture that will nourish animals for better health and yield, produce healthier foods, and enable continued research into dairy production, rumination, and animal health, empowering sustainable farming practices.
In this month’s issue of AWS Architecture Monthly, Worldwide Tech Lead for Agriculture, Karen Hildebrand (who’s also a fourth-generation farmer) refers to agriculture as “the connective tissue our world needs to survive.” As our expert for August’s Agriculture issue, she also talks about what role cloud will play in future development efforts in this industry and why developing personal connections with our AWS agriculture customers is one of the most important aspects of our jobs.
You’ll also buzz through the world of high tech beehives, milk the information about data analytics-savvy cows, and see what the reference architecture of a Smart Farm looks like.
In August’s Agriculture issue
Ask an Expert: Karen Hildebrand, AWS WW Agriculture Tech Leader
Customer Success Story: TINE & Crayon: Revolutionizing the Norwegian Dairy Industry Using Machine Learning on AWS
Blog Post: Beewise Combines IoT and AI to Offer an Automated Beehive
Reference Architecture: Smart Farm: Enabling Sensor, Computer Vision, and Edge Inference in Agriculture
Customer Success Story: Farmobile: Empowering the Agriculture Industry Through Data
Blog Post: The Cow Collar Wearable: How Halter benefits from FreeRTOS
Related Videos: DuPont, mPrest & Netafim, and Veolia
Survey opportunity
This month, we’re also asking you to take a 10-question survey about your experiences with this magazine. The survey is hosted by an external company (Qualtrics), so the survey button below doesn’t lead to our website. Please note that AWS will own the data gathered from this survey, and we will not share the results we collect with survey respondents. Your responses to this survey will be subject to Amazon’s Privacy Notice. Please take a few moments to give us your opinions.
Visit Flipboard, a personalized mobile magazine app that you can also read on your computer.
We hope you’re enjoying Architecture Monthly, and we’d like to hear from you. Leave us a star rating and a comment on the Amazon Kindle Newsstand page, or contact us anytime at [email protected].
Welcome to the 10th edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!
In case you missed our last ICYMI, check out what happened last quarter here.
AWS Lambda
AWS Lambda functions can now mount an Amazon Elastic File System (EFS). EFS is a scalable and elastic NFS file system that stores data within and across multiple Availability Zones (AZs) for high availability and durability. In this way, you can use a familiar file system interface to store and share data across all concurrent execution environments of one, or more, Lambda functions. EFS supports full file system access semantics, such as strong consistency and file locking.
Using different EFS access points, each Lambda function can access different paths in a file system, or use different file system permissions. You can share the same EFS file system with Amazon EC2 instances, containerized applications using Amazon ECS and AWS Fargate, and on-premises servers.
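A minimal sketch of a Lambda handler using such a mount could look like the following; the mount path /mnt/shared and the file name are configuration choices for this example, not defaults.

import json
import os

MOUNT_PATH = "/mnt/shared"  # must match the local mount path configured on the function

def handler(event, context):
    counter_file = os.path.join(MOUNT_PATH, "counter.txt")
    # Read the last value written by any concurrent execution or other function.
    count = 0
    if os.path.exists(counter_file):
        with open(counter_file) as f:
            count = int(f.read() or 0)
    count += 1
    with open(counter_file, "w") as f:
        f.write(str(count))
    return {"statusCode": 200, "body": json.dumps({"count": count})}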
We also added support for resource policies in Amazon EventBridge. Resource policies allow sharing of a schema registry across different AWS accounts and organizations. In this way, developers on different teams can search for and use any schema that another team has added to the shared registry.
Ben Smith, AWS Serverless Developer Advocate, published a guide on how to capture user events and monitor user behavior using the Amazon EventBridge partner integration with Auth0. This enables better insight into your application to help deliver a more customized experience for your users.
Rob Sutter, AWS Serverless Developer Advocate, has published a video series on Step Functions. We’ve compiled a playlist on YouTube to help you on your serverless journey.
AWS Amplify
In April, the AWS Amplify Framework team announced that it has rearchitected the Amplify UI component library to enable JavaScript developers to easily add authentication scenarios to their web apps. The authentication components include numerous improvements over previous versions. These include the ability to automatically sign in users after sign-up confirmation, better customization, and improved accessibility.
Amplify also announced the availability of Amplify Framework iOS and Amplify Framework Android libraries and tools. These help mobile application developers to easily build secure and scalable cloud-powered applications. Previously, mobile developers relied on a combination of tools and SDKs along with the Amplify CLI to create and manage a backend.
These new native libraries are oriented around use cases such as authentication, data storage and access, and machine learning predictions. They provide a declarative interface that enables you to programmatically apply best practices with abstractions.
Amazon Managed Apache Cassandra Service (MCS) is now generally available under the new name: Amazon Keyspaces (for Apache Cassandra). Amazon Keyspaces is built on Apache Cassandra and can be used as a fully managed serverless database. Your applications can read and write data from Amazon Keyspaces using your existing Cassandra Query Language (CQL) code, with little or no changes. Danilo Poccia explains how to use Amazon Keyspaces with API Gateway and Lambda in this launch post.
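Beyond that post, here is a minimal connection sketch using the open-source cassandra-driver for Python, assuming you have generated service-specific credentials in IAM and downloaded the Amazon root certificate; the endpoint, Region, and credentials are placeholders.

from ssl import SSLContext, PROTOCOL_TLSv1_2, CERT_REQUIRED
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

ssl_context = SSLContext(PROTOCOL_TLSv1_2)
ssl_context.load_verify_locations("AmazonRootCA1.pem")  # Amazon root CA, downloaded separately
ssl_context.verify_mode = CERT_REQUIRED

# Service-specific credentials generated for an IAM user (placeholders).
auth = PlainTextAuthProvider(username="my-user-at-123456789012", password="<generated-password>")
cluster = Cluster(["cassandra.us-east-1.amazonaws.com"], port=9142,
                  ssl_context=ssl_context, auth_provider=auth)
session = cluster.connect()

# Existing CQL works with little or no change.
for row in session.execute("SELECT keyspace_name FROM system_schema.keyspaces"):
    print(row.keyspace_name)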
Our team is always working to build and write content to help our customers better understand all our serverless offerings. Here is a list of the latest content published to the AWS Compute Blog this quarter.
Introducing the new serverless LAMP stack
Ben Smith, AWS Serverless Developer Advocate, introduces the Serverless LAMP stack. He explains how to use serverless technologies with PHP. Learn about the available tools, frameworks and strategies to build serverless applications, and why now is the right time to start.
Building a location-based, scalable, serverless web app
James Beswick, AWS Serverless Developer Advocate, walks through building a location-based, scalable, serverless web app. Ask Around Me is an example project that allows users to ask questions within a geofence to create an engaging, community-driven experience.
Julian Wood, AWS Serverless Developer Advocate, published two blog series on building well-architected serverless applications. Learn how to better understand application health and lifecycle management.
Go beyond the browser with these creative and physical projects. Moheeb Zara, AWS Serverless Developer Advocate, published several serverless powered device hacks, all using off the shelf parts.
We hold AWS Online Tech Talks covering serverless topics throughout the year. You can find these in the serverless section of the AWS Online Tech Talks page. We also regularly join in on podcasts and record short videos so you can learn in quick, bite-sized chunks.
Learn how to build a complete serverless web application for a popular theme park called Innovator Island. James Beswick created a video series to walk you through this popular workshop at your own pace.
In May, we held a new virtual event series, the Serverless-First Function, to help you and your organization get the most out of the cloud. The first event, on May 21, included sessions from Amazon CTO, Dr. Werner Vogels, and VP of Serverless at AWS, David Richardson. The second event, May 28, was packed with sessions with our AWS Serverless Developer Advocate team. Catch up on the AWS Twitch channel.
Live streams
The AWS Serverless Developer Advocate team hosts several weekly livestreams on the AWS Twitch channel covering a wide range of topics. You can catch up on all our past content, including workshops, on the AWS Serverless YouTube channel.
Eric Johnson hosts “Sessions with SAM” every Thursday at 10AM PST. Each week, Eric shows how to use SAM to solve different serverless challenges. He explains how to use SAM templates to build powerful serverless applications. Catch up on the last few episodes.
James Beswick, AWS Serverless Developer Advocate, has compiled a round-up of all his content from Q2. He has plenty of videos ranging from beginner to advanced topics.
AWS Serverless Heroes
We’re pleased to welcome Kyuhyun Byun and Serkan Özal to the growing list of AWS Serverless Heroes. The AWS Hero program is a selection of worldwide experts that have been recognized for their positive impact within the community. They share helpful knowledge and organize events and user groups. They’re also contributors to numerous open-source projects in and around serverless technologies.
Still looking for more?
The Serverless landing page has much more information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more getting started tutorials.
Follow the AWS Serverless team on our new LinkedIn page, where we share all the latest news and events. You can also follow all of us on Twitter to see the latest news, follow conversations, and interact with the team.
Join AWS Internet of Things (IoT) experts as they walk you through building IoT applications with AWS live on Twitch. Along the way, AWS IoT customers and partners will co-host, share best tips and tricks, and make sure all of your questions are answered. By the end of each episode, you’ll learn how to build IoT applications across industrial, connected home, and everything in between.
In this special episode, our hosts, Rudy Chetty and Wale Oladehin, go all out in an ask-me-anything style and dive into audience-prompted topics including IoT operations, IoT best practices, device management, mobile development, analytics, and more. From AWS IoT Greengrass to AWS IoT SiteWise to AWS IoT Analytics, get ready to IoT All the Things across a wide range of IoT software and services in our Season 2, Episode 4 “IoT All the Things, AMA Edition.”
When we asked Rudy about his experience filming this special episode, he had this to say:
“I love the spontaneity of live-streaming. It’s a really interesting space to be able to be challenged by customers through questions. You never know what you’ll be asked, and you have to be on your toes constantly, so to speak. Case in point: I never thought I’d be discussing weevils and how to detect them in rice silos but that’s exactly what a customer asked about. Furthermore, it’s fascinating to not only be able to dive into a solution with them, but also see what use cases they dream up for AWS IoT.”
This post walks through deploying a web interface to view the live stream and control the robot. The application is built using AWS Amplify and Vue.js. Amplify is a development framework that makes it easy to add authentication, hosting, and other AWS resources. It also provides a pipeline for deploying web applications.
I use the Amplify Command Line Interface (CLI) to create an authentication flow for user sign-in using Amazon Cognito. I then show how to set up an authorizer in API Gateway so that only authenticated users can drive the robot. An AWS Identity and Access Management (IAM) role sets permissions so users can access Kinesis Video Streams to view the live video feed. The web application is then configured and run locally for testing. Finally, using the Amplify CLI, I show how to add hosting and publish a production-ready web application.
An AWS account. This project can be completed using the AWS Free Tier.
Amplify CLI and project setup
An architecture diagram showing the client relationship between the AWS resources deployed by Amplify.
The Amplify CLI allows you to create and manage resources on AWS. With the libraries and UI components provided by the Amplify Framework, you can build powerful applications using a variety of cloud services.
The web interface for the telepresence robot is built using Amplify Vue.js components for user registration and sign-in. Download the application and use the Amplify CLI to configure resources for the web application.
In the first guide, API Gateway is used to create a REST endpoint to send commands to the robot. Currently, the endpoint accepts requests without any authentication. To ensure that only authenticated users can control the robot, you must create an authorizer for the API.
The backend resources deployed by the Amplify web application include a Cognito User Pool. This is a user directory that provides sign-up and sign-in services, user profiles, and identity providers. The following instructions demonstrate how to configure an authorizer on API Gateway that verifies access using a user pool (a scripted equivalent is sketched after the steps).
Choose the API created in the first guide for driving the robot.
Choose Authorizers from the menu.
Choose Create New Authorizer. Choose Cognito for Type and select the user pool created by the Amplify CLI. Set Token Source to Authorization.
Choose Create.
Choose Resources from the menu.
Choose POST, Method Request.
Set Authorization to the newly created authorizer.
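For reference, here is a rough boto3 equivalent of those console steps; the API ID, resource ID, and user pool ARN are placeholders you would look up first, and this is a sketch rather than the guide’s own method.

import boto3

apigw = boto3.client("apigateway")

# Placeholders: look these up in the API Gateway and Cognito consoles or via the CLI.
rest_api_id = "<rest-api-id>"
post_resource_id = "<resource-id>"
user_pool_arn = "arn:aws:cognito-idp:us-east-1:123456789012:userpool/<pool-id>"

authorizer = apigw.create_authorizer(
    restApiId=rest_api_id,
    name="RobotUserPoolAuthorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=[user_pool_arn],
    identitySource="method.request.header.Authorization",
)

# Require the authorizer on the POST method used to drive the robot.
apigw.update_method(
    restApiId=rest_api_id,
    resourceId=post_resource_id,
    httpMethod="POST",
    patchOperations=[
        {"op": "replace", "path": "/authorizationType", "value": "COGNITO_USER_POOLS"},
        {"op": "replace", "path": "/authorizerId", "value": authorizer["id"]},
    ],
)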
Adding permissions
The web application loads a component for viewing video from the robot over a WebRTC connection. WebRTC is a protocol for negotiating peer-to-peer data connections by using a signaling channel.
The previous guide configured the robot to use a Kinesis Video Signaling Channel. Users signed in to the web application must be granted permissions for Kinesis Video Streams in order to access the signaling channel.
When the Amplify CLI deploys an authentication flow, it creates a role in IAM. Cognito grants the permissions of this role to users based on matching conditions.
The Trust Relationship on the authRole controls when the role’s permissions can be assumed; in this case, for a matching “authenticated” user from the identity pool.
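As an illustration of the kind of policy involved, the following boto3 sketch attaches an inline policy to the authRole granting viewer-side Kinesis Video Streams actions; the role name, channel ARN, and exact action list are assumptions to adapt to your own deployment.

import json
import boto3

iam = boto3.client("iam")

# Hypothetical role name; Amplify generates one similar to "amplify-<app>-<env>-authRole".
role_name = "<your-amplify-authRole>"
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "kinesisvideo:DescribeSignalingChannel",
            "kinesisvideo:GetSignalingChannelEndpoint",
            "kinesisvideo:GetIceServerConfig",
            "kinesisvideo:ConnectAsViewer",
        ],
        "Resource": "arn:aws:kinesisvideo:*:*:channel/<robot-channel>/*",
    }],
}
iam.put_role_policy(RoleName=role_name, PolicyName="RobotViewerAccess",
                    PolicyDocument=json.dumps(policy))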
The authorizer allows authenticated users to invoke the Lambda function through API Gateway. The permissions set on the authRole control access to the live video. The web application must know the endpoint for sending commands and the Kinesis Video Signaling Channel to use for the robot.
This information is configured in web-app/src/main.js. It requires a file named config.json to let the application know which endpoint and signaling channel to use.
Inside the application folder aws-serverless-telepresence-robot/web-app/src, create a new file named config.json.
{
"endpoint": "",
"channelARN": ""
}
Replace endpoint with the Invoke URL of the robot API. This can be found in API Gateway console under Stages, Prod. It can also be found under Outputs in the AWS CloudFormation stack created by the aws-serverless-telepresence-robot serverless application from the first guide.
Replace channelARN with the ARN of your robot’s signaling channel. This can be found in the Amazon Kinesis Video Streams console under Signaling channels.
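If you prefer scripting, both values can be fetched with boto3 and written to config.json; the stack and channel names below are assumptions based on the first guide.

import json
import boto3

cfn = boto3.client("cloudformation")
kvs = boto3.client("kinesisvideo")

# Assumed names from the first guide; adjust to your deployment.
stack = cfn.describe_stacks(StackName="aws-serverless-telepresence-robot")["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
channel = kvs.describe_signaling_channel(ChannelName="<RobotName>")["ChannelInfo"]

with open("config.json", "w") as f:
    json.dump({"endpoint": outputs["ApiURL"], "channelARN": channel["ChannelARN"]}, f, indent=2)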
Running the application
You can build and run the application locally for testing purposes. It still uses the backend deployed in the cloud. Do this before publishing to production:
Inside the web-app directory, run the following command: npm run serve
Follow the onscreen steps to create a new account.
Choose Start Video. If the robot is active, a WebRTC connection is made and live video is displayed.
Use the onscreen arrow buttons to drive the robot.
Deploying a hosted application
Amplify makes it easy to deploy a hosted application. The following commands configure and deploy hosting resources in Amazon S3 and Amazon CloudFront. This allows you to securely and quickly deploy your application for production use.
Inside aws-serverless-telepresence-robot/web-app, run the following command. When prompted, select PROD; this configures the application to deploy using S3 and CloudFront.
amplify add hosting
Finally, this command builds and publishes all the backend and frontend resources for your Amplify project. On completion, it provides a URL to the hosted web application. Note that it can take a while for the CloudFront distribution to deploy.
amplify publish
With a few commands, the Amplify CLI is used to configure backend resources for a web frontend. Cognito is used as an identity provider. An Authorizer is created for an API Gateway endpoint, allowing authenticated users to send commands to the robot from the frontend. An IAM Role with a trusted relationship with the Cognito User Pool is given permissions to use Kinesis Video Signaling Channels, which are passed to the authenticated users. This allows the web frontend to open a live video connection to the telepresence robot using WebRTC.
After running and testing the application locally, I showed how the Amplify CLI can streamline configuring hosting and deploying a production web application using S3 and CloudFront. The result is a custom-built telepresence robot with a web application for viewing and operating it securely, all without managing servers.
A Pimoroni STS-Pi Robot Kit connected to AWS for remote control and viewing.
A telepresence robot allows you to explore remote environments from the comfort of your home through live stream video and remote control. These types of robots can improve the lives of the disabled, elderly, or those that simply cannot be with their coworkers or loved ones in person. Some are used to explore off-world terrain and others for search and rescue.
This guide walks through building a simple telepresence robot using a Pimoroni STS-Pi Raspberry Pi robot kit. A Raspberry Pi is a small, low-cost device that runs Linux. Add-on modules for Raspberry Pi are called “hats.” You can substitute this kit with any mobile platform that uses two motors wired to an Adafruit Motor Hat or a Pimoroni Explorer Hat.
The sample serverless application uses AWS Lambda and Amazon API Gateway to create a REST API for driving the robot. A Python application running on the robot uses AWS IoT Core to receive drive commands and authenticate with Amazon Kinesis Video Streams with WebRTC using an IoT Credentials Provider. In the next blog I walk through deploying a web frontend to both view the livestream and control the robot via the API.
Prerequisites
You need the following to complete the project:
A Pimoroni STS-Pi robot kit, Explorer Hat, Raspberry Pi, camera, and battery.
An AWS account. This project can be completed using the AWS Free Tier.
There are three major parts to this project. First deploy the serverless backend using the AWS Serverless Application Repository. Then assemble the robot and run an installer on the Raspberry Pi. Finally, configure and run the Python application on the robot to confirm it can be driven through the API and is streaming video.
Deploy the serverless application
In this section, use the Serverless Application Repository to deploy the backend resources for the robot. The resources to deploy are defined using the AWS Serverless Application Model (SAM), an open-source framework for building serverless applications using AWS CloudFormation. To better understand how this application is built, look at the SAM template in the GitHub repository.
The Python application that runs on the robot requires permissions to connect as an IoT Thing and subscribe to messages sent to a specific topic on the AWS IoT Core message broker. An IoT policy granting these permissions is created in the SAM template.
To transmit video, the Python application runs the amazon-kinesis-video-streams-webrtc-sdk-c sample in a subprocess. Instead of using separate credentials to authenticate with Kinesis Video Streams, a Role Alias policy is created so that IoT credentials can be used.
This role grants access to connect and transmit video over WebRTC using the Kinesis Video Streams signaling channel deployed by the serverless application.
A deployed API Gateway endpoint, when called with valid JSON, invokes a Lambda function that publishes to an IoT message topic, RobotName/action. The Python application on the robot subscribes to this topic and drives the motors based on any received message that maps to a command.
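A minimal version of such a Lambda function could look like this sketch, using the AWS IoT Data Plane API; the RobotName/action topic pattern follows the description above, the environment variable is hypothetical, and the event parsing assumes the API Gateway proxy integration.

import json
import os
import boto3

iot_data = boto3.client("iot-data")
ROBOT_NAME = os.environ.get("ROBOT_NAME", "my-robot")  # hypothetical environment variable

def handler(event, context):
    # With the Lambda proxy integration, the JSON request body arrives as a string.
    body = json.loads(event.get("body", "{}"))
    iot_data.publish(topic=f"{ROBOT_NAME}/action", qos=1, payload=json.dumps(body))
    return {"statusCode": 200, "body": json.dumps({"published": body})}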
On the next page, under Application Settings, fill out the parameter, RobotName.
Choose Deploy.
Once complete, choose View CloudFormation Stack.
Select the Outputs tab. Copy the ApiURL and the EndpointURL for use when configuring the robot.
Create and download the AWS IoT device certificate
The robot requires an AWS IoT root CA (fetched by the install script), certificate, and private key to authenticate with AWS IoT Core. The certificate and private key are not created by the serverless application since they can only be downloaded on creation. Create a new certificate and attach the IoT policy and Role Alias policy deployed by the serverless application.
Choose the Thing that corresponds with the name of the robot.
Under Security, choose Create certificate.
Choose Activate.
Download the Private Key and Thing Certificate. Save these securely, as this is the only time you can download this certificate.
Choose Attach Policy.
Two policies are created and must be attached. From the list, select both <RobotName>Policy and AliasPolicy-<AppName>.
Choose Done.
Flash an operating system to an SD card
The Raspberry Pi single-board Linux computer uses an SD card as the main file system storage. Raspbian Buster Lite is an officially supported Debian Linux operating system that must be flashed to an SD card. Balena.io has created an application called balenaEtcher for the sole purpose of accomplishing this safely.
Insert the SD card into your computer and run balenaEtcher.
Choose the Raspbian image. Choose Flash to burn the image to the SD card.
When flashing is complete, balenaEtcher automatically unmounts the SD card.
Configure Wi-Fi and SSH headless
Typically, a keyboard and monitor are used to configure Wi-Fi or to access the command line on a Raspberry Pi. Since it is on a mobile platform, configure the Raspberry Pi to connect to a Wi-Fi network and enable remote access headless by adding configuration files to the SD card.
Re-insert the SD card into your computer so that it mounts as a volume named boot.
Create a file in the boot volume of the SD card named wpa_supplicant.conf.
Paste in the following contents, substituting your Wi-Fi credentials.
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=<Insert country code here>
network={
ssid="<Name of your WiFi>"
psk="<Password for your WiFi>"
}
Create an empty file without a file extension in the boot volume named ssh. At boot, the Raspbian operating system looks for this file and enables remote access if it exists. This can be done from a command line:
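For example, with the boot volume as your working directory:
touch ssh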
Since the installation may take some time, power the Raspberry Pi using a USB 5V power supply connected to a wall plug rather than a battery.
Connect remotely using SSH
Use your computer to gain remote command line access of the Raspberry Pi using SSH. Both devices must be on the same network.
Open a terminal application with SSH installed. SSH is already built into Linux and macOS; to enable SSH on Windows, follow these instructions.
Enter the following to begin a secure shell session as user pi on the default local hostname raspberrypi, which resolves to the IP address of the device using MDNS:
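ssh pi@raspberrypi.local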
If prompted to add an SSH key to the list of known hosts, type yes.
When prompted for a password, type raspberry. This is the default password and can be changed using the raspi-config utility.
Upon successful login, you now have shell access to your Raspberry Pi device.
Enable the camera using raspi-config
A built-in utility, raspi-config, provides an easy-to-use interface for configuring Raspbian. You must enable the camera module, along with I2C, a serial bus used for communicating with the motor driver.
In an open SSH session, type the following to open the raspi-config utility:
sudo raspi-config
Using the arrows, choose Interfacing Options.
Choose Camera. When prompted, choose Yes to enable the camera module.
While the script installs, proceed to the next section.
Configure the code
The Python application on the robot subscribes to AWS IoT Core to receive messages. It requires the certificate and private key created for the IoT thing to authenticate. These files must be copied to the directory where the Python application is stored on the Raspberry Pi.
It also requires that the IoT credentials provider endpoint be added to the file config.json so the application can assume the permissions necessary to transmit video to Amazon Kinesis Video Streams.
Open an SSH session on the Raspberry Pi.
Open the certificate.pem file with the nano text editor and paste in the contents of the certificate downloaded earlier.
Provide the following information:
IOT_THINGNAME: The name of your robot, as set in the serverless application.
IOT_CORE_ENDPOINT: This is found under the Settings page in the AWS IoT Core console.
IOT_GET_CREDENTIAL_ENDPOINT: Provided by the serverless application.
ROLE_ALIAS: This is already set to match the Role Alias deployed by the serverless application.
AWS_DEFAULT_REGION: Corresponds to the Region the application is deployed in.
Save the file using CTRL+X and Y.
To start the robot, run the command:
python3 main.py
To stop the script, press CTRL+C.
View the Kinesis video stream
The following steps create a WebRTC connection with the robot to view the live stream.
Choose the channel that corresponds with the name of your robot.
Open the Media Playback card.
After a moment, a WebRTC peer-to-peer connection is negotiated and live video is displayed.
Sending drive commands
The serverless backend includes an Amazon API Gateway REST endpoint that publishes JSON messages to the Python script on the robot.
The robot expects a JSON message of the form:
{ "action": "<direction>" }
Where <direction> can be "forward", "backwards", "left", or "right".
While the Python script is running on the robot, open another terminal window.
Run this command to tell the robot to drive forward. Replace <API-URL> using the endpoint listed under Outputs in the CloudFormation stack for the serverless application.
curl -d '{"action":"forward"}' -H "Content-Type: application/json" -X POST https://<API-URL>/publish
Conclusion
In this post, I show how to build and program a telepresence robot with remote control and a live video feed in the cloud. I did this by installing a Python application on a Raspberry Pi robot and deploying a serverless application.
The Python application uses AWS IoT credentials to receive remote commands from the cloud and transmit live video using Kinesis Video Streams with WebRTC. The serverless application deploys a REST endpoint using API Gateway and a Lambda function. Any application that can connect to the endpoint can drive the robot.
In part two, I build on this project by deploying a web interface for the robot using AWS Amplify.
A preview of the web frontend built in the next blog.
Join us this month to learn about AWS services and solutions. New this month, we have a fireside chat with the GM of Amazon WorkSpaces and our 2nd episode of the “How to re:Invent” series. We’ll also cover best practices, deep dives, use cases and more! Join us and register today!
AWS re:Invent
June 13, 2018 | 05:00 PM – 05:30 PM PT – Episode 2: AWS re:Invent Breakout Content Secret Sauce – Hear from one of our own AWS content experts as we dive deep into the re:Invent content strategy and how we maintain a high bar.
Containers
June 25, 2018 | 09:00 AM – 09:45 AM PT – Running Kubernetes on AWS – Learn about the basics of running Kubernetes on AWS, including how to set up masters, networking, and security, and how to add auto-scaling to your cluster.
June 19, 2018 | 11:00 AM – 11:45 AM PT – Launch AWS Faster using Automated Landing Zones – Learn how the AWS Landing Zone can automate the setup of best-practice baselines when setting up new AWS accounts.
June 21, 2018 | 01:00 PM – 01:45 PM PT – Enabling New Retail Customer Experiences with Big Data – Learn how AWS can help retailers realize actual value from their big data and deliver on differentiated retail customer experiences.
June 28, 2018 | 01:00 PM – 01:45 PM PT – Fireside Chat: End User Collaboration on AWS – Learn how End User Compute services can help you deliver access to desktops and applications anywhere, anytime, using any device.
IoT
June 27, 2018 | 11:00 AM – 11:45 AM PT – AWS IoT in the Connected Home – Learn how to use AWS IoT to build innovative Connected Home products.
Mobile
June 25, 2018 | 11:00 AM – 11:45 AM PT – Drive User Engagement with Amazon Pinpoint – Learn how Amazon Pinpoint simplifies and streamlines effective user engagement.
June 26, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway – Learn how you can reduce your on-premises infrastructure by using the AWS Storage Gateway to connect your applications to the scalable and reliable AWS storage services.
June 27, 2018 | 01:00 PM – 01:45 PM PT – Changing the Game: Extending Compute Capabilities to the Edge – Discover how to change the game for IIoT and edge analytics applications with AWS Snowball Edge plus enhanced Compute instances.
June 28, 2018 | 11:00 AM – 11:45 AM PT – Big Data and Analytics Workloads on Amazon EFS – Get best practices and deployment advice for running big data and analytics workloads on Amazon EFS.
We announced a preview of AWS IoT 1-Click at AWS re:Invent 2017 and have been refining it ever since, focusing on simplicity and a clean out-of-box experience. Designed to make IoT available and accessible to a broad audience, AWS IoT 1-Click is now generally available, along with new IoT buttons from AWS and AT&T.
I sat down with the dev team a month or two ago to learn about the service so that I could start thinking about my blog post. During the meeting they gave me a pair of IoT buttons and I started to think about some creative ways to put them to use. Here are a few that I came up with:
Help Request – Earlier this month I spent a very pleasant weekend at the HackTillDawn hackathon in Los Angeles. As the participants were hacking away, they occasionally had questions about AWS, machine learning, Amazon SageMaker, and AWS DeepLens. While we had plenty of AWS Solution Architects on hand (decked out in fashionable & distinctive AWS shirts for easy identification), I imagined an IoT button for each team. Pressing the button would alert the SA crew via SMS and direct them to the proper table.
Camera Control – Tim Bray and I were in the AWS video studio, prepping for the first episode of Tim’s series on AWS Messaging. Minutes before we opened the Twitch stream I realized that we did not have a clean, unobtrusive way to ask the camera operator to switch to a closeup view. Again, I imagined that a couple of IoT buttons would allow us to make the request.
Remote Dog Treat Dispenser – My dog barks every time a stranger opens the gate in front of our house. While it is great to have confirmation that my Ring doorbell is working, I would like to be able to press a button and dispense a treat so that Luna stops barking!
Homes, offices, factories, schools, vehicles, and health care facilities can all benefit from IoT buttons and other simple IoT devices, all managed using AWS IoT 1-Click.
All About AWS IoT 1-Click
As I said earlier, we have been focusing on simplicity and a clean out-of-box experience. Here’s what that means:
Architects can dream up applications for inexpensive, low-powered devices.
Developers don’t need to write any device-level code. They can make use of pre-built actions, which send email or SMS messages, or write their own custom actions using AWS Lambda functions.
Installers don’t have to install certificates or configure cloud endpoints on newly acquired devices, and don’t have to worry about firmware updates.
Administrators can monitor the overall status and health of each device, and can arrange to receive alerts when a device nears the end of its useful life and needs to be replaced, using a single interface that spans device types and manufacturers.
I’ll show you how easy this is in just a moment. But first, let’s talk about the current set of devices that are supported by AWS IoT 1-Click.
Who’s Got the Button?
We’re launching with support for two types of buttons (both pictured above). Both types of buttons are pre-configured with X.509 certificates, communicate to the cloud over secure connections, and are ready to use.
The AWS IoT Enterprise Button communicates via Wi-Fi. It has a 2000-click lifetime, encrypts outbound data using TLS, and can be configured using BLE and our mobile app. It retails for $19.99 (shipping and handling not included) and can be used in the United States, Europe, and Japan.
The AT&T LTE-M Button communicates via the LTE-M cellular network. It has a 1500-click lifetime, and also encrypts outbound data using TLS. The device and the bundled data plan are available at an introductory price of $29.99 (shipping and handling not included), and it can be used in the United States.
We are very interested in working with device manufacturers in order to make even more shapes, sizes, and types of devices (badge readers, asset trackers, motion detectors, and industrial sensors, to name a few) available to our customers. Our team will be happy to tell you about our provisioning tools and our facility for pushing OTA (over the air) updates to large fleets of devices; you can contact them at [email protected].
AWS IoT 1-Click Concepts
I’m eager to show you how to use AWS IoT 1-Click and the buttons, but need to introduce a few concepts first.
Device – A button or other item that can send messages. Each device is uniquely identified by a serial number.
Placement Template – Describes a like-minded collection of devices to be deployed. Specifies the action to be performed and lists the names of custom attributes for each device.
Placement – A device that has been deployed. Referring to placements instead of devices gives you the freedom to replace and upgrade devices with minimal disruption. Each placement can include values for custom attributes such as a location (“Building 8, 3rd Floor, Room 1337”) or a purpose (“Coffee Request Button”).
Action – The AWS Lambda function to invoke when the button is pressed. You can write a function from scratch, or you can make use of a pair of predefined functions that send an email or an SMS message. The actions have access to the attributes; you can, for example, send an SMS message with the text “Urgent need for coffee in Building 8, 3rd Floor, Room 1337.”
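To sketch what a custom action could look like, the Lambda function below reads the placement attributes from the 1-Click event and publishes a message via Amazon SNS instead of the built-in SMS action; the topic ARN and attribute names are assumptions for illustration.

import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:coffee-requests"  # hypothetical topic

def handler(event, context):
    # AWS IoT 1-Click invokes the action with device, event, and placement details.
    attrs = event["placementInfo"]["attributes"]
    click = event["deviceEvent"]["buttonClicked"]["clickType"]  # SINGLE, DOUBLE, or LONG
    message = ("Urgent need for coffee in Building {Building}, "
               "{Floor} Floor, Room {Room} ({click} click)").format(click=click, **attrs)
    sns.publish(TopicArn=TOPIC_ARN, Message=message)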
Getting Started with AWS IoT 1-Click
Let’s set up an IoT button using the AWS IoT 1-Click Console:
If I didn’t have any buttons I could click Buy devices to get some. But, I do have some, so I click Claim devices to move ahead. I enter the device ID or claim code for my AT&T button and click Claim (I can enter multiple claim codes or device IDs if I want):
The AWS buttons can be claimed using the console or the mobile app; the first step is to use the mobile app to configure the button to use my Wi-Fi:
Then I scan the barcode on the box and click the button to complete the process of claiming the device. Both of my buttons are now visible in the console:
I am now ready to put them to use. I click on Projects, and then Create a project:
I name and describe my project, and click Next to proceed:
Now I define a device template, along with names and default values for the placement attributes. Here’s how I set up a device template (projects can contain several, but I just need one):
The action has two mandatory parameters (phone number and SMS message) built in; I add three more (Building, Room, and Floor) and click Create project:
I’m almost ready to ask for some coffee! The next step is to associate my buttons with this project by creating a placement for each one. I click Create placements to proceed. I name each placement, select the device to associate with it, and then enter values for the attributes that I established for the project. I can also add additional attributes that are peculiar to this placement:
I can inspect my project and see that everything looks good:
I click on the buttons and the SMS messages appear:
I can monitor device activity in the AWS IoT 1-Click Console:
And also in the Lambda Console:
The Lambda function itself is also accessible, and can be used as-is or customized:
As you can see, this is the code that lets me use {{*}} to include all of the placement attributes in the message, and {{Building}} (for example) to include a specific placement attribute.
Now Available
I’ve barely scratched the surface of this cool new service and I encourage you to give it a try (or a click) yourself. Buy a button or two, build something cool, and let me know all about it!
Pricing is based on the number of enabled devices in your account, measured monthly and pro-rated for partial months. Devices can be enabled or disabled at any time. See the AWS IoT 1-Click Pricing page for more info.
I’m happy to announce that Sumerian is now generally available. You can create realistic virtual environments and scenes without having to acquire or master specialized tools for 3D modeling, animation, lighting, audio editing, or programming. Once built, you can deploy your finished creation across multiple platforms without having to write custom code or deal with specialized deployment systems and processes.
Sumerian gives you a web-based editor that you can use to quickly and easily create realistic, professional-quality scenes. There’s a visual scripting tool that lets you build logic to control how objects and characters (Sumerian Hosts) respond to user actions. Sumerian also lets you create rich, natural interactions powered by AWS services such as Amazon Lex, Polly, AWS Lambda, AWS IoT, and Amazon DynamoDB.
Sumerian was designed to work on multiple platforms. The VR and AR apps that you create in Sumerian will run in browsers that support WebGL or WebVR and on popular devices such as the Oculus Rift, HTC Vive, and those powered by iOS or Android.
During the preview period, we have been working with a broad spectrum of customers to put Sumerian to the test and to create proof of concept (PoC) projects designed to highlight an equally broad spectrum of use cases, including employee education, training simulations, field service productivity, virtual concierge, design and creative, and brand engagement. Fidelity Labs (the internal R&D unit of Fidelity Investments) was the first to use a Sumerian host to create an engaging VR experience. Cora (the host) lives within a virtual chart room. She can display stock quotes, pull up company charts, and answer questions about a company’s performance. This PoC uses Amazon Polly for text-to-speech and Amazon Lex for conversational chatbot functionality. Read their blog post and watch the video inside to see Cora in action:
Now that Sumerian is generally available, you have the power to create engaging AR, VR, and 3D experiences of your own. To learn more, visit the Amazon Sumerian home page and then spend some quality time with our extensive collection of Sumerian Tutorials.
Join us this month to learn about some of the exciting new services and solution best practices at AWS. We also have our first re:Invent 2018 webinar series, “How to re:Invent”. Sign up now to learn more; we look forward to seeing you.
Note – All sessions are free and in Pacific Time.
Tech talks featured this month:
Analytics & Big Data
May 21, 2018 | 11:00 AM – 11:45 AM PT – Integrating Amazon Elasticsearch with your DevOps Tooling – Learn how you can easily integrate Amazon Elasticsearch Service into your DevOps tooling and gain valuable insight from your log data.
May 24, 2018 | 11:00 AM – 11:45 AM PT – Data Transformation Patterns in AWS – Discover how to perform common data transformations on the AWS Data Lake.
May 30, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Life Sciences with HPC on AWS – Learn how you can accelerate your Life Sciences research workloads by harnessing the power of high performance computing on AWS.
Containers
May 24, 2018 | 01:00 PM – 01:45 PM PT – Building Microservices with the 12 Factor App Pattern on AWS – Learn best practices for building containerized microservices on AWS, and how traditional software design patterns evolve in the context of containers.
Databases
May 21, 2018 | 01:00 PM – 01:45 PM PT – How to Migrate from Cassandra to Amazon DynamoDB – Get the benefits, best practices and guides on how to migrate your Cassandra databases to Amazon DynamoDB.
May 23, 2018 | 01:00 PM – 01:45 PM PT – 5 Hacks for Optimizing MySQL in the Cloud – Learn how to optimize your MySQL databases for high availability, performance, and disaster resilience using RDS.
DevOps
May 23, 2018 | 09:00 AM – 09:45 AM PT – .NET Serverless Development on AWS – Learn how to build a modern serverless application in .NET Core 2.0.
Enterprise & Hybrid
May 22, 2018 | 11:00 AM – 11:45 AM PT – Hybrid Cloud Customer Use Cases on AWS – Learn how customers are leveraging AWS hybrid cloud capabilities to easily extend their datacenter capacity, deliver new services and applications, and ensure business continuity and disaster recovery.
IoT
May 31, 2018 | 11:00 AM – 11:45 AM PT – Using AWS IoT for Industrial Applications – Discover how you can quickly onboard your fleet of connected devices, keep them secure, and build predictive analytics with AWS IoT.
Machine Learning
May 22, 2018 | 09:00 AM – 09:45 AM PT – Using Apache Spark with Amazon SageMaker – Discover how to use Apache Spark with Amazon SageMaker for training jobs and application integration.
May 24, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS DeepLens – Learn how AWS DeepLens provides a new way for developers to learn machine learning by pairing the physical device with a broad set of tutorials, examples, source code, and integration with familiar AWS services.
Security, Identity & Compliance
May 30, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Certificate Manager Private Certificate Authority (CA) – Learn how AWS Certificate Manager (ACM) Private Certificate Authority (CA), a managed private CA service, helps you easily and securely manage the lifecycle of your private certificates.
June 1, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Firewall Manager – Centrally configure and manage AWS WAF rules across your accounts and applications.
Storage
May 30, 2018 | 11:00 AM – 11:45 AM PT – Accelerate Productivity by Computing at the Edge – Learn how AWS Snowball Edge support for compute instances helps accelerate data transfers, execute custom applications, and reduce overall storage costs.
Today, I’m pleased to announce that, as of April 24, 2018, the AWS IoT Analytics service is generally available. Customers can use IoT Analytics to clean, process, enrich, store, and analyze their connected device data at scale. AWS IoT Analytics is now available in US East (N. Virginia), US West (Oregon), US East (Ohio), and EU (Ireland). In November of last year, my colleague Tara Walker wrote an excellent post that walks through some of the features of the AWS IoT Analytics service, and Ben Kehoe (an AWS Community Hero and Research Scientist at iRobot) spoke at AWS re:Invent about replacing iRobot’s existing “Rube Goldberg machine” for forwarding data into an Elasticsearch cluster with AWS IoT Analytics.
Iterating on customer feedback received during the service preview, the AWS IoT Analytics team has added a number of new features, including the ability to ingest data from external sources using the BatchPutMessage API, set a data retention policy on stored data, reprocess existing data, preview pipeline results, and preview messages from channels with the SampleChannelData API.
Let’s cover the core concepts of IoT Analytics and then walk through an example.
AWS IoT Analytics Concepts
AWS IoT Analytics can be broken down into a few simple concepts. For data preparation customers have: Channels, Pipelines, and Data Stores. For analyzing data customers have: Datasets and Notebooks.
Data Preparation
Channels are the entry point into IoT Analytics. They collect data from an existing IoT Core MQTT topic or from external sources that send messages to the channel using the BatchPutMessage API. Channels are elastically scalable and consume messages in binary or JSON format. Channels also immutably store raw device data for easy reprocessing with different logic if your needs change.
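For example, an external producer could push records into a channel with the BatchPutMessage API via boto3; the channel name and payload fields are placeholders.

import json
import boto3

iota = boto3.client("iotanalytics")

iota.batch_put_message(
    channelName="my_channel",  # placeholder channel name
    messages=[{
        "messageId": "msg-001",  # must be unique within the batch
        "payload": json.dumps({"deviceId": "sensor-42", "temperature": 22.5}).encode(),
    }],
)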
Pipelines consume messages from channels and allow you to process them with steps, called activities, such as filtering on attributes, transforming the content of the message by adding or removing fields, invoking Lambda functions for complex transformations and adding data from external data sources, or even enriching the messages with data from IoT Core. Pipelines output their data to a Data Store.
Data Stores are a queryable IoT-optimized data storage solution for the output of your pipelines. Data stores support custom retention periods to optimize costs. When a customer queries a Data Store the result is put into a Dataset.
Data Analytics
Datasets are similar to a view in a SQL database. Customers create a dataset by running a query against a data store. Data sets can be generated manually or on a recurring schedule.
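Creating a scheduled dataset from the SDK is a single call, sketched below; the names and the SQL are illustrative.

import boto3

iota = boto3.client("iotanalytics")

iota.create_dataset(
    datasetName="daily_temperatures",
    actions=[{
        "actionName": "query",
        "queryAction": {"sqlQuery": "SELECT deviceId, avg(temperature) AS avg_temp "
                                    "FROM my_datastore GROUP BY deviceId"},
    }],
    # Re-run every day at noon UTC; omit triggers to generate the dataset manually.
    triggers=[{"schedule": {"expression": "cron(0 12 * * ? *)"}}],
)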
Notebooks are Amazon SageMaker hosted Jupyter notebooks that let customers analyze their data with custom code and even build or train ML models on the data. IoT Analytics offers several notebook templates with pre-authored models for common IoT use cases such as Predictive Maintenance, Anomaly Detection, Fleet Segmentation, and Forecasting.
Additionally, you can use IoT Analytics as a data source for Amazon QuickSight for easy visualization of your data. You can find pricing information for each of these services on the AWS IoT Analytics Pricing Page.
IoT Analytics Walkthrough
While this walkthrough uses the console, everything shown here is equally easy to do with the CLI. When we first navigate to the console, a helpful guide tells us to build a channel, a pipeline, and a data store. Our first step is to create a channel. I already have some data arriving on an MQTT topic in IoT Core, so I’ll select that topic. First, we name the channel and select a retention period.
Now, I’ll select my IoT Core topic and grab the data. I can also post messages directly into the channel with the BatchPutMessage API.
Now that I have a channel my next step is to create a pipeline. To do this I’ll select “Create a pipeline from this channel” from the “Actions” drop down.
Now, I’ll walk through the pipeline wizard giving my pipeline a name and a source.
I’ll select which of the message attributes the pipeline should expect. The console can sample the channel and guess at which attributes are needed, or I can upload a specification in JSON.
Next, I define the pipeline activities. If I’m dealing with binary data, I need a Lambda function to first deserialize the message into JSON so the other activities can operate on it. I can create filters, calculate attributes based on other attributes, and enrich the message with metadata from the IoT Core registry.
For now I just want to filter out some messages and make a small transform with a Lambda function.
Finally, I choose or create a data store to output the results of my pipeline.
Now that I have a data store, I can create a view of that data by creating a data set.
I’ll just select all the data from the data store for this dataset but I could also select individual attributes as needed.
I have a data set! I can adjust the cron expression in the schedule to re-run this as frequently or infrequently as I wish.
If I want to create a model from my data I can create a SageMaker powered Jupyter notebook. There are a few templates that are great starting points like anomaly detection or output forecasting.
Here you can see an example of the anomaly detection notebook.
Finally, if I want to create simple visualizations of my data I can use QuickSight to bring in an IoT Analytics data set.
Let Us Know
I’m excited to see what customers build with AWS IoT Analytics. My colleagues on the IoT teams are eager to hear your feedback about the service so please let us know in the comments or on Twitter what features you want to see.
The Internet of Things (IoT) has precipitated an influx of connected devices and data that can be mined to gain useful business insights. If you own an IoT device, you might want the data to be uploaded seamlessly from your connected devices to the cloud so that you can make use of cloud storage and processing power to perform sophisticated analysis of data. To upload the data to the AWS Cloud, devices must pass authentication and authorization checks performed by the respective AWS services. The standard way of authenticating AWS requests is the Signature Version 4 algorithm, which requires the caller to have an access key ID and secret access key. Consequently, you would need to hardcode the access key ID and the secret access key on your devices. Alternatively, you can use the built-in X.509 certificate as the unique device identity to authenticate AWS requests.
AWS IoT has introduced the credentials provider feature that allows a caller to authenticate AWS requests by having an X.509 certificate. The credentials provider authenticates a caller using an X.509 certificate, and vends a temporary, limited-privilege security token. The token can be used to sign and authenticate any AWS request. Thus, the credentials provider relieves you from having to manage and periodically refresh the access key ID and secret access key remotely on your devices.
In the process of retrieving a security token, you use AWS IoT to create a thing (a representation of a specific device or logical entity), register a certificate, and create AWS IoT policies. You also configure an AWS Identity and Access Management (IAM) role and attach appropriate IAM policies to the role so that the credentials provider can assume the role on your behalf. You also make an HTTP-over-Transport Layer Security (TLS) mutual authentication request to the credentials provider that uses your preconfigured thing, certificate, policies, and IAM role to authenticate and authorize the request, and obtain a security token on your behalf. You can then use the token to sign any AWS request using Signature Version 4.
In this blog post, I explain the AWS IoT credentials provider design and then demonstrate the end-to-end process of retrieving a security token from AWS IoT and using the token to write a temperature and humidity record to a specific Amazon DynamoDB table.
Note: This post assumes you are familiar enough with AWS IoT and IAM to perform steps using the AWS CLI and OpenSSL. Make sure you are running the latest version of the AWS CLI.
Overview of the credentials provider workflow
The following numbered diagram illustrates the credentials provider workflow. The diagram is followed by explanations of the steps.
To explain the steps of the workflow as illustrated in the preceding diagram:
The AWS IoT device uses the AWS SDK or custom client to make an HTTPS request to the credentials provider for a security token. The request includes the device X.509 certificate for authentication.
The credentials provider forwards the request to the AWS IoT authentication and authorization module to verify the certificate and the permission to request the security token.
If the certificate is valid and has permission to request a security token, the AWS IoT authentication and authorization module returns success. Otherwise, it returns failure, which goes back to the device with the appropriate exception.
If the certificate is validated, the credentials provider invokes AWS Security Token Service (AWS STS) to assume the IAM role associated with the requested role alias.
If assuming the role succeeds, AWS STS returns a temporary, limited-privilege security token to the credentials provider.
The credentials provider returns the security token to the device.
The AWS SDK on the device uses the security token to sign an AWS request with AWS Signature Version 4.
The requested service invokes IAM to validate the signature and authorize the request against access policies attached to the preconfigured IAM role.
If IAM validates the signature successfully and authorizes the request, the request goes through.
In another solution, you could configure an AWS IoT rule that invokes an AWS Lambda function to ingest your device data and send it to another AWS service. However, in applications that require the uploading of large files such as videos or aggregated telemetry to the AWS Cloud, you may want your devices to be able to authenticate and send data directly to the AWS service of your choice. The credentials provider enables you to do that.
Outline of the steps to retrieve and use security token
Perform the following steps as part of this solution:
Create an AWS IoT thing: Start by creating a thing that corresponds to your home thermostat in the AWS IoT thing registry database. This allows you to authenticate the request as a thing and use thing attributes as policy variables in AWS IoT and IAM policies.
Register a certificate: Create and register a certificate with AWS IoT, and attach it to the thing for successful device authentication.
Create and configure an IAM role: Create an IAM role to be assumed by the service on behalf of your device. I illustrate how to configure a trust policy and an access policy so that AWS IoT has permission to assume the role, and the token has necessary permission to make requests to DynamoDB.
Create a role alias: Create a role alias in AWS IoT. A role alias is an alternate data model pointing to an IAM role. The credentials provider request must include a role alias name to indicate which IAM role to assume for obtaining a security token from AWS STS. You may update the role alias on the server to point to a different IAM role and thus make your device obtain a security token with different permissions.
Attach a policy: Create an authorization policy with AWS IoT and attach it to the certificate to control which device can assume which role aliases.
Request a security token: Make an HTTPS request to the credentials provider and retrieve a security token.
Use the security token to sign a request: Use the retrieved token to sign a request to DynamoDB and successfully write a temperature and humidity record from your home thermostat in a specific table. Thus, starting with an X.509 certificate on your home thermostat, you can successfully upload your thermostat record to DynamoDB and use it for further analysis. Before the availability of the credentials provider, you could not do this.
Deploy the solution
1. Create an AWS IoT thing
Register your home thermostat in the AWS IoT thing registry database by creating a thing type and a thing. You can use the AWS CLI with the following command to create a thing type. The thing type allows you to store description and configuration information that is common to a set of things.
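A representative pair of commands follows. The thing type name thermostat and the Owner attribute are illustrative (the Owner attribute is used later as a policy variable); the thing name MyHomeThermostat is the one used throughout this post.

aws iot create-thing-type --thing-type-name thermostat
aws iot create-thing --thing-name MyHomeThermostat --thing-type-name thermostat --attribute-payload '{"attributes": {"Owner": "Alice"}}'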
2. Register a certificate
Now, you need to have a Certificate Authority (CA) certificate, sign a device certificate using the CA certificate, and register both certificates with AWS IoT before your device can authenticate to AWS IoT. If you do not already have a CA certificate, you can use OpenSSL to create a CA certificate, as described in Use Your Own Certificate. To register your CA certificate with AWS IoT, follow the steps on Registering Your CA Certificate.
You then have to create a device certificate signed by the CA certificate and register it with AWS IoT, which you can do by following the steps on Creating a Device Certificate Using Your CA Certificate. Save the certificate and the corresponding key pair; you will use them when you request a security token later. Also, remember the password you provide when you create the certificate.
Run the following command in the AWS CLI to attach the device certificate to your thing so that you can use thing attributes in policy variables.
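The command takes the thing name and the ARN of the device certificate you registered in the previous step:

aws iot attach-thing-principal --thing-name MyHomeThermostat --principal arn:aws:iot:us-east-1:<your_aws_account_id>:cert/<certificate_id>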
If the attach-thing-principal command succeeds, the output is empty.
3. Configure an IAM role
Next, configure an IAM role in your AWS account that will be assumed by the credentials provider on behalf of your device. You are required to associate two policies with the role: a trust policy that controls who can assume the role, and an access policy that controls which actions can be performed on which resources by assuming the role.
The following trust policy grants the credentials provider permission to assume the role. Put it in a text document and save the document with the name, trustpolicyforiot.json.
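A minimal sketch of the trust policy follows; it allows the AWS IoT credentials provider service principal to assume the role.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "credentials.iot.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}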
The following access policy allows DynamoDB operations on the table that has the same name as the thing name that you created in Step 1, MyHomeThermostat, by using credentials-iot:ThingName as a policy variable. I explain after Step 5 about using thing attributes as policy variables. Put the following policy in a text document and save the document with the name, accesspolicyfordynamodb.json.
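A sketch of the access policy follows; it allows writing items to the DynamoDB table whose name matches the connected thing’s name. The region and the PutItem-only scope are assumptions you can broaden as needed.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:PutItem",
      "Resource": "arn:aws:dynamodb:us-east-1:<your_aws_account_id>:table/${credentials-iot:ThingName}"
    }
  ]
}

With both documents saved, create the role with the trust policy and create a managed policy from the access policy document (the role name dynamodb-access-role matches the attach-role-policy command that follows):

aws iam create-role --role-name dynamodb-access-role --assume-role-policy-document file://trustpolicyforiot.json
aws iam create-policy --policy-name accesspolicyfordynamodb --policy-document file://accesspolicyfordynamodb.json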
Finally, run the following command in the AWS CLI to attach the access policy to your role.
aws iam attach-role-policy --role-name dynamodb-access-role --policy-arn arn:aws:iam::<your_aws_account_id>:policy/accesspolicyfordynamodb
If the attach-role-policy command succeeds, the output is empty.
Configure the PassRole permissions
The IAM role that you have created must be passed to AWS IoT to create a role alias, as described in Step 4. The user who performs the operation requires iam:PassRole permission to authorize this action. You also should add permission for the iam:GetRole action to allow the user to retrieve information about the specified role. Create the following policy to grant iam:PassRole and iam:GetRole permissions. Name this policy, passrolepermission.json.
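A sketch of the policy document follows, scoped to the role created in Step 3. After saving it, create a managed policy from the document so that it can be attached to the user.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:GetRole",
        "iam:PassRole"
      ],
      "Resource": "arn:aws:iam::<your_aws_account_id>:role/dynamodb-access-role"
    }
  ]
}

aws iam create-policy --policy-name passrolepermission --policy-document file://passrolepermission.json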
Now, run the following command to attach the policy to the user.
aws iam attach-user-policy --policy-arn arn:aws:iam::<your_aws_account_id>:policy/passrolepermission --user-name <user_name>
If the attach-user-policy command succeeds, the output is empty.
4. Create a role alias
Now that you have configured the IAM role, you will create a role alias with AWS IoT. You must provide the following pieces of information when creating a role alias:
RoleAlias: This is the primary key of the role alias data model and hence a mandatory attribute. It is a string; the minimum length is 1 character, and the maximum length is 128 characters.
RoleArn: This is the Amazon Resource Name (ARN) of the IAM role you have created. This is also a mandatory attribute.
CredentialDurationSeconds: This is an optional attribute specifying the validity (in seconds) of the security token. The minimum value is 900 seconds (15 minutes), and the maximum value is 3,600 seconds (60 minutes); the default value is 3,600 seconds, if not specified.
Run the following command in the AWS CLI to create a role alias. Use the credentials of the user to whom you have given the iam:PassRole permission.
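A representative command follows; the role alias name Thermostat-dynamodb-access-role-alias is the one used in the policy in the next step, and the one-hour duration shown is the default.

aws iot create-role-alias --role-alias Thermostat-dynamodb-access-role-alias --role-arn arn:aws:iam::<your_aws_account_id>:role/dynamodb-access-role --credential-duration-seconds 3600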
5. Attach a policy
You created and registered a certificate with AWS IoT earlier for successful authentication of your device. Now, you need to create and attach a policy to the certificate to authorize the request for the security token.
Let’s say you want to allow a thing to get credentials for the role alias, Thermostat-dynamodb-access-role-alias, with thing owner Alice, thing type thermostat, and the thing attached to a principal. The following policy, with thing attributes as policy variables, achieves these requirements. After this step, I explain more about using thing attributes as policy variables. Put the policy in a text document, and save it with the name, alicethermostatpolicy.json.
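A sketch of such a policy follows. It allows the iot:AssumeRoleWithCertificate action on the role alias only when the connecting thing has the expected owner attribute and thing type and is attached to the certificate; the condition keys are documented thing policy variables, and the policy name used in the commands below is illustrative.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:AssumeRoleWithCertificate",
      "Resource": "arn:aws:iot:us-east-1:<your_aws_account_id>:rolealias/Thermostat-dynamodb-access-role-alias",
      "Condition": {
        "StringEquals": {
          "iot:Connection.Thing.Attributes[Owner]": "Alice",
          "iot:Connection.Thing.ThingTypeName": "thermostat"
        },
        "Bool": {
          "iot:Connection.Thing.IsAttached": "true"
        }
      }
    }
  ]
}

Create the policy and attach it to the device certificate:

aws iot create-policy --policy-name alicethermostatpolicy --policy-document file://alicethermostatpolicy.json
aws iot attach-policy --policy-name alicethermostatpolicy --target arn:aws:iot:us-east-1:<your_aws_account_id>:cert/<certificate_id>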
If the attach-policy command succeeds, the output is empty.
You have completed all the necessary steps to request an AWS security token from the credentials provider!
Using thing attributes as policy variables
Before I show how to request a security token, I want to explain more about how to use thing attributes as policy variables and the advantage of using them. As a prerequisite, a device must provide a thing name in the credentials provider request.
Thing substitution variables in AWS IoT policies
AWS IoT Simplified Permission Management allows you to associate a connection with a specific thing, and allow the thing name, thing type, and other thing attributes to be available as substitution variables in AWS IoT policies. You can write a generic AWS IoT policy as in alicethermostatpolicy.json in Step 5, attach it to multiple certificates, and authorize the connection as a thing. For example, you could attach alicethermostatpolicy.json to certificates corresponding to each of the thermostats you have that you want to assume the role alias, Thermostat-dynamodb-access-role-alias, and allow operations only on the table with the name that matches the thing name. For more information, see the full list of thing policy variables.
Thing substitution variables in IAM policies
You also can use the following three substitution variables in the IAM role’s access policy (I used credentials-iot:ThingName in accesspolicyfordynamodb.json in Step 3):
credentials-iot:ThingName
credentials-iot:ThingTypeName
credentials-iot:AwsCertificateId
When the device provides the thing name in the request, the credentials provider fetches these three variables from the database and adds them as context variables to the security token. When the device uses the token to access DynamoDB, the variables in the role’s access policy are replaced with the corresponding values in the security token. Note that you also can use credentials-iot:AwsCertificateId as a policy variable; AWS IoT returns certificateId during registration.
6. Request a security token
Make an HTTPS request to the credentials provider to fetch a security token. You have to supply the following information:
Certificate and key pair: Because this is an HTTP request over TLS mutual authentication, you have to provide the certificate and the corresponding key pair to your client while making the request. Use the same certificate and key pair that you used during certificate registration with AWS IoT.
RoleAlias: Provide the role alias (in this example, Thermostat-dynamodb-access-role-alias) to be assumed in the request.
ThingName: Provide the thing name that you created earlier in the AWS IoT thing registry database. This is passed as a header with the name, x-amzn-iot-thingname. Note that the thing name is mandatory only if you have thing attributes as policy variables in AWS IoT or IAM policies.
Run the following command in the AWS CLI to obtain your AWS account-specific endpoint for the credentials provider. See the DescribeEndpoint API documentation for further details.
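The credentials provider has a dedicated endpoint type, so pass iot:CredentialProvider explicitly:

aws iot describe-endpoint --endpoint-type iot:CredentialProvider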
Note that if you are on Mac OS X, you need to export your certificate to a .pfx or .p12 file before you can pass it in the HTTPS request. Use OpenSSL with the following command to convert the device certificate from .pem to .pfx format. Remember the password because you will need it subsequently in a curl command.
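A representative conversion command (the certificate and key file names are illustrative):

openssl pkcs12 -export -in deviceCert.pem -inkey deviceCert.key -out deviceCert.pfx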
Now, make an HTTPS request to the credentials provider to fetch a security token. You may use your preferred HTTP client for the request. I use curl in the following examples.
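A request shaped like the following fetches the token; substitute the credentials provider endpoint returned by describe-endpoint, and note that the thing name travels in the x-amzn-iot-thingname header. The certificate and key file names are illustrative.

curl --cert deviceCert.pem --key deviceCert.key -H "x-amzn-iot-thingname: MyHomeThermostat" https://<your_credentials_provider_endpoint>/role-aliases/Thermostat-dynamodb-access-role-alias/credentials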
This command returns a security token object that has an accessKeyId, a secretAccessKey, a sessionToken, and an expiration. The following is sample output of the curl command.
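The response has roughly the following shape (key material elided):

{
  "credentials": {
    "accessKeyId": "...",
    "secretAccessKey": "...",
    "sessionToken": "...",
    "expiration": "..."
  }
}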
7. Use the security token to sign a request
Create a DynamoDB table called MyHomeThermostat in your AWS account. You will have to choose the hash (partition key) and the range (sort key) while creating the table to uniquely identify a record. Make the hash the serial_number of the thermostat and the range the timestamp of the record. Create a text file with the following JSON to put a temperature and humidity record in the table. Name the file, item.json.
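One possible item follows; the attribute values are illustrative, and serial_number and timestamp are the keys chosen above.

{
  "serial_number": {"S": "123456"},
  "timestamp": {"S": "2018-01-07T19:56:12Z"},
  "temperature": {"N": "72"},
  "humidity": {"N": "45"}
}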
You can use the accessKeyId, secretAccessKey, and sessionToken retrieved from the output of the curl command to sign a request that writes the temperature and humidity record to the DynamoDB table. Use the following commands to accomplish this.
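One way to do this is to export the temporary credentials into the environment for the AWS CLI and then call put-item, as in the following sketch:

export AWS_ACCESS_KEY_ID=<accessKeyId from the response>
export AWS_SECRET_ACCESS_KEY=<secretAccessKey from the response>
export AWS_SESSION_TOKEN=<sessionToken from the response>
aws dynamodb put-item --table-name MyHomeThermostat --item file://item.json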
In this blog post, I demonstrated how to retrieve a security token by using an X.509 certificate and then use the token to write a temperature and humidity record to a DynamoDB table. Similarly, you could run applications on surveillance cameras or sensor devices that exchange the X.509 certificate for an AWS security token and use the token to upload video streams to Amazon Kinesis or telemetry data to Amazon CloudWatch.
If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the AWS IoT forum.
For some organizations, the idea of “going serverless” can be daunting. But with an understanding of best practices and the right tools, many serverless applications can be fully functional with only a few lines of code and little else.
Examples of fully-serverless-application use cases include:
Web or mobile backends – Create fully-serverless, mobile applications or websites by creating user-facing content in a native mobile application or static web content in an S3 bucket. Then have your front-end content integrate with Amazon API Gateway as a backend service API. Lambda functions will then execute the business logic you’ve written for each of the API Gateway methods in your backend API.
Chatbots and virtual assistants – Build new serverless ways to interact with your customers, like customer support assistants and bots ready to engage customers on your company-run social media pages. The Amazon Alexa Skills Kit (ASK) and Amazon Lex have the ability to apply natural-language understanding to user-voice and freeform-text input so that a Lambda function you write can intelligently respond and engage with them.
Internet of Things (IoT) backends – AWS IoT has direct-integration for device messages to be routed to and processed by Lambda functions. That means you can implement serverless backends for highly secure, scalable IoT applications for uses like connected consumer appliances and intelligent manufacturing facilities.
Using AWS Lambda as the logic layer of a serverless application can enable faster development speed and greater experimentation and innovation than in a traditional, server-based environment.
Once you’ve finished reading the whitepaper, below are a couple additional resources I recommend as your next step:
If you would like to better understand some of the architecture pattern possibilities for serverless applications: Thirty Serverless Architectures in 30 Minutes (re:Invent 2017 video)
If you’re ready to get hands-on and build a sample serverless application: AWS Serverless Workshops (GitHub Repository)
Andrew Baird is a Sr. Solutions Architect for AWS. Prior to becoming a Solutions Architect, Andrew was a developer, including time as an SDE with Amazon.com. He has worked on large-scale distributed systems, public-facing APIs, and operations automation.
We have several upcoming tech talks in the month of April and early May. Come join us to learn about AWS services and solution offerings. We’ll have AWS experts online to help answer questions in real time. Sign up now to learn more; we look forward to seeing you.
Note – All sessions are free and in Pacific Time.
April & early May — 2018 Schedule
Compute
April 30, 2018 | 01:00 PM – 01:45 PM PT – Best Practices for Running Amazon EC2 Spot Instances with Amazon EMR (300) – Learn about the best practices for scaling big data workloads as well as process, store, and analyze big data securely and cost effectively with Amazon EMR and Amazon EC2 Spot Instances.
May 1, 2018 | 01:00 PM – 01:45 PM PT – How to Bring Microsoft Apps to AWS (300) – Learn more about how to save significant money by bringing your Microsoft workloads to AWS.
May 2, 2018 | 01:00 PM – 01:45 PM PT – Deep Dive on Amazon EC2 Accelerated Computing (300) – Get a technical deep dive on how AWS’ GPU and FPGA-based compute services can help you to optimize and accelerate your ML/DL and HPC workloads in the cloud.
Databases
April 25, 2018 | 01:00 PM – 01:45 PM PT – Intro to Open Source Databases on AWS (200) – Learn how to tap the benefits of open source databases on AWS without the administrative hassle.
Desktop & App Streaming
April 24, 2018 | 11:00 AM – 11:45 AM PT – Deploy your Desktops and Apps on AWS (300) – Learn how to deploy your desktops and apps on AWS with Amazon WorkSpaces and Amazon AppStream 2.0.
IoT
May 2, 2018 | 11:00 AM – 11:45 AM PT – How to Easily and Securely Connect Devices to AWS IoT (200) – Learn how to easily and securely connect devices to the cloud and reliably scale to billions of devices and trillions of messages with AWS IoT.
April 30, 2018 | 11:00 AM – 11:45 AM PT – Offline GraphQL Apps with AWS AppSync (300) – Come learn how to enable real-time and offline data in your applications with GraphQL using AWS AppSync.
Networking
May 2, 2018 | 09:00 AM – 09:45 AM PT – Taking Serverless to the Edge (300) – Learn how to run your code closer to your end users in a serverless fashion. Also, David Von Lehman from Aerobatic will discuss how they used Lambda@Edge to reduce latency and cloud costs for their customer’s websites.
May 3, 2018 | 09:00 AM – 09:45 AM PT – Protect Your Game Servers from DDoS Attacks (200) – Learn how to use the new AWS Shield Advanced for EC2 to protect your internet-facing game servers against network layer DDoS attacks and application layer attacks of all kinds.
Storage
May 1, 2018 | 11:00 AM – 11:45 AM PT – Building Data Lakes That Cost Less and Deliver Results Faster (300) – Learn how Amazon S3 Select and Amazon Glacier Select increase application performance by up to 400% and reduce total cost of ownership by extending your data lake into cost-effective archive storage.
May 3, 2018 | 11:00 AM – 11:45 AM PT – Integrating On-Premises Vendors with AWS for Backup (300) – Learn how to work with AWS and technology partners to build backup & restore solutions for your on-premises, hybrid, and cloud native environments.
What happens when you combine the Internet of Things, Machine Learning, and Edge Computing? Before I tell you, let’s review each one and discuss what AWS has to offer.
Internet of Things (IoT) – Devices that connect the physical world and the digital one. The devices, often equipped with one or more types of sensors, can be found in factories, vehicles, mines, fields, homes, and so forth. Important AWS services include AWS IoT Core, AWS IoT Analytics, AWS IoT Device Management, and Amazon FreeRTOS, along with others that you can find on the AWS IoT page.
Machine Learning (ML) – Systems that can be trained using an at-scale dataset and statistical algorithms, and used to make inferences from fresh data. At Amazon we use machine learning to drive the recommendations that you see when you shop, to optimize the paths in our fulfillment centers, to fly drones, and much more. We support leading open source machine learning frameworks such as TensorFlow and MXNet, and make ML accessible and easy to use through Amazon SageMaker. We also provide Amazon Rekognition for images and for video, Amazon Lex for chatbots, and a wide array of language services for text analysis, translation, speech recognition, and text to speech.
Edge Computing – The power to have compute resources and decision-making capabilities in disparate locations, often with intermittent or no connectivity to the cloud. AWS Greengrass builds on AWS IoT, giving you the ability to run Lambda functions and keep device state in sync even when not connected to the Internet.
ML Inference at the Edge
Today I would like to toss all three of these important new technologies into a blender! You can now perform Machine Learning inference at the edge using AWS Greengrass. This allows you to use the power of the AWS cloud (including fast, powerful instances equipped with GPUs) to build, train, and test your ML models before deploying them to small, low-powered, intermittently-connected IoT devices running in those factories, vehicles, mines, fields, and homes that I mentioned.
Here are a few of the many ways that you can put Greengrass ML Inference to use:
Precision Farming – With an ever-growing world population and unpredictable weather that can affect crop yields, the opportunity to use technology to increase yields is immense. Intelligent devices that are literally in the field can process images of soil, plants, pests, and crops, taking local corrective action and sending status reports to the cloud.
Physical Security – Smart devices (including the AWS DeepLens) can process images and scenes locally, looking for objects, watching for changes, and even detecting faces. When something of interest or concern arises, the device can pass the image or the video to the cloud and use Amazon Rekognition to take a closer look.
Industrial Maintenance – Smart, local monitoring can increase operational efficiency and reduce unplanned downtime. The monitors can run inference operations on power consumption, noise levels, and vibration to flag anomalies, predict failures, and detect faulty equipment.
Greengrass ML Inference Overview
There are several different aspects to this new AWS feature. Let’s take a look at each one:
Machine Learning Models – Precompiled TensorFlow and MXNet libraries, optimized for production use on the NVIDIA Jetson TX2 and Intel Atom devices, and development use on 32-bit Raspberry Pi devices. The optimized libraries can take advantage of GPU and FPGA hardware accelerators at the edge in order to provide fast, local inferences.
Model Building and Training – The ability to use Amazon SageMaker and other cloud-based ML tools to build, train, and test your models before deploying them to your IoT devices. To learn more about SageMaker, read Amazon SageMaker – Accelerated Machine Learning.
Model Deployment – SageMaker models can (if you give them the proper IAM permissions) be referenced directly from your Greengrass groups. You can also make use of models stored in S3 buckets. You can add a new machine learning resource to a group with a couple of clicks:
Can you believe it’s already the month of March? With some great new Tech Talks available this month, there’s no better time to grow your knowledge about AWS services and solutions.
AWS Online Tech Talks
March 2018 – Schedule
Below is the full schedule for the live, online technical sessions being held during the month of March. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.
March 28, 2018 | 11:00 AM – 12:00 PM PT – Deep Dive on Amazon Athena (300) – Dive deep into the most common Amazon Athena use cases, including working with other AWS services.
Compute
March 26, 2018 | 01:00 PM – 01:45 PM PT – High Performance Computing in the Cloud (200) – Learn how AWS is enabling faster time to results and higher ROI when it comes to solving the big problems in science, engineering and business with high performance computing in the cloud.
March 27, 2018 | 01:00 PM – 01:45 PM PT – Introduction to Hybrid Cloud on AWS (200) – Learn how AWS is building the industry’s broadest capabilities for Hybrid Cloud deployments.
March 28, 2018 | 01:00 PM – 02:00 PM PT – Media Processing Workflows at High Velocity and Scale using AI and ML (200) – Hear how AWS customers have improved media supply chains using AI in areas such as metadata tagging (Rekognition and Comprehend), translations, transcriptions, and cloud services (Elemental).
Mobile
March 22, 2018 | 09:00 AM – 09:45 AM PT – New Mobile CLI and Console Experience (200) – Learn how AWS Mobile Services has introduced a new CLI and streamlined console experience in order to simplify and speed up the development of mobile applications with innovative AWS features and back-end functionality.
Networking
March 28, 2018 | 09:00 AM – 09:45 AM PT – Deep Dive on New AWS Networking Features (300) – Learn how AWS PrivateLink, Direct Connect gateway, and new features with Elastic Load Balancers (ELB) come together to meet the needs of a modern enterprise.
March 29, 2018 | 09:00 AM – 09:45 AM PT – Navigating GDPR Compliance on AWS (300) – Get a walkthrough of potential General Data Protection Regulation (GDPR) obligations and see how the AWS cloud offers services and features that are consistent with GDPR considerations in the ramp-up to the May 25th, 2018 enforcement date.
Storage
March 27, 2018 | 11:00 AM – 11:45 AM PT – Enterprise Applications with Amazon EFS (300) – Join us for a technical deep dive on Amazon EFS, where you’ll learn tips and tricks for integrating your enterprise applications with Amazon EFS.
March 29, 2018 | 11:00 AM – 11:45 AM PT – Transforming Data Lakes with Amazon S3 Select & Amazon Glacier Select (300) – Join us for a webinar where we’ll demonstrate how Amazon S3 Select can increase analytics query performance up to 400%, and Amazon Glacier Select makes it practical to extend queries to archive storage, significantly reducing data lake storage costs.
AWS IoT is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices by using the Message Queuing Telemetry Transport (MQTT) protocol, HTTP, and the MQTT over the WebSocket protocol. Every connected device must authenticate to AWS IoT, and AWS IoT must authorize all requests to determine if access to the requested operations or resources is allowed. Until now, AWS IoT has supported two kinds of authentication techniques: the Transport Layer Security (TLS) mutual authentication protocol and the AWS Signature Version 4 algorithm. Callers must possess either an X.509 certificate or AWS security credentials to be able to authenticate their calls. The requests are authorized based on the policies attached to the certificate or the AWS security credentials.
However, many of our customers have their own systems that issue custom authorization tokens to their devices. These systems use different access control mechanisms such as OAuth over JWT or SAML tokens. AWS IoT now supports a custom authorizer to enable you to use custom authorization tokens for access control. You can now use custom tokens to authenticate and authorize HTTPS over the TLS server authentication protocol and MQTT over WebSocket connections to AWS IoT.
In this blog post, I explain the AWS IoT custom authorizer design and then demonstrate the end-to-end process of setting up a custom authorizer to authorize an HTTPS over TLS server authentication connection to AWS IoT using a custom authorization token. In this process, you configure an AWS Lambda function, which will validate the token and provide the policies necessary to control the access privileges on the connection.
Note: This post assumes you are familiar enough with AWS IoT and AWS Lambda to perform steps using the AWS CLI and OpenSSL. Make sure you are running the latest version of the AWS CLI.
Overview of the custom authorizer workflow
The following numbered diagram illustrates the custom authorizer workflow. The diagram is followed by explanations of the steps.
To explain the steps of the workflow as illustrated in the preceding diagram:
The connected device uses the AWS SDK or custom client to make an HTTPS or MQTT over WebSocket request to the AWS IoT gateway. The request includes a custom authorization token and the name of a preconfigured authorizer Lambda function that is to be invoked to validate the authorization token.
The AWS IoT gateway identifies the authorization token in the request and determines that it is a custom authorization request. The gateway checks if a Lambda authorization function is configured for the AWS account that owns the device. If yes, the gateway invokes the Lambda function by passing the authorization token.
The Lambda function verifies the authorization token and returns to the AWS IoT gateway a principal that identifies the connection and a set of AWS IoT policies that determine permissions for the requested operation. The Lambda function also returns two time-to-live values that determine the validity (in seconds) of the connection and policy documents.
The AWS IoT gateway invokes the AWS policy evaluation engine to authorize the requested operation against the set of policies that are received from the authorizer Lambda function.
The AWS policy evaluation engine evaluates the policy documents and returns the result to the AWS IoT gateway. The gateway then caches the policy documents.
If the policy evaluation allows the requested operation, the AWS IoT gateway serves the request. If the requested operation is denied, the AWS IoT gateway returns an AccessDenied exception with the status code of 403 to the device (the red line in the preceding diagram).
Outline of the steps to configure and use the custom authorizer
The following are the steps you will perform as part of the solution:
Create a Lambda function: Start by creating a Lambda function. The function takes the authorization token in the input, verifies it, and returns authorization policies to determine the caller’s permissions for the requested operation.
Create an authorizer: Create an authorizer in AWS IoT. An authorizer is an alternate data model that points to a pre-created Lambda function. You can specify an authorizer name in the custom authorization request, and AWS IoT invokes the corresponding Lambda function to verify the authorization token. You may update the authorizer to point to a different Lambda function and thus easily control which Lambda function is invoked to verify an authorization token.
Designate the default authorizer: You may designate one of your authorizers as the default authorizer. AWS IoT invokes the default authorizer implicitly when a custom authorization request does not include a specific authorizer name.
Add Lambda invocation permissions: AWS IoT needs to be able to call your Lambda function on your behalf to verify the token in the custom authorization request. You need to associate a resource policy with the Lambda function to allow this.
Test the Lambda function: When the Lambda function and the custom authorizer are configured, use the test function to verify that they are functioning correctly.
Invoke the custom authorizer: Finally, make an HTTPS request to the gateway that includes a custom authorization token. The request invokes the custom authorizer.
Deploy the solution
1. Create a Lambda function
In this step, I show you how to create a Lambda function that runs your custom authorizer code and returns a set of essential attributes to authorize the request.
Sign in to your AWS account and, from the Lambda console, create a Lambda function that takes an authorization token as input and performs whichever actions are necessary to validate the token, as shown in the following code example. The output, in JSON format, must contain the following attributes:
IsAuthenticated: This is a Boolean (true/false) attribute that indicates whether the request is authenticated.
PrincipalId: This is an alphanumeric string; the minimum length is 1 character, and the maximum length is 128 characters. This string acts as an identifier associated with the token that is received in the custom authorization request.
PolicyDocuments: This is a list of JSON formatted policy documents following the same conventions as an AWS IoT policy. The list contains at most 10 policy documents, each of which can be a maximum of 2,048 characters.
DisconnectAfterInSeconds: This indicates the maximum duration (in seconds) of the connection to the AWS IoT gateway, after which it will be disconnected. The minimum value is 300 seconds, and the maximum value is 86,400 seconds.
RefreshAfterInSeconds: This is the period between policy refreshes. When it lapses, the Lambda function is invoked again to allow for policy refreshes. The minimum value is 300 seconds, and the maximum value is 86,400 seconds.
The following code example is a Lambda function in Node.js 6.10 that authenticates a token.
// A simple authorizer Lambda function demonstrating
// how to parse the auth token and generate a response
exports.handler = function(event, context, callback) {
    var token = event.token;
    switch (token.toLowerCase()) {
        case 'allow':
            callback(null, generateAuthResponse(token, 'Allow'));
            break;
        case 'deny':
            callback(null, generateAuthResponse(token, 'Deny'));
            break;
        default:
            callback("Error: Invalid token");
    }
};

// Helper function to generate an authorization response
var generateAuthResponse = function(token, effect) {
    // Invoke your preferred identity provider
    // to get the authN and authZ response.
    // The following is hardcoded for simplicity's sake.
    var authResponse = {};
    authResponse.isAuthenticated = true;
    authResponse.principalId = 'principalId';

    // Build an AWS IoT policy document that allows or denies
    // publishing to the customauthtesting topic
    var policyDocument = {};
    policyDocument.Version = '2012-10-17';
    policyDocument.Statement = [];
    var statement = {};
    statement.Action = 'iot:Publish';
    statement.Effect = effect;
    statement.Resource = "arn:aws:iot:us-east-1:<your_aws_account_id>:topic/customauthtesting";
    policyDocument.Statement[0] = statement;
    authResponse.policyDocuments = [policyDocument];

    // Time-to-live values for the connection and the cached policy
    authResponse.disconnectAfterInSeconds = 3600;
    authResponse.refreshAfterInSeconds = 600;
    return authResponse;
}
The preceding function takes an authorization token and returns an object containing the five attributes described earlier in this step.
2. Create an authorizer
Now that you have created the Lambda function, you will create an authorizer with AWS IoT pointing to the Lambda function. You do this so that you can easily control which Lambda function to invoke to verify an authorization token. The following attributes are required to create an authorizer:
AuthorizerName: This is the name of the authorizer. It is a string; the minimum length is 1 character, and the maximum length is 128 characters.
AuthorizerFunctionArn: This is the Amazon Resource Name (ARN) of the Lambda function that you created in the previous step.
TokenKeyName: This specifies the key name that your device chooses, which indicates the token in the custom authorization HTTP request header. It is a string; the minimum length is 1 character, and the maximum length is 128 characters.
TokenSigningPublicKeys: This is a map of one (minimum) to two (maximum) public keys; each key must be at least 2,048 bits. You need to generate a key pair, sign the custom authorization token with the private key, and include the digital signature in the request in order to use custom authorization successfully. AWS IoT needs the corresponding public key to verify the digital signature, so you must provide the public key while creating an authorizer.
Status: This specifies the status (ACTIVE or INACTIVE) of the authorizer. It is optional. The default value is INACTIVE.
Run the following command in OpenSSL to create an RSA key pair that will be used to generate a token signature.
openssl genrsa -out private.pem 2048
Run the following command to create the public key out of the key pair generated in the previous step.
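openssl rsa -in private.pem -outform PEM -pubout -out public.pem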
You need to store the key pair securely and pass the public key in the TokenSigningPublicKeys parameter when creating the authorizer in AWS IoT. You will use the private key in the key pair to sign the authorization token and include the signature in the custom authorization request.
Run the following command in the AWS CLI to create an authorizer.
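A representative command follows; the authorizer name, token key name, and Lambda function ARN are illustrative, and the public key is the one generated above (its PEM whitespace must be preserved).

aws iot create-authorizer --authorizer-name CustomAuthorizer --authorizer-function-arn arn:aws:lambda:us-east-1:<your_aws_account_id>:function:CustomAuthorizerFunction --token-key-name MyAuthToken --token-signing-public-keys FIRST_KEY="<public key PEM contents>" --status ACTIVE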
You must set the authorizer in the ACTIVE status to be invoked. The describe-authorizer API returns the attributes of an existing authorizer. You can use the following command to verify if all the attributes in the authorizer are set correctly.
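aws iot describe-authorizer --authorizer-name CustomAuthorizer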
3. Designate the default authorizer
You can have multiple authorizers in your account. You can designate one of them as the default so that AWS IoT invokes it if the custom authorization request does not specify an authorizer name. Run the following command in the AWS CLI to designate a default authorizer.
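aws iot set-default-authorizer --authorizer-name CustomAuthorizer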
4. Add Lambda invocation permissions
AWS IoT needs to invoke your authorizer Lambda function to evaluate the custom authorizer token. You need to provide AWS IoT appropriate permissions to invoke your Lambda function when a custom authorization request is made. Use the AWS CLI with the AddPermission command to grant the AWS IoT service principal permission to call lambda:InvokeFunction, as shown in the following command.
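The following sketch grants the permission; the statement ID is arbitrary, and the source ARN is the ARN of the authorizer created in Step 2.

aws lambda add-permission --function-name CustomAuthorizerFunction --principal iot.amazonaws.com --source-arn arn:aws:iot:us-east-1:<your_aws_account_id>:authorizer/CustomAuthorizer --statement-id Id-123 --action "lambda:InvokeFunction"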
Note that you use the precreated AuthorizerArn as the SourceArn while granting the permission. The Lambda function is triggered only if the source ARN provided by AWS IoT during the invocation matches the SourceArn to which you have granted permission. So even if your Lambda function ARN is put in an authorizer owned by someone else, they cannot trigger the function and cause illegitimate charges to you.
5. Test the Lambda function
To verify the configuration, test to see if AWS IoT can invoke the Lambda function and get the correct output. You can do this by using the TestInvokeAuthorizer API. The API takes the following input:
AuthorizerName: This is the name of the authorizer. It is a string; the minimum length is 1 character, and the maximum length is 128 characters.
Token: This specifies the custom authorization token to authorize the request to the AWS IoT gateway. It is a string; the minimum length is 1 character, and the maximum length is 1,024 characters.
TokenSignature: This is the token signature generated by using the private key with the SHA256withRSA algorithm. It is a string with a maximum length of 2,560 characters. You must Base64-encode the signature while passing it as an input (the command follows).
Run the following command in OpenSSL to generate a signature for the string allow.
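echo -n "allow" | openssl dgst -sha256 -sign private.pem | openssl base64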
If AWS IoT is able to invoke the Lambda function successfully, the output of the TestInvokeAuthorizer API will be exactly the same as the output of the Lambda function. The following is sample output of the test-invoke-authorizer command for the Lambda function you created in Step 1 earlier in this post.
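A representative invocation follows; the authorizer name is the illustrative one created earlier, and the token signature is the Base64 output of the previous command.

aws iot test-invoke-authorizer --authorizer-name CustomAuthorizer --token allow --token-signature <Base64-encoded signature>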
6. Invoke the custom authorizer
You can now make a custom authorization request to the AWS IoT gateway. Custom authorization is supported for HTTPS over TLS server authentication and MQTT over WebSocket connections. Regardless of the protocol, requests must include the following attributes in the header. Note that supplying these attributes through query parameters is not allowed for security reasons.
Token: This specifies the authorization token. The header name must be the same as the TokenKeyName of the authorizer.
TokenSignature: This specifies the Base64-encoded digital signature of the token string. The header name must be x-amz-customauthorizer-signature.
AuthorizerName: This specifies the name of one of the ACTIVE authorizers preconfigured in your account. The header name must be x-amz-customauthorizer-name. If this field is not present and you have preconfigured a default custom authorizer for your account, the AWS IoT gateway invokes the default authorizer to authenticate and authorize the request.
Run the following command in the AWS CLI to obtain your AWS account-specific AWS IoT endpoint. See the DescribeEndpoint API documentation for further details. You need to specify the endpoint when making requests to the AWS IoT gateway.
aws iot describe-endpoint
The following is sample output of the describe-endpoint command. It contains the endpointAddress.
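{
  "endpointAddress": "<your_account_specific_prefix>.iot.us-east-1.amazonaws.com"
}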
Now, make an HTTPS request to the AWS IoT gateway to post a message. You may use your preferred HTTP client for the request. I use curl in the following example, which posts the message, “Hello from custom auth,” to an MQTT topic, customauthtesting, by using the token, allow.
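A request shaped like the following posts the message; the token header name MyAuthToken matches the TokenKeyName of the illustrative authorizer, and the signature is the Base64-encoded value generated earlier.

curl -X POST -H "MyAuthToken: allow" -H "x-amz-customauthorizer-name: CustomAuthorizer" -H "x-amz-customauthorizer-signature: <Base64-encoded signature>" -d "Hello from custom auth" https://<your_endpoint_address>/topics/customauthtesting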
If the command succeeds, you will see the following output.
HTTP/1.1 200 OK
content-type: application/json
content-length: 65
date: Sun, 07 Jan 2018 19:56:12 GMT
x-amzn-RequestId: e7c13873-6c61-a12c-1216-9a935901c130
connection: keep-alive
{"message":"OK","traceId":"e7c13873-6c61-a12c-1216-9a935901c130"}
Conclusion
In this blog post, I have shown how to configure a custom authorizer in your AWS account and use it to authorize an HTTPS over TLS server authentication connection and publish a message to a specific MQTT topic by using a custom authorization token. Similarly, you can use custom authorizers to authorize MQTT over WebSocket requests to the AWS IoT gateway to publish messages to a specific topic or subscribe to a topic filter.
If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread in the AWS IoT forum.
AWS has completed its 2017 assessment against the Cloud Computing Compliance Controls Catalog (C5) information security and compliance program. Bundesamt für Sicherheit in der Informationstechnik (BSI)—Germany’s national cybersecurity authority—established C5 to define a reference standard for German cloud security requirements. With C5 (as well as with IT-Grundschutz), customers in German member states can use the work performed under this BSI audit to comply with stringent local requirements and operate secure workloads in the AWS Cloud.
Continuing our commitment to Germany and the AWS European Regions, AWS has added 16 services to this year’s scope.
The English version of the C5 report is available through AWS Artifact. The German version of the report will be available through AWS Artifact in the coming weeks.
Last week I attended a talk given by Bryan Mistele, president of Seattle-based INRIX. Bryan’s talk provided a glimpse into the future of transportation, centered around four principal attributes, often abbreviated as ACES:
Autonomous – Cars and trucks are gaining the ability to scan and to make sense of their environments and to navigate without human input.
Connected – Vehicles of all types have the ability to take advantage of bidirectional connections (either full-time or intermittent) to other cars and to cloud-based resources. They can upload road and performance data, communicate with each other to run in packs, and take advantage of traffic and weather data.
Electric – Continued development of battery and motor technology will make electric vehicles more convenient, cost-effective, and environmentally friendly.
Shared – Ride-sharing services will change usage from an ownership model to an as-a-service model (sound familiar?).
Individually and in combination, these emerging attributes mean that the cars and trucks we will see and use in the decade to come will be markedly different than those of the past.
On the Road with AWS
AWS customers are already using our AWS IoT, edge computing, Amazon Machine Learning, and Alexa products to bring this future to life – vehicle manufacturers, their tier 1 suppliers, and AutoTech startups all use AWS for their ACES initiatives. AWS Greengrass is playing an important role here, attracting design wins and helping our customers to add processing power and machine learning inferencing at the edge.
AWS customer Aptiv (formerly Delphi) talked about their Automated Mobility on Demand (AMoD) smart vehicle architecture in an AWS re:Invent session. Aptiv’s AMoD platform will use Greengrass and microservices to drive the onboard user experience, along with edge processing, monitoring, and control. Here’s an overview:
Another customer, Denso of Japan (one of the world’s largest suppliers of auto components and software), is using Greengrass and AWS IoT to support their vision of Mobility as a Service (MaaS). Here’s a video:
AWS at CES
The AWS team will be out in force at CES in Las Vegas and would love to talk to you. They’ll be running demos that show how AWS can help to bring innovation and personalization to connected and autonomous vehicles.
Personalized In-Vehicle Experience – This demo shows how AWS AI and Machine Learning can be used to create a highly personalized and branded in-vehicle experience. It makes use of Amazon Lex, Amazon Polly, and Amazon Rekognition, but the design is flexible and can be used with other services as well. The demo encompasses driver registration, login and startup (including facial recognition), voice assistance for contextual guidance, personalized e-commerce, and vehicle control. Here’s the architecture for the voice assistance:
Connected Vehicle Solution – This demo shows how a connected vehicle can combine local and cloud intelligence, using edge computing and machine learning at the edge. It handles intermittent connections and uses AWS DeepLens to train a model that responds to distracted drivers. Here’s the overall architecture, as described in our Connected Vehicle Solution:
Digital Content Delivery – This demo will show how a customer uses a web-based 3D configurator to build and personalize their vehicle. It will also show high resolution (4K) 3D image and an optional immersive AR/VR experience, both designed for use within a dealership.
Autonomous Driving – This demo will showcase the AWS services that can be used to build autonomous vehicles. There’s a 1/16th scale model vehicle powered and driven by Greengrass and an overview of a new AWS Autonomous Toolkit. As part of the demo, attendees drive the car, training a model via Amazon SageMaker for subsequent on-board inferencing, powered by Greengrass ML Inferencing.
To speak to one of my colleagues or to set up a time to see the demos, check out the Visit AWS at CES 2018 page.
Some Resources
If you are interested in this topic and want to learn more, the AWS for Automotive page is a great starting point, with discussions on connected vehicles & mobility, autonomous vehicle development, and digital customer engagement.
When you are ready to start building a connected vehicle, the AWS Connected Vehicle Solution contains a reference architecture that combines local computing, sophisticated event rules, and cloud-based data processing and storage. You can use this solution to accelerate your own connected vehicle projects.
AWS Training and Certification recently released free digital training courses that will make it easier for you to build your cloud skills and learn about using AWS Big Data services. This training includes courses like Introduction to Amazon EMR and Introduction to Amazon Athena.
You can get free and unlimited access to more than 100 new digital training courses built by AWS experts at aws.training. It’s easy to access training related to big data. Just choose the Analytics category on our Find Training page to browse through the list of courses. You can also use the keyword filter to search for training for specific AWS offerings.
Recommended training
Just getting started, or looking to learn about a new service? Check out the following digital training courses:
Introduction to Amazon EMR (15 minutes) Covers the available tools that can be used with Amazon EMR and the process of creating a cluster. It includes a demonstration of how to create an EMR cluster.
Introduction to Amazon Athena (10 minutes) Introduces the Amazon Athena service along with an overview of its operating environment. It covers the basic steps in implementing Athena and provides a brief demonstration.
Introduction to Amazon QuickSight (10 minutes) Discusses the benefits of using Amazon QuickSight and how the service works. It also includes a demonstration so that you can see Amazon QuickSight in action.
Introduction to Amazon Redshift (10 minutes) Walks you through Amazon Redshift and its core features and capabilities. It also includes a quick overview of relevant use cases and a short demonstration.
Introduction to AWS Lambda (10 minutes) Discusses the rationale for using AWS Lambda, how the service works, and how you can get started using it.
Introduction to Amazon Kinesis Analytics (10 minutes) Discusses how Amazon Kinesis Analytics collects, processes, and analyzes streaming data in real time. It discusses how to use and monitor the service and explores some use cases.
Introduction to Amazon Kinesis Streams (15 minutes) Covers how Amazon Kinesis Streams is used to collect, process, and analyze real-time streaming data to create valuable insights.
Introduction to AWS IoT (10 minutes) Describes how the AWS Internet of Things (IoT) communication architecture works, and the components that make up AWS IoT. It discusses how AWS IoT works with other AWS services and reviews a case study.
Introduction to AWS Data Pipeline (10 minutes) Covers components like tasks, task runner, and pipeline. It also discusses what a pipeline definition is, and reviews the AWS services that are compatible with AWS Data Pipeline.
Go deeper with classroom training
Want to learn more? Enroll in classroom training to learn best practices, get live feedback, and hear answers to your questions from an instructor.
Big Data on AWS (3 days) Introduces you to cloud-based big data solutions such as Amazon EMR, Amazon Redshift, Amazon Kinesis, and the rest of the AWS big data platform.
Data Warehousing on AWS (3 days) Introduces you to concepts, strategies, and best practices for designing a cloud-based data warehousing solution, and demonstrates how to collect, store, and prepare data for the data warehouse.
Building a Serverless Data Lake (1 day) Teaches you how to design, build, and operate a serverless data lake solution with AWS services. Includes topics such as ingesting data from any data source at large scale, storing the data securely and durably, using the right tool to process large volumes of data, and understanding the options available for analyzing the data in near-real time.
More training coming in 2018
We’re always evaluating and expanding our training portfolio, so stay tuned for more training options in the new year. You can always visit us at aws.training to explore our latest offerings.