All posts by Moheeb Zara

Building a serverless document scanner using Amazon Textract and AWS Amplify

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-a-serverless-document-scanner-using-amazon-textract-and-aws-amplify/

This guide demonstrates creating and deploying a production-ready document scanning application. It allows users to manage projects, upload images, and generate a PDF from detected text. The sample can be used as a template for building expense tracking applications, handling forms and legal documents, or for digitizing books and notes.

The frontend application is written in Vue.js and uses the Amplify Framework. The backend is built using AWS serverless technologies and consists of an Amazon API Gateway REST API that invokes AWS Lambda functions. Amazon Textract analyzes text from images uploaded to an Amazon S3 bucket. Detected text is stored in Amazon DynamoDB.

An architectural diagram of the application.

Prerequisites

You need the following to complete the project:

Deploy the application

The solution consists of two parts: the frontend application and the serverless backend. The Amplify CLI deploys all the Amazon Cognito authentication and hosting resources for the frontend. The backend requires the Amazon Cognito user pool identifier to configure an authorizer on the API. This enables an authorization workflow, as shown in the following image.

A diagram showing how an Amazon Cognito authorization workflow works

First, configure the frontend. Complete the following steps using a terminal running on a computer or by using the AWS Cloud9 IDE. If using AWS Cloud9, create an instance using the default options.

From the terminal:

  1. Install the Amplify CLI by running this command.
    npm install -g @aws-amplify/cli
  2. Configure the Amplify CLI using this command. Follow the guided process to completion.
    amplify configure
  3. Clone the project from GitHub.
    git clone https://github.com/aws-samples/aws-serverless-document-scanner.git
  4. Navigate to the amplify-frontend directory and initialize the project using the Amplify CLI command. Follow the guided process to completion.
    cd aws-serverless-document-scanner/amplify-frontend
    
    amplify init
  5. Deploy all the frontend resources to the AWS Cloud using the Amplify CLI command.
    amplify push
  6. After the resources have finished deploying, make note of the StackName and UserPoolId properties in the amplify-frontend/amplify/backend/amplify-meta.json file. These are required when deploying the serverless backend.

Next, deploy the serverless backend. While it can be deployed using the AWS SAM CLI, you can also deploy from the AWS Management Console:

  1. Navigate to the document-scanner application in the AWS Serverless Application Repository.
  2. In Application settings, name the application and provide the UserPoolId and StackName values from the frontend application for the UserPoolID and AmplifyStackName parameters. Provide a unique name for the BucketName parameter.
  3. Choose Deploy.
  4. Once complete, copy the API endpoint so that it can be configured on the frontend application in the next section.

Configure and run the frontend application

  1. Create a file, amplify-frontend/src/api-config.js, in the frontend application with the following content. Include the API endpoint and the unique BucketName from the previous step. The s3_region value must be the same as the Region where your serverless backend is deployed.
    const apiConfig = {
    	"endpoint": "<API ENDPOINT>",
    	"s3_bucket_name": "<BucketName>",
    	"s3_region": "<Bucket Region>"
    };
    
    export default apiConfig;
  2. In a terminal, navigate to the root directory of the frontend application and run it locally for testing.
    cd aws-serverless-document-scanner/amplify-frontend
    
    npm install
    
    npm run serve

    You should see an output like this:

  3. To publish the frontend application to cloud hosting, run the following command.
    amplify publish

    Once complete, a URL to the hosted application is provided.

Using the frontend application

Once the application is running locally or hosted in the cloud, navigating to it presents a user login interface with an option to register. The registration flow requires a code sent to the provided email for verification. Once verified, you’re presented with the main application interface.

Once you create a project and choose it from the list, you are presented with an interface for uploading images by page number.

On mobile, the application uses the device camera to capture images. On desktop, images are selected from the file system. The page selector also lets you go back and replace an image; the corresponding analyzed text in DynamoDB is updated as well.

Each time you upload an image, the page is incremented. Choosing “Generate PDF” calls the endpoint for the GeneratePDF Lambda function and returns a PDF in base64 format. The download begins automatically.

You can also open the PDF in another window, if viewing a preview in a desktop browser.

Understanding the serverless backend

An architecture diagram of the serverless backend.

In the GitHub project, the folder serverless-backend/ contains the AWS SAM template file and the Lambda functions. It creates an API Gateway endpoint, six Lambda functions, an S3 bucket, and two DynamoDB tables. The template also defines an Amazon Cognito authorizer for the API using the UserPoolID passed in as a parameter:

Parameters:
  UserPoolID:
    Type: String
    Description: (Required) The user pool ID created by the Amplify frontend.

  AmplifyStackName:
    Type: String
    Description: (Required) The stack name of the Amplify backend deployment. 

  BucketName:
    Type: String
    Default: "ds-userfilebucket"
    Description: (Required) A unique name for the user file bucket. Must be all lowercase.  


Globals:
  Api:
    Cors:
      AllowMethods: "'*'"
      AllowHeaders: "'*'"
      AllowOrigin: "'*'"

Resources:

  DocumentScannerAPI:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
      Auth:
        DefaultAuthorizer: CognitoAuthorizer
        Authorizers:
          CognitoAuthorizer:
            UserPoolArn: !Sub 'arn:aws:cognito-idp:${AWS::Region}:${AWS::AccountId}:userpool/${UserPoolID}'
            Identity:
              Header: Authorization
        AddDefaultAuthorizerToCorsPreflight: False

This only allows authenticated users of the frontend application to make requests with a JWT token containing their user name and email. The backend uses that information to fetch and store data in DynamoDB that corresponds to the user making the request.
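
As a rough illustration of that flow, a Lambda function behind this authorizer can read the caller's identity from the request context. The following is a minimal sketch, not the project's exact code; for a REST API with a Cognito user pool authorizer, the validated claims appear under requestContext.authorizer.claims:

import json

def lambda_handler(event, context):
    # API Gateway validates the JWT and passes the Cognito claims along
    claims = event['requestContext']['authorizer']['claims']
    username = claims['cognito:username']
    email = claims['email']

    # use the identity to scope DynamoDB reads and writes to this user
    return {
        "statusCode": 200,
        "body": json.dumps({"user": username, "email": email})
    }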

The AWS SAM template creates two DynamoDB tables: a Projects table, which tracks all the project names by user, and a Pages table, which tracks pages by project and user. Each table defines a partition key and a range key, which the Lambda functions use to query and sort items. See the documentation to learn more about DynamoDB table key schema.

  ProjectsTable:
    Type: AWS::DynamoDB::Table
    Properties: 
      AttributeDefinitions: 
        - 
          AttributeName: "username"
          AttributeType: "S"
        - 
          AttributeName: "project_name"
          AttributeType: "S"
      KeySchema: 
        - AttributeName: username
          KeyType: HASH
        - AttributeName: project_name
          KeyType: RANGE
      ProvisionedThroughput: 
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"

  PagesTable:
    Type: AWS::DynamoDB::Table
    Properties: 
      AttributeDefinitions: 
        - 
          AttributeName: "project"
          AttributeType: "S"
        - 
          AttributeName: "page"
          AttributeType: "N"
      KeySchema: 
        - AttributeName: project
          KeyType: HASH
        - AttributeName: page
          KeyType: RANGE
      ProvisionedThroughput: 
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"
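
For example, the key schema above lets a function fetch all of a user's projects with a single query on the partition key, with results sorted by the range key. A minimal sketch, assuming a PROJECTS_TABLE_NAME environment variable (the variable name here is illustrative):

import os
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table(os.environ.get('PROJECTS_TABLE_NAME'))

def fetch_projects(username):
    # query on the partition key; DynamoDB returns the items sorted by
    # project_name, the range key
    response = table.query(KeyConditionExpression=Key('username').eq(username))
    return [item['project_name'] for item in response.get('Items', [])]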

When an API Gateway endpoint is called, it passes the user credentials in the request context to a Lambda function. The CreateProject Lambda function uses this, along with a project name from the request body, to create an item in the Projects table and associate it with a user.

The endpoint for the FetchProjects Lambda function is called to retrieve the list of projects associated with a user. The DeleteProject Lambda function removes a specific project from the Projects table and any associated pages in the Pages table. It also deletes the folder in the S3 bucket containing all images for the project.

When a user opens a project, the API endpoint calls the FetchPageCount Lambda function. This returns the number of pages for a project to update the current page number in the upload selector. The project name is retrieved from the path parameters, as defined in the AWS SAM template:

FetchPageCount:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.8
      CodeUri: lambda_functions/fetchPageCount/
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref PagesTable
      Environment:
        Variables:
          PAGES_TABLE_NAME: !Ref PagesTable
      Events:
        GetResource:
          Type: Api
          Properties:
            RestApiId: !Ref DocumentScannerAPI
            Path: /pages/count/{project+}
            Method: get  
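
The handler itself can combine the path parameter with the caller's identity to count pages. The following sketch approximates it (not the exact project code); Select='COUNT' asks DynamoDB to return only the item count rather than the full items:

import os
import json
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table(os.environ.get('PAGES_TABLE_NAME'))

def handler(event, context):
    # the project name comes from the {project+} path parameter and the
    # user name from the Cognito claims in the request context
    project = event['pathParameters']['project']
    user = event['requestContext']['authorizer']['claims']['cognito:username']

    response = table.query(
        KeyConditionExpression=Key('project').eq(user + '/' + project),
        Select='COUNT'
    )
    return {"statusCode": 200, "body": json.dumps({"count": response['Count']})}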

The template creates an S3 bucket and two AWS IAM managed policies. The policies are applied to the AuthRole and UnauthRole created by Amplify. This allows users to upload images directly to the S3 bucket. To understand how Amplify works with Storage, see the documentation.

The template also sets an S3 event notification on the bucket for all object create events with a “.png” suffix. Whenever the frontend uploads an image to S3, the object create event invokes the ProcessDocument Lambda function.

The function parses the object key to get the project name, user, and page number. Amazon Textract then analyzes the text of the image. The object returned by Amazon Textract contains the detected text and detailed information, such as the positioning of text in the image. Only the raw lines of text are stored in the Pages table.

import os
import json, decimal
import boto3
import urllib.parse
from boto3.dynamodb.conditions import Key, Attr

client = boto3.resource('dynamodb')
textract = boto3.client('textract')

tableName = os.environ.get('PAGES_TABLE_NAME')

def handler(event, context):

  table = client.Table(tableName)

  print(table.table_status)
 
  key = urllib.parse.unquote(event['Records'][0]['s3']['object']['key'])
  bucket = event['Records'][0]['s3']['bucket']['name']

  # parse the user, project, and page number from the object key path
  user = key.split('/')[2]
  project = key.split('/')[3]
  page = key.split('/')[4].split('.')[0]
  
  response = textract.detect_document_text(
    Document={
        'S3Object': {
            'Bucket': bucket,
            'Name': key
        }
    })
    
  fullText = ""
  
  for item in response["Blocks"]:
    if item["BlockType"] == "LINE":
        fullText = fullText + item["Text"] + '\n'
  
  print(fullText)

  table.put_item(Item= {
    'project': user + '/' + project,
    'page': int(page), 
    'text': fullText
    })

  # print(response)
  return

The GeneratePDF Lambda function retrieves the detected text for each page in a project from the Pages table. It combines the text into a PDF and returns it as a base64-encoded string for download. This function can be modified if your document structure differs.
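
A simplified version of that flow might look like the following. This is a hedged sketch, assuming the pyfpdf package is bundled with the function; the project code may differ:

import base64
from fpdf import FPDF
from boto3.dynamodb.conditions import Key

def generate_pdf(table, user, project):
    # fetch every page of the project; the page range key keeps them in order
    items = table.query(
        KeyConditionExpression=Key('project').eq(user + '/' + project)
    )['Items']

    pdf = FPDF()
    for item in items:
        pdf.add_page()
        pdf.set_font('Arial', size=12)
        pdf.multi_cell(0, 5, item['text'])

    # return the document as a base64 string for the frontend to download
    return base64.b64encode(pdf.output(dest='S').encode('latin-1')).decode('utf-8')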

Understanding the frontend

In the GitHub repo, the folder amplify-frontend/src/ contains all the code for the frontend application. In main.js, the Amplify VueJS modules are configured to use the resources defined in aws-exports.js. It also configures the endpoint and S3 bucket of the serverless backend, defined in api-config.js.

In components/DocumentScanner.vue, the API module is imported and the API is defined.

API calls are defined as Vue methods that can be called by various other components and elements of the application.

In components/Project.vue, the frontend uses the Storage module for Amplify to upload images. For more information on how to use S3 in an Amplify project see the documentation.

Conclusion

This blog post shows how to create a multiuser application that can analyze text from images and generate PDF documents, using a secure and scalable serverless approach. The example also shows an event-driven pattern for handling high-volume image processing with S3, Lambda, and Amazon Textract.

The Amplify Framework simplifies the process of implementing authentication, storage, and backend integration. Explore the full solution on GitHub to modify it for your next project or startup idea.

To learn more about AWS serverless and keep up to date on the latest features, subscribe to the YouTube channel.

#ServerlessForEveryone

Building a Pulse Oximetry tracker using AWS Amplify and AWS serverless

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-a-pulse-oximetry-tracker-using-aws-amplify-and-aws-serverless/

This guide demonstrates an example solution for collecting, tracking, and sharing pulse oximetry data for multiple users. It’s built using AWS serverless technologies, enabling reliable scalability and security. The frontend application is written in VueJS and uses the Amplify Framework. It takes oxygen saturation measurements as manual input or from a BerryMed pulse oximeter connected to a browser using Web Bluetooth.

The serverless backend that handles user data and shared access management is deployed using the AWS Serverless Application Model (AWS SAM). The backend application consists of an Amazon API Gateway REST API, which invokes AWS Lambda functions. The code is written in Python to handle the business logic of interacting with an Amazon DynamoDB database. Authentication is managed by Amazon Cognito.

A screenshot of the frontend application running in a desktop browser.

Prerequisites

You need the following to complete the project:

Deploy the application

A high-level diagram of the full oxygen monitor application.

The solution consists of two parts: the frontend application and the serverless backend. The Amplify CLI deploys all the Amazon Cognito authentication and hosting resources for the frontend. The backend requires the Amazon Cognito user pool identifier to configure an authorizer on the API. This enables an authorization workflow, as shown in the following image.

A diagram showing how an Amazon Cognito authorization workflow works

First, configure the frontend. Complete the following steps using a terminal running on a computer or by using the AWS Cloud9 IDE. If using AWS Cloud9, create an instance using the default options.

From the terminal:

  1. Install the Amplify CLI by running this command.
    npm install -g @aws-amplify/cli
  2. Configure the Amplify CLI using this command. Follow the guided process to completion.
    amplify configure
  3. Clone the project from GitHub.
    git clone https://github.com/aws-samples/aws-serverless-oxygen-monitor-web-bluetooth.git
  4. Navigate to the amplify-frontend directory and initialize the project using the Amplify CLI command. Follow the guided process to completion.
    cd aws-serverless-oxygen-monitor-web-bluetooth/amplify-frontend
    
    amplify init
  5. Deploy all the frontend resources to the AWS Cloud using the Amplify CLI command.
    amplify push
  6. After the resources have finished deploying, make note of the aws_user_pools_id property in the src/aws-exports.js file. This is required when deploying the serverless backend.

Next, deploy the serverless backend. While it can be deployed using the AWS SAM CLI, you can also deploy from the AWS Management Console:

  1. Navigate to the oxygen-monitor-backend application in the AWS Serverless Application Repository.
  2. In Application settings, name the application and provide the aws_user_pools_id from the frontend application for the UserPoolID parameter.
  3. Choose Deploy.
  4. Once complete, copy the API endpoint so that it can be configured on the frontend application in the next step.

Configure and run the frontend application

  1. Create a file, amplify-frontend/src/api-config.js, in the frontend application with the following content. Include the API endpoint from the previous step.
    const apiConfig = {
      "endpoint": "<API ENDPOINT>"
    };
    
    export default apiConfig;
  2. In a terminal, navigate to the root directory of the frontend application and run it locally for testing.
    cd aws-serverless-oxygen-monitor-web-bluetooth/amplify-frontend
    
    npm install
    
    npm run serve

    You should see an output like this:

  3. To publish the frontend application to cloud hosting, run the following command.
    amplify publish

    Once complete, a URL to the hosted application is provided.

Using the frontend application

Once the application is running locally or hosted in the cloud, navigating to it presents a user login interface with an option to register.

The registration flow requires a code sent to the provided email for verification. Once verified you’re presented with the main application interface. A sample value is displayed when the account has no oxygen saturation or pulse rate history.

To connect a BerryMed pulse oximeter and begin reading measurements, turn on the device. Choose the Connect Pulse Oximeter button and then select the device from the list. A Chrome browser on a desktop or Android mobile device is required to use the Web Bluetooth feature.

If you do not have a compatible Bluetooth pulse oximeter or access to Web Bluetooth, checking the Enter Manually check box presents direct input boxes.

Once oxygen saturation and pulse rate values are available, choose the cloud upload icon. This publishes the values to the serverless backend, where they are stored in a DynamoDB table. The trend chart then updates to reflect the new data.

Access to your historical data can be shared to another user, for example a healthcare professional. Choose the share icon on the right to open sharing options. From here, you can add or remove access to others by user name.

To view data shared with you, select the user name from the drop-down and choose the refresh icon.

Understanding the serverless backend

In the GitHub project, the folder serverless-backend/ contains the AWS SAM template file and the Lambda functions. It creates an API Gateway endpoint, six Lambda functions, and two DynamoDB tables. The template also defines an Amazon Cognito authorizer for the API using the UserPoolID passed in as a parameter.

This only allows authenticated users of the frontend application to make requests with a JWT token containing their user name and email. The backend uses that information to fetch and store data in DynamoDB that corresponds to the user making the request.

The first three endpoints handle updating and retrieving oxygen and pulse rate levels. When a user publishes a new measurement, the AddLevels function is invoked, which creates a new item in the DynamoDB Levels table.

The FetchLevels function retrieves the user’s personal history. The FetchSharedUserLevels function checks the Access table to see if the requesting user has shared access rights.

The remaining endpoints handle access management. When you add a shared user, this invokes the ManageAccess function with a user name and an action, such as share or revoke. If sharing, an item is added to the Access table to enable the relationship. If revoking, the item is removed from the table.

The GetSharedUsers function fetches the list of users who have shared their data with the user making the request. This populates the drop-down of accessible users. FetchUsersWithAccess fetches all users that have access to the data of the user making the request; this populates the list of users in the sharing options.

The DynamoDB tables are created by the AWS SAM template with the partition key and range key defined for each table. These are used by the Lambda functions to query and sort items. See the documentation to learn more about DynamoDB table key schema.

  LevelsTable:
    Type: AWS::DynamoDB::Table
    Properties: 
      AttributeDefinitions: 
        - 
          AttributeName: "username"
          AttributeType: "S"
        - 
          AttributeName: "timestamp"
          AttributeType: "N"
      KeySchema: 
        - AttributeName: username
          KeyType: HASH
        - AttributeName: timestamp
          KeyType: RANGE
      ProvisionedThroughput: 
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"

  SharedAccessTable:
    Type: AWS::DynamoDB::Table
    Properties: 
      AttributeDefinitions: 
        - 
          AttributeName: "username"
          AttributeType: "S"
        - 
          AttributeName: "shared_user"
          AttributeType: "S"
      KeySchema: 
        - AttributeName: username
          KeyType: HASH
        - AttributeName: shared_user
          KeyType: RANGE
      ProvisionedThroughput: 
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"
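
As an illustration of how the two tables work together, FetchSharedUserLevels can be approximated as a get on the Access table followed by a query on the Levels table. The table names below are placeholders; this is a sketch, not the exact project code:

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
levels_table = dynamodb.Table('LevelsTable')        # placeholder table names
access_table = dynamodb.Table('SharedAccessTable')

def fetch_shared_levels(owner, requester):
    # only return data if the owner has granted the requester access
    grant = access_table.get_item(Key={'username': owner, 'shared_user': requester})
    if 'Item' not in grant:
        return None

    # measurements come back sorted by the timestamp range key
    response = levels_table.query(KeyConditionExpression=Key('username').eq(owner))
    return response['Items']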


Understanding the frontend

In the GitHub project, the folder amplify-frontend/src/ contains all the code for the frontend application. In main.js, the Amplify VueJS modules are configured to use the resources defined in aws-exports.js. It also configures the endpoint of the serverless backend, defined in api-config.js.

In the file, components/OxygenMonitor.vue, the API module is imported and the desired API is defined.

API calls are defined as Vue methods that can be called by various other components and elements of the application.

In components/ConnectDevice.vue, the connect method initializes a Web Bluetooth connection to the pulse oximeter. It searches for a Bluetooth service UUID and device name specific to BerryMed pulse oximeters. On a successful connection it creates an event listener on the Bluetooth characteristic that notifies changes on measurements.

The handleData method parses notification events. It emits on any changes to oxygen saturation or pulse rate.

The OxygenMonitor component defines the ConnectDevice component in its template. It binds handlers on emitted events.

The handlers assign the values to the Vue data object for use throughout the application.

Further explore the project code to see how the Amplify Framework and the serverless backend are used to make a practical application.

Conclusion

Tracking patient vitals remotely has become more relevant than ever. This guide demonstrates a solution for a personal health and telemedicine application. The full solution includes multiuser functionality and a secure and scalable serverless backend. The application uses a browser to interact with a physical device to measure oxygen saturation and pulse rate. It publishes measurements to a database using a serverless API. The historical data can be displayed as a trend chart and can also be shared with other users.

Once you are more familiar with the sample project, you may want to begin developing an application with your team. The Amplify Framework has support for team environments, allowing all your developers to work together seamlessly.

To learn more about AWS serverless and keep up to date on the latest features, subscribe to the YouTube channel.

ICYMI: Serverless Q2 2020

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/icymi-serverless-q2-2020/

Welcome to the 10th edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

In case you missed our last ICYMI, check out what happened last quarter here.

AWS Lambda

AWS Lambda functions can now mount an Amazon Elastic File System (EFS). EFS is a scalable and elastic NFS file system that stores data within and across multiple Availability Zones (AZs) for high availability and durability. In this way, you can use a familiar file system interface to store and share data across all concurrent execution environments of one or more Lambda functions. EFS supports full file system access semantics, such as strong consistency and file locking.

Using different EFS access points, each Lambda function can access different paths in a file system, or use different file system permissions. You can share the same EFS file system with Amazon EC2 instances, containerized applications using Amazon ECS and AWS Fargate, and on-premises servers.
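
Using the file system from a function is then plain file I/O. A minimal sketch, assuming an EFS access point is attached to the function with a local mount path of /mnt/shared (the path is whatever you configure):

import os

MOUNT_PATH = '/mnt/shared'  # local mount path configured on the function

def lambda_handler(event, context):
    # files written here are visible to every concurrent execution
    # environment of the function, and to anything else on the file system
    path = os.path.join(MOUNT_PATH, 'counter.txt')
    count = int(open(path).read()) if os.path.exists(path) else 0
    with open(path, 'w') as f:
        f.write(str(count + 1))
    return {'invocations': count + 1}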

Learn how to create an Amazon EFS-mounted Lambda function using the AWS Serverless Application Model in Sessions With SAM Episode 10.

With our recent launch of the .NET Core 3.1 AWS Lambda runtime, we’ve also released version 2.0.0 of the PowerShell module AWSLambdaPSCore. The new version now supports PowerShell 7.

Amazon EventBridge

At AWS re:Invent 2019, we introduced a preview of Amazon EventBridge schema registry and discovery. This is a way to store the structure of the events (the schema) in a central location. It can simplify using events in your code by generating the code to process them for Java, Python, and TypeScript. In April, we announced general availability of EventBridge Schema Registry.

We also added support for resource policies. Resource policies allow sharing of a schema registry across different AWS accounts and organizations. In this way, developers on different teams can search for and use any schema that another team has added to the shared registry.

Ben Smith, AWS Serverless Developer Advocate, published a guide on how to capture user events and monitor user behavior using the Amazon EventBridge partner integration with Auth0. This enables better insight into your application to help deliver a more customized experience for your users.

AWS Step Functions

In May, we launched a new AWS Step Functions service integration with AWS CodeBuild. CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces packages that are ready for deployment. Now, during the execution of a state machine, you can start or stop a build, get build report summaries, and delete past build execution records.

With the new AWS CodePipeline support to invoke Step Functions you can customize your delivery pipeline with choices, external validations, or parallel tasks. Each of those tasks can now call CodeBuild to create a custom build following specific requirements. Learn how to build a continuous integration workflow with Step Functions and AWS CodeBuild.

Rob Sutter, AWS Serverless Developer Advocate, has published a video series on Step Functions. We’ve compiled a playlist on YouTube to help you on your serverless journey.

AWS Amplify

In April, the AWS Amplify Framework team announced that they have rearchitected the Amplify UI component library to enable JavaScript developers to easily add authentication scenarios to their web apps. The authentication components include numerous improvements over previous versions. These include the ability to automatically sign in users after sign-up confirmation, better customization, and improved accessibility.

Amplify also announced the availability of the Amplify Framework iOS and Amplify Framework Android libraries and tools. These help mobile application developers easily build secure and scalable cloud-powered applications. Previously, mobile developers relied on a combination of tools and SDKs along with the Amplify CLI to create and manage a backend.

These new native libraries are oriented around use cases, such as authentication, data storage and access, and machine learning predictions. They provide a declarative interface that enables you to programmatically apply best practices with abstractions.

A mono-repository is a single repository that contains more than one logical project, each in its own folder. Monorepo support is now available for the AWS Amplify Console, allowing developers to connect the Amplify Console to a sub-folder in their mono-repository. Learn how to set up continuous deployment and hosting on a monorepo with the Amplify Console.

Amazon Keyspaces (for Apache Cassandra)

Amazon Managed Apache Cassandra Service (MCS) is now generally available under the new name: Amazon Keyspaces (for Apache Cassandra). Amazon Keyspaces is built on Apache Cassandra and can be used as a fully managed serverless database. Your applications can read and write data from Amazon Keyspaces using your existing Cassandra Query Language (CQL) code, with little or no changes. Danilo Poccia explains how to use Amazon Keyspaces with API Gateway and Lambda in this launch post.

AWS Glue

In April, we extended AWS Glue jobs, based on Apache Spark, to run continuously and consume data from streaming platforms such as Amazon Kinesis Data Streams and Apache Kafka (including the fully managed Amazon MSK). Learn how to manage a serverless extract, transform, load (ETL) pipeline with Glue in this guide by Danilo Poccia.

Serverless posts

Our team is always working to build and write content to help our customers better understand all our serverless offerings. Here is a list of the latest posts published to the AWS Compute Blog this quarter.

Introducing the new serverless LAMP stack

Ben Smith, AWS Serverless Developer Advocate, introduces the Serverless LAMP stack. He explains how to use serverless technologies with PHP. Learn about the available tools, frameworks and strategies to build serverless applications, and why now is the right time to start.


Building a location-based, scalable, serverless web app

James Beswick, AWS Serverless Developer Advocate, walks through building a location-based, scalable, serverless web app. Ask Around Me is an example project that allows users to ask questions within a geofence to create an engaging community-driven experience.

Building well-architected serverless applications

Julian Wood, AWS Serverless Developer Advocate, published two blog series on building well-architected serverless applications. Learn how to better understand application health and lifecycle management.

Device hacking with serverless

Go beyond the browser with these creative and physical projects. Moheeb Zara, AWS Serverless Developer Advocate, published several serverless powered device hacks, all using off-the-shelf parts.

April

May

June

Tech Talks and events

We hold AWS Online Tech Talks covering serverless topics throughout the year. You can find these in the serverless section of the AWS Online Tech Talks page. We also regularly join podcasts and record short videos you can watch to learn in quick, bite-sized chunks.

Here are the highlights from Q2.

Innovator Island Workshop

Learn how to build a complete serverless web application for a popular theme park called Innovator Island. James Beswick created a video series to walk you through this popular workshop at your own pace.

Serverless First Function

In May, we held a new virtual event series, the Serverless-First Function, to help you and your organization get the most out of the cloud. The first event, on May 21, included sessions from Amazon CTO, Dr. Werner Vogels, and VP of Serverless at AWS, David Richardson. The second event, May 28, was packed with sessions with our AWS Serverless Developer Advocate team. Catch up on the AWS Twitch channel.

Live streams

The AWS Serverless Developer Advocate team hosts several weekly livestreams on the AWS Twitch channel covering a wide range of topics. You can catch up on all our past content, including workshops, on the AWS Serverless YouTube channel.

Eric Johnson hosts “Sessions with SAM” every Thursday at 10AM PST. Each week, Eric shows how to use SAM to solve different serverless challenges. He explains how to use SAM templates to build powerful serverless applications. Catch up on the last few episodes.

James Beswick, AWS Serverless Developer Advocate, has compiled a round-up of all his content from Q2. He has plenty of videos ranging from beginner to advanced topics.

AWS Serverless Heroes

We’re pleased to welcome Kyuhyun Byun and Serkan Özal to the growing list of AWS Serverless Heroes. The AWS Hero program is a selection of worldwide experts who have been recognized for their positive impact within the community. They share helpful knowledge and organize events and user groups. They’re also contributors to numerous open-source projects in and around serverless technologies.

Still looking for more?

The Serverless landing page has much more information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more getting started tutorials.

Follow the AWS Serverless team on our new LinkedIn page, where we share all the latest news and events. You can also follow all of us on Twitter to see the latest news, follow conversations, and interact with the team.

Chris Munns: @chrismunns
Eric Johnson: @edjgeek
James Beswick: @jbesw
Moheeb Zara: @virgilvox
Ben Smith: @benjamin_l_s
Rob Sutter: @rts_rob
Julian Wood: @julian_wood

Building an electronic security lock using serverless

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-an-electronic-security-lock-using-serverless/

In this guide, I show how to build an electronic security lock for package delivery, securing physical documents, or granting access to a secret lab. This project uses AWS serverless technologies to create a touchscreen keypad lock that uses SMS to alert a recipient with a custom message and unlock code. Files are included for the lockbox shown, but the system can be installed in anything with a door.

CircuitPython is a lightweight version of Python that works on embedded hardware. It runs on the Adafruit PyPortal, an open-source IoT touch display. A relay wired to the PyPortal acts as an electronic switch to bridge power to an electronic solenoid lock.

I deploy the backend to the AWS Cloud using the AWS Serverless Application Repository. The code on the PyPortal makes REST calls to the backend to send a random four-digit code as a text message using Amazon Pinpoint. It also stores the lock state in AWS Systems Manager Parameter Store, a secure service for storing and retrieving sensitive information.

Prerequisites

You need the following to complete the project:

Deploy the backend application

An architecture diagram of the serverless backend.

The serverless backend consists of three Amazon API Gateway endpoints that invoke AWS Lambda functions. At boot, the PyPortal calls the FetchState function to access the lock state from Parameter Store in AWS Systems Manager. For example, if the returned state is:

{ "locked": True, "code": "1234" }

the PyPortal leaves the relay open so that the solenoid lock remains locked. Once the matching “1234” code is entered, the relay circuit is closed and the solenoid lock is opened. When unlocked the PyPortal calls the UpdateState function to update the state to:

{ "locked": False, "code": "" }

In an unlocked state, the PyPortal requests a ten-digit phone number to be entered in order to lock. The SendCode function is called with the phone number so that it can generate a random four-digit code. A message is then sent to the recipient using Amazon Pinpoint, and the Parameter Store state is updated to “locked”. The state is returned in the response and the PyPortal opens the relay again and stores the unlock code locally.

Before deploying the backend, create an Amazon Pinpoint Project and request a long code. A long code is a dedicated phone number required for sending SMS.

  1. Navigate to the Amazon Pinpoint console.
  2. Ensure that you are in a Region where Amazon Pinpoint is supported. For the most up-to-date list, see AWS Service Endpoints.
  3. Choose Create Project.
  4. Name your project and choose Create.
  5. Choose Configure under SMS and Voice.
  6. Select Enable the SMS channel for this project and choose Save changes.

  7. Under Settings, SMS and Voice, choose Request long codes.
  8. Enter the target country and select Transactional for Default call type. Choose Request long codes. This incurs a monthly cost of one dollar and can be canceled anytime. For a breakdown of costs, check out current pricing.
  9. Under Settings, General settings, make a note of the Project ID.

I use the AWS Serverless Application Model (AWS SAM) to create the backend template. While it can be deployed using the AWS SAM CLI, you can also deploy from the AWS Management Console:

  1. Navigate to the aws-serverless-pyportal-lock application in the AWS Serverless Application Repository.
  2. Under Application settings, fill the parameters PinpointApplicationID and LockboxCustomMessage.
  3. Choose Deploy.
  4. Once complete, choose View CloudFormation Stack.
  5. Select the Outputs tab and make a note of the LockboxBaseApiUrl. This is required for configuring the PyPortal.
  6. Navigate to the URL listed as LockboxApiKey in the Outputs tab.
  7. Choose Show to reveal the API key. Make a note of this. This is required for authenticating requests from the PyPortal to the backend.

PyPortal setup

The following instructions walk through installing the latest version of the Adafruit CircuitPython libraries and firmware.

  1. Follow these instructions from Adafruit to install the latest version of the CircuitPython bootloader. At the time of writing, the latest version is 5.3.0.
  2. Follow these instructions to install the latest Adafruit CircuitPython library bundle. I use bundle version 5.x.
  3. Optionally install the Mu Editor, a multi-platform code editor and serial debugger compatible with Adafruit CircuitPython boards. This can help with troubleshooting issues.

Wiring

Electronic solenoid locks come in varying shapes, sizes, and voltages. Choose one that works for your needs and wire it according to the following instructions for the PyPortal.

  1. Gather the PyPortal, a solenoid lock, relay module, JST connectors, jumper wire, and a power source that matches the solenoid being used. For this project, a six-volt solenoid is used with a four AA battery holder.
  2. Wire the system following this diagram.
  3. Splice female jumper wires to the exposed leads of a JST connector to connect the relay module.
  4. Insert the JST connector end to the port labeled D4 on the PyPortal.
  5. Power the PyPortal using USB or by feeding a five-volt supply to the port labeled D3.

Code PyPortal

As with regular Python, CircuitPython does not need to be compiled to execute. You can flash new firmware on the PyPortal by copying a Python file and necessary assets to a mounted volume. The bootloader runs code.py anytime the device starts or any files are updated.

  1. Use a USB cable to plug the PyPortal into your computer and wait until a new mounted volume CIRCUITPY is available.
  2. Download the project from GitHub. Inside the project, copy the contents of /circuit-python on to the CIRCUITPY volume.
  3. Inside the volume, open and edit the secrets.py file. Include your Wi-Fi credentials along with the LockboxApiKey and LockboxBaseApiURL API Gateway endpoint. These can be found under Outputs in the AWS CloudFormation stack created by the AWS Serverless Application Repository.
  4. Save the file, and the device restarts. It takes a moment to connect to Wi-Fi and make the first request to the FetchState function.
  5. Test the system works by entering in a phone number when prompted. An SMS message with the unlock code is sent to the provided number.
  6. Mount the system to the desired door or container, such as a 3D printed safe (files included in the GitHub project).

    Optionally, if you installed the Mu Editor, you can choose "Serial" to follow along with the device log.


Understanding the code

See circuit-python/code.py from the GitHub project; this is the main code for the PyPortal. When the PyPortal connects to Wi-Fi, the first thing it does is make a GET request to the API Gateway endpoint for the FetchState function.

def getState():
    endpoint = secrets['base-api'] + "/state"
    headers = {"x-api-key": secrets['x-api-key']}
    response = wifi.get(endpoint, headers=headers, timeout=30)
    handleState(response.json())
    response.close()

The FetchState Lambda function code, written in Python, gets the state from the Parameter Store and returns it in the response to the PyPortal.

import os
import json
import boto3

client = boto3.client('ssm')
parameterName = os.environ.get('PARAMETER_NAME')

def lambda_handler(event, context):
    response = client.get_parameter(
        Name=parameterName,
        WithDecryption=False
    )

    state = json.loads(response['Parameter']['Value'])

    return {
        "statusCode": 200,
        "body": json.dumps(state)
    }

The getState function in the CircuitPython code passes the returned state to the handleState function, which determines whether to physically lock or unlock the device.

def handleState(newState):
    print(state)
    state['code'] = newState['code']
    state['locked'] = newState['locked']
    print(state)
    if state['locked'] == True:
        lock()
    if state['locked'] == False:
        unlock()

When the device is unlocked, and a phone number is entered to lock the device, the CircuitPython command function is called.

def command(action, num):
    if action == "unlock":
        if num == state["code"]:
            unlock()
        else:
            number_label.text = "Wrong code!"
            playBeep()
    if action == "lock":
        if validate(num) == True:
            data = sendCode(num)
            handleState(data)

The CircuitPython sendCode function makes a POST request with the entered phone number to the API Gateway endpoint for the SendCode Lambda function.

def sendCode(num):
    endpoint = secrets['base-api'] + "/lock"
    headers = {"x-api-key": secrets['x-api-key']}
    data = { "number": num }
    response = wifi.post(endpoint, json=data, headers=headers, timeout=30)
    data = response.json()
    print("Code received: ", data)
    response.close()
    return data

This Lambda function generates a random four-digit number and adds it to the custom message stored as an environment variable. It then sends a text message to the provided phone number using Amazon Pinpoint, and saves the new state in the Parameter Store. The new state is returned in the response and is used by the handleState function in the CircuitPython code.

import os
import json
import boto3
import random

pinpoint = boto3.client('pinpoint')
ssm = boto3.client('ssm')

applicationId = os.environ.get('APPLICATION_ID')
parameterName = os.environ.get('PARAMETER_NAME')
message = os.environ.get('MESSAGE')

def lambda_handler(event, context):
    print(event)
    body = json.loads(event['body'])

    number = "+1" + str(body['number'])
    code = str(random.randint(1111,9999))

    addresses = {}
    addresses[number] = {'ChannelType': 'SMS'}
    pinpoint.send_messages(
        ApplicationId=applicationId,
        MessageRequest={
            'Addresses': addresses,
            'MessageConfiguration': {
                'SMSMessage': {
                    'Body': message + code,
                    'MessageType': 'TRANSACTIONAL'
                }
            }
        }
    )

    state = { "locked": True, "code": code }

    response = ssm.put_parameter(
        Name=parameterName,
        Value=json.dumps(state),
        Type='String',
        Overwrite=True
    )

    return {
        "statusCode": 200,
        "body": json.dumps(state)
    }

Entering the correct unlock code from the SMS message calls the unlock function. The unlock function closes the relay circuit to open the solenoid lock. It plays a beep sound and then calls the updateState function, which makes a POST request to the API Gateway endpoint for the UpdateState Lambda function.

def updateState(newState):
    endpoint = secrets['base-api'] + "/state"
    headers = {"x-api-key": secrets['x-api-key']}
    response = wifi.post(endpoint, json=newState, headers=headers, timeout=30)
    data = response.json()
    print("Updated state to: ", data)
    response.close()
    return data

def unlock():
    print("Unlocked!")
    number_label.text = "Enter Phone# to Lock"
    time.sleep(1)
    btn = find_button("Unlock")
    if btn is not None:
        btn.selected = True
        btn.label = "Lock"
    lock_relay.value = True
    playBeep()
    updateState({"locked": False, "code": ""})

The UpdateState Lambda function updates the Parameter Store whenever the state is changed. When the PyPortal loses power or restarts, the last known state is fetched, preventing a false locked/unlocked state.

import os
import json
import boto3

client = boto3.client('ssm')
parameterName = os.environ.get('PARAMETER_NAME')

def lambda_handler(event, context):
    state = json.loads(event['body'])

    response = client.put_parameter(
        Name=parameterName,
        Value=json.dumps(state),
        Type='String',
        Overwrite=True
    )

    return {
        "statusCode": 200,
        "body": json.dumps(state)
    }

Conclusion

I show how to build an electronic keypad lock system using a basic relay circuit and a microcontroller. The system is managed by a serverless backend API deployed using the AWS Serverless Application Repository. The backend uses API Gateway to provide a REST API for Lambda functions that handle fetching lock state, updating lock state, and sending a random four-digit code via SMS using Amazon Pinpoint. Language consistency is achieved by using CircuitPython on the PyPortal and Python 3.8 in the Lambda function code.

Use this project as a template to build out any solution that requires secure physical access control. It can be embedded in cabinet drawers to protect documents or can be used with a door solenoid to control room access. Try combining it with a serverless geohashing app to develop a treasure hunting experience. Explore how to further modify the serverless application in the GitHub project by learning about the AWS Serverless Application Model. Read my previous guide to learn how you can add voice to a CircuitPython project on a PyPortal.


Adding voice to a CircuitPython project using Amazon Polly

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/adding-voice-to-a-circuitpython-project-using-amazon-polly/

An Adafruit PyPortal displaying a quote while synthesizing and playing speech using Amazon Polly.

As a natural means of communication, voice is a powerful way to humanize an experience. What if you could make anything talk? This guide walks through how to leverage the cloud to add voice to an off-the-shelf microcontroller. Use it to develop more advanced ideas, like a talking toaster that encourages healthy breakfast habits or a house plant that can express its needs.

This project uses an Adafruit PyPortal, an open-source IoT touch display programmed using CircuitPython, a lightweight version of Python that works on embedded hardware. You copy your code to the PyPortal like you would to a thumb drive and it runs. Random quotes from the PaperQuotes API are periodically displayed on the PyPortal LCD.

A microcontroller can’t do speech synthesis on its own, so I use Amazon Polly, a natural-sounding text-to-speech synthesis service, to generate audio. Adding speech also extends accessibility to the visually impaired. This project includes an example for requesting arbitrary speech in addition to random quotes. Use this example to add a voice to any CircuitPython project.

An Adafruit PyPortal, an external speaker, and a microSD card.

I deploy the backend to the AWS Cloud using the AWS Serverless Application Repository. The code on the PyPortal makes a REST call to the backend to fetch a quote and synthesize speech audio for playback on the device.

Prerequisites

You need the following to complete the project:

Deploy the backend application

An architecture diagram of the serverless backend when requesting speech synthesis of a text string.

The serverless backend consists of an Amazon API Gateway endpoint that invokes an AWS Lambda function. If called with a JSON object containing text and voiceId attributes, it uses Amazon Polly to synthesize speech and uploads an MP3 file as a public object to Amazon S3. Upon completion, it returns the URL for downloading the audio file. It also processes the submitted text and adds return lines so that it can appear text-wrapped when displayed on the PyPortal. For a full list of voices, see the Amazon Polly documentation. An example response:
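
The field names below match those used by the CircuitPython code later in this post; the values are illustrative. Note the return lines embedded in the text for wrapping on the PyPortal display:

{
  "url": "https://<bucket-name>.s3.amazonaws.com/<file>.mp3",
  "text": "It is not the mountain\nwe conquer, but ourselves."
}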

To fetch quotes instead of a text field, call the endpoint with a comma-separated list of tags as shown in the following diagram. The Lambda function then calls the PaperQuotes API. It fetches up to 50 quotes per tag and selects a random one to synthesize as speech. As with arbitrary text, it returns a URL and a text-wrapped representation of the quote.

An architecture diagram of the serverless backend when requesting a random quote from the PaperQuotes API to synthesize as speech.

I use the AWS Serverless Application Model (AWS SAM) to create the backend template. While it can be deployed using the AWS SAM CLI, you can also deploy from the AWS Management Console:

  1. Generate a free PaperQuotes API key at paperquotes.com. The serverless backend requires this to fetch quotes.
  2. Navigate to the aws-serverless-pyportal-polly application in the AWS Serverless Application Repository.
  3. Under Application settings, enter the parameter, PaperQuotesAPIKey.
  4. Choose Deploy.
  5. Once complete, choose View CloudFormation Stack.
  6. Select the Outputs tab and make a note of the SpeechApiUrl. This is required for configuring the PyPortal.
  7. Choose the link listed for SpeechApiKey in the Outputs tab.
  8. Choose Show to reveal the API key. Make a note of this. This is required for authenticating requests from the PyPortal to the SpeechApiUrl.

PyPortal setup

The following instructions walk through installing the latest version of the Adafruit CircuitPython libraries and firmware. They also show how to enable an external speaker module.

  1. Follow these instructions from Adafruit to install the latest version of the CircuitPython bootloader. At the time of writing, the latest version is 5.3.0.
  2. Follow these instructions to install the latest Adafruit CircuitPython library bundle. I use bundle version 5.x.
  3. Insert the microSD card in the slot located on the back of the device.
  4. Cut the jumper pad on the back of the device labeled A0. This enables you to use an external speaker instead of the built-in speaker.
  5. Plug the external speaker connector into the port labeled SPEAKER on the back of the device.
  6. Optionally install the Mu Editor, a multi-platform code editor and serial debugger compatible with Adafruit CircuitPython boards. This can help with troubleshooting issues.
  7. Optionally if you have a 3D printer at home, you can print a case for your PyPortal. This can protect and showcase your project.

Code PyPortal

As with regular Python, CircuitPython does not need to be compiled to execute. You can flash new firmware on the PyPortal by copying a Python file and necessary assets to a mounted volume. The bootloader runs code.py anytime the device starts or any files are updated.

  1. Use a USB cable to plug the PyPortal into your computer and wait until a new mounted volume CIRCUITPY is available.
  2. Download the project from GitHub. Inside the project, copy the contents of /circuit-python on to the CIRCUITPY volume.
  3. Inside the volume, open and edit the secrets.py file. Include your Wi-Fi credentials along with the SpeechApiKey and SpeechApiUrl API Gateway endpoint. These can be found under Outputs in the AWS CloudFormation stack created by the AWS Serverless Application Repository.
  4. Save the file, and the device restarts. It takes a moment to connect to Wi-Fi and make the first request.
    Optionally, if you installed the Mu Editor, you can choose "Serial" to follow along with the device log.

The PyPortal takes a few moments to connect to the Wi-Fi network and make its first request. On success, you hear it greet you and describe itself. The default interval is set to then display and read a quote every five minutes.

Understanding the CircuitPython code

See the bottom of circuit-python/code.py from the GitHub project. When the PyPortal connects to Wi-Fi, the first thing it does is synthesize an arbitrary “hello world” text for display. It then begins periodically displaying and “speaking” quotes.

# Connect to WiFi
print("Connecting to WiFi...")
wifi.connect()
print("Connected!")

displayQuote("Ready!")

speakText('Hello world! I am an Adafruit PyPortal running Circuit Python speaking to you using AWS Serverless', 'Joanna')

while True:
    speakQuote('equality, humanity', 'Joanna')
    time.sleep(60*secrets['interval'])

Both the speakText and speakQuote functions call the synthesizeSpeech function. The difference is whether text or tags are passed to the API.

def speakText(text, voice):
    data = { "text": text, "voiceId": voice }
    synthesizeSpeech(data)

def speakQuote(tags, voice):
    data = { "tags": tags, "voiceId": voice }
    synthesizeSpeech(data)

The synthesizeSpeech function posts the data to the API Gateway endpoint, which invokes the Lambda function and returns the MP3 URL and the formatted text. The downloadfile function is called to fetch the MP3 file and store it on the SD card. displayQuote is called to display the quote on the LCD. Finally, playMP3 opens the file and plays the speech audio using the built-in or external speaker.

def synthesizeSpeech(data):
    response = postToAPI(secrets['endpoint'], data)
    downloadfile(response['url'], '/sd/cache.mp3')
    displayQuote(response['text'])
    playMP3("/sd/cache.mp3")
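
The downloadfile helper is not shown above; a sketch of it streams the MP3 to the SD card in small chunks rather than buffering it in the PyPortal's limited RAM. This assumes the adafruit_requests-style wifi wrapper used elsewhere in the code:

def downloadfile(url, path):
    # stream the audio to the SD card in small chunks to conserve RAM
    response = wifi.get(url)
    with open(path, "wb") as file:
        for chunk in response.iter_content(chunk_size=512):
            file.write(chunk)
    response.close()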

Modifying the Lambda function

The serverless application includes a Lambda function, SynthesizeSpeechFunction, which can be modified directly in the Lambda console. The AWS SAM template used to deploy the AWS Serverless Application Repository application adds policies for accessing the S3 bucket where audio is stored, and grants access to Amazon Polly for synthesizing speech. It also adds the PaperQuotes API token as an environment variable and sets API Gateway as an event source.

SynthesizeSpeechFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: lambda_functions/SynthesizeSpeech/
      Handler: app.lambda_handler
      Runtime: python3.8
      Policies:
        - S3FullAccessPolicy:
            BucketName: !Sub "${AWS::StackName}-audio"
        - Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - polly:*
              Resource: '*'
      Environment:
        Variables:
          BUCKET_NAME: !Sub "${AWS::StackName}-audio"
          PAPER_QUOTES_TOKEN: !Ref PaperQuotesAPIKey
      Events:
        Speech:
          Type: Api
          Properties:
            RestApiId: !Ref SpeechApi
            Path: /speech
            Method: post

To edit the Lambda function, navigate back to the CloudFormation stack and choose the SynthesizeSpeechFunction under the Resources tab.

From here, you can edit the Lambda function code directly. Clicking Save deploys the new code.

The getQuotes function is called to fetch quotes from the PaperQuotes API. You can change this to call from a different source, such as a custom selection of quotes. Try modifying it to fetch social media posts or study questions.
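
For example, a drop-in replacement that skips the PaperQuotes API entirely might look like the following. The signature and quote list here are illustrative, not the project's actual code:

import random

# a custom selection of quotes instead of calling the PaperQuotes API
QUOTES = [
    "Stay hungry, stay foolish.",
    "Simplicity is the soul of efficiency.",
    "Make it work, make it right, make it fast.",
]

def getQuotes(tags):
    # tags are ignored in this variant; return one quote at random
    return random.choice(QUOTES)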

Conclusion

I show how to add natural sounding text to speech on a microcontroller using a serverless backend. This is accomplished by deploying an application through the AWS Serverless Application Repository. The deployed API uses API Gateway to securely invoke a Lambda function that fetches quotes from the PaperQuotes API and generates speech using Amazon Polly. The speech audio is uploaded to S3.

I then show how to program a microcontroller, the Adafruit PyPortal, using CircuitPython. The code periodically calls the serverless API to fetch a quote and to download speech audio for playback. The sample code also demonstrates synthesizing arbitrary text to speech, meaning it can be used for any project you can conceive. Check out my previous guide on using the PyPortal to create a Martian weather display for inspiration.

Build a serverless Martian weather display with CircuitPython and AWS Lambda

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/build-a-serverless-martian-weather-display-with-circuitpython-and-aws-lambda/

Build a standalone digital weather display of Mars showing the latest images from the Mars Curiosity Rover.

This project uses an Adafruit PyPortal, an open-source IoT touch display. Traditionally, a microcontroller is programmed with firmware compiled using various specific toolchains. Fortunately, the PyPortal is programmed using CircuitPython, a lightweight version of Python that works on embedded hardware. You just copy your code to the PyPortal like you would to a thumb drive and it runs.

I deploy the backend, the part in the cloud that does all the heavy lifting, using the AWS Serverless Application Repository (SAR). The code on the PyPortal makes a REST call to the backend, which handles the requests to the NASA Mars Rover Photos API and InSight: Mars Weather Service API. The backend converts and resizes the image before returning the information to the PyPortal for display.

An Adafruit PyPortal displaying the latest images from the Mars Curiosity Rover and weather data from InSight Mars Lander.

Prerequisites

You need the following to complete the project:

Deploy the backend application

An architecture diagram of the serverless backend.

Using a serverless backend reduces the load on the PyPortal. The PyPortal makes a call to the backend API and receives a small JSON object with the relevant data. This allows you to change the logic of where and how to get the image and weather data without needing physical access to the device.

The backend API consists of an AWS Lambda function, written in Python, behind an Amazon API Gateway endpoint. When invoked, the FetchMarsData function makes requests to two separate NASA APIs. First it fetches the latest images from the Mars Curiosity Rover, typically from the previous day, and picks one at random. It resizes and converts the image to bitmap format before uploading to Amazon S3 with public read permissions. The PyPortal downloads the image from S3 later.
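
As a rough sketch, assuming Pillow and boto3 are available in the function's package, the resize-and-upload step might look like the following. The function name resize_image matches the handler shown later, but the target size, key, and URL format here are illustrative assumptions; see src/app.py in the project for the actual implementation.

import io
import os

import boto3
from PIL import Image  # Pillow

s3 = boto3.client("s3")

def resize_image(img_bytes, key="mars.bmp"):
    # Resize a rover photo to the PyPortal's 320x240 display and
    # upload it to S3 as a bitmap with public read permissions.
    bucket = os.environ["BUCKET_NAME"]  # assumed environment variable
    img = Image.open(io.BytesIO(img_bytes))
    img = img.resize((320, 240))
    buffer = io.BytesIO()
    img.save(buffer, format="BMP")
    buffer.seek(0)
    s3.put_object(Bucket=bucket, Key=key, Body=buffer, ACL="public-read")
    return "https://{}.s3.amazonaws.com/{}".format(bucket, key)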

The function then calls the InSight: Mars Weather Service API. It retrieves the average air temperature, wind speed, pressure, season, solar day (sol), as well as the first and last timestamp of daily sampling. The API returns these values and the S3 image URL as a JSON object.

I use the AWS Serverless Application Model (SAM) to create the backend. While it can be deployed using the AWS SAM CLI, you can also deploy from the AWS Management Console:

  1. Generate a free NASA API key at api.nasa.gov. This is required to gain access to the NASA data APIs.
  2. Navigate to the aws-serverless-pyportal-mars-weather-display application in the Serverless Application Repository.
  3. Choose Deploy.
  4. On the next page, under Application Settings, enter the parameter, NasaApiKey.

  5. Once complete, choose View CloudFormation Stack.

  6. Select the Outputs tab and make a note of the MarsApiUrl. This is required for configuring the PyPortal.

  7. Navigate to the MarsApiKey URL listed in the Outputs tab.

  8. Click Show to reveal the API key. Make a note of this. This is required for authenticating requests from the PyPortal to the MarsApiUrl.

PyPortal setup

  1. Follow these instructions from Adafruit to install the latest version of the CircuitPython bootloader. At the time of writing, the latest version is 5.2.0.
  2. Follow these instructions to install the latest Adafruit CircuitPython library bundle. I use bundle version 5.x.
  3. Insert the microSD card in the slot located on the back of the device.
  4. Optionally install the Mu Editor, a multi-platform code editor and serial debugger compatible with Adafruit CircuitPython boards. This can help if you need to troubleshoot issues.
  5. Optionally if you have a 3D printer at home, you can print a case for your PyPortal. This can protect your project while also being a great way to display it on a desk.

Code PyPortal

As with regular Python, CircuitPython does not need to be compiled to execute. Flashing new firmware on the PyPortal is as simple as copying a Python file and necessary assets over to a mounted volume. The bootloader runs code.py anytime the device starts or any files are updated.

  1. Use a USB cable to plug the PyPortal into your computer and wait until a new mounted volume CIRCUITPY is available.
  2. Download the project from GitHub. Inside the project, copy the contents of /circuit-python on to the CIRCUITPY volume.
  3. Inside the volume, open and edit the secrets.py file. Include your Wi-Fi credentials along with the MarsApiKey and MarsApiUrl API Gateway endpoint, which can be found under Outputs in the AWS CloudFormation stack created by the Serverless Application Repository.
  4. Save the file, and the device restarts. It takes a moment to connect to Wi-Fi and make the first request.
    Optionally, if you installed the Mu Editor, you can click on “Serial” to follow the device log.

    An animated gif of the PyPortal device displaying a Mars rover image and Mars weather data.

Understanding how CircuitPython calls API Gateway

The main CircuitPython file is code.py. At the end of the file, the while loop periodically performs the operations necessary to display the photos from the Curiosity Rover and the InSight Mars lander weather data.

while True:
    data = callAPIEndpoint(secrets['mars_api_url'])
    downloadImage(data['image_url'])
    showDisplay(data['insight'], displayTime=60*interval_minutes)

First, it calls the API Gateway endpoint using the URL from the secrets.py file, and passes the returned JSON to helper functions. The callAPIEndpoint(url) function passes the MarsApiKey in the header and a timeout of 30 seconds to the wifi.get() method. The timeout is required for integrations with services like Lambda and API Gateway. Remember, the CircuitPython code is running on a microcontroller and sometimes must wait longer when making requests.

def callAPIEndpoint(mars_api_url):
    headers = {"x-api-key": secrets['mars_api_key']}
    response = wifi.get(mars_api_url, headers=headers, timeout=30)
    data = response.json()
    print("JSON Response: ", data)
    response.close()
    return data

The JSON object that is received by the PyPortal is defined in the handler of the Lambda function. In the GitHub project downloaded earlier, see src/app.py.

def lambda_handler(event, context):
    url = fetchRoverImage()
    imgData = fetchImageData(url)
    image_s3_url = resize_image(imgData)
    weatherData = getMarsInsightWeather()

    return {
        "statusCode": 200,
        "body": json.dumps({
            "image_url": image_s3_url,
            "insight": weatherData
        })
    }

Similar to the CircuitPython code, this uses helper functions to perform all the various operations needed to retrieve and craft the data. At completion, the returned JSON is passed as the response to the PyPortal.

A quick way to add a new property is to edit the Lambda function directly through the AWS Lambda console. Here, a key “hello” is added with a value “world”.
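
A sketch of the modified handler in src/app.py follows; only the "hello" entry is new, and the helper functions are those already defined in the file.

import json

def lambda_handler(event, context):
    url = fetchRoverImage()
    imgData = fetchImageData(url)
    image_s3_url = resize_image(imgData)
    weatherData = getMarsInsightWeather()

    return {
        "statusCode": 200,
        "body": json.dumps({
            "image_url": image_s3_url,
            "insight": weatherData,
            "hello": "world"  # new key added for this example
        })
    }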

In the CircuitPython code.py file, the key is now available in the JSON response from API Gateway. The following prints the key value, which can be seen using the Mu Editor Serial debugger.

data = callAPIEndpoint(secrets['mars_api_url'])

print(data['hello'])

The Lambda function is packaged with the AWS Python SDK, boto3, which provides methods for interacting with a variety of AWS services. The Python Requests library is also included to make calls to the NASA APIs. Try exploring how to incorporate other services or APIs into your project. To understand how to modify the visual display on the PyPortal itself, see the displayio guide from Adafruit.
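
As a sketch of that pattern, the following shows how the InSight weather request might look using the Requests library. The endpoint and query parameters follow NASA's public API documentation, but the response parsing below is an illustrative assumption.

import os

import requests

def getMarsInsightWeather():
    # Fetch the latest InSight weather report and return the data
    # for the most recent sol. NASA_API_KEY is assumed to be set
    # as an environment variable.
    resp = requests.get(
        "https://api.nasa.gov/insight_weather/",
        params={
            "api_key": os.environ["NASA_API_KEY"],
            "feedtype": "json",
            "ver": "1.0",
        },
        timeout=10,
    )
    data = resp.json()
    latest_sol = data["sol_keys"][-1]
    return data[latest_sol]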

Conclusion

I show how to build a “live” Martian weather display using an Adafruit PyPortal, CircuitPython, and AWS Serverless technologies. Whether this is your first time using hardware or a serverless backend in the AWS Cloud, this project is simplified by the use of CircuitPython and the Serverless Application Model.

I also show how to make a request to API Gateway from the PyPortal. I then craft a response in Lambda for the PyPortal. Since both use variants of the Python programming language, much of the syntax stays the same.

To learn more, explore other devices supported by CircuitPython and the variety of community contributed libraries. Combined with the breadth of AWS services, you can push the boundaries of creativity.

ICYMI: Serverless Q1 2020

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/icymi-serverless-q1-2020/

Welcome to the ninth edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

A calendar of January, February, and March.

In case you missed our last ICYMI, check out what happened last quarter here.

Launches/New products

In 2018, we launched the AWS Well-Architected Tool. This allows you to review workloads in a structured way based on the AWS Well-Architected Framework. Until now, we’ve provided workload-specific advice using the concept of a “lens.”

As of February, this tool now lets you apply those lenses to provide greater visibility in specific technology domains to assess risks and find areas for improvement. Serverless is the first available lens.

You can apply a lens when defining a workload in the Well-Architected Tool console.

A screenshot of applying a lens.

HTTP APIs beta was announced at AWS re:Invent 2019. Now HTTP APIs is generally available (GA) with more features to help developers build APIs better, faster, and at lower cost. HTTP APIs for Amazon API Gateway is built from the ground up based on lessons learned from building REST and WebSocket APIs, and looking closely at customer feedback.

For the majority of use cases, HTTP APIs offers up to a 60% reduction in latency.

HTTP APIs costs at least 71% less than API Gateway REST APIs.

A bar chart showing the cost comparison between HTTP APIs and API Gateway.

HTTP APIs also offers a more intuitive experience and powerful features, like easily configuring cross-origin resource sharing (CORS), JWT authorizers, auto-deploying stages, and simplified route integrations.

AWS Lambda

You can now view and monitor the number of concurrent executions of your AWS Lambda functions by version and alias. Previously, the ConcurrentExecutions metric measured and emitted the sum of concurrent executions for all functions in the account, including those that had a reserved concurrency limit specified.

Now, the ConcurrentExecutions metric is emitted for all functions, versions, and aliases. This can be used to see which functions consume your concurrency limits and to estimate peak traffic based on consumption averages. Fine-grained visibility in these areas can help plan appropriate configuration for Provisioned Concurrency, as in the sketch below.
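
For example, a boto3 query like this hedged sketch retrieves the peak concurrency for one function version over the past hour; the function name and version are placeholders.

from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

# Peak concurrent executions for a specific function version over
# the past hour. "my-function" and version "1" are placeholders.
now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="ConcurrentExecutions",
    Dimensions=[
        {"Name": "FunctionName", "Value": "my-function"},
        {"Name": "Resource", "Value": "my-function:1"},
    ],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=60,
    Statistics=["Maximum"],
)
datapoints = stats["Datapoints"]
print(max((p["Maximum"] for p in datapoints), default=0))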

A Lambda function written in Ruby 2.7.

AWS Lambda now supports Ruby 2.7. Developers can take advantage of new features in this latest release of Ruby, like pattern matching, argument forwarding, and numbered arguments. Lambda functions written in Ruby 2.7 run on Amazon Linux 2.

Updated AWS Mock .NET Lambda Test Tool

.NET Core 3.1 is now a supported runtime in AWS Lambda. You can deploy to Lambda by setting the runtime parameter value to dotnetcore3.1. Updates have also been released for the AWS Toolkit for Visual Studio and the .NET Core Global Tool Amazon.Lambda.Tools. These make it easier to build and deploy your .NET Core 3.1 Lambda functions.

With .NET Core 3.1, you can take advantage of all the new features it brings to Lambda, including C# 8.0 and F# 4.7 support, .NET Standard 2.1 support, a new JSON serializer, and a ReadyToRun feature for ahead-of-time compilation. The AWS Mock .NET Lambda Test Tool has also been updated to support .NET Core 3.1 with new features to help debug and improve your workloads.

Cost Savings

Last year we announced Savings Plans for AWS Compute Services. This is a flexible discount model provided in exchange for a commitment of compute usage over a period of one or three years. AWS Lambda now participates in Compute Savings Plans, allowing customers to save money. Visit the AWS Cost Explorer to get started.

Amazon API Gateway

With the HTTP APIs launched in GA, customers can build APIs for services behind private ALBs, private NLBs, and IP-based services registered in AWS Cloud Map such as ECS tasks. To make it easier for customers to work between API Gateway REST APIs and HTTP APIs, customers can now use the same custom domain across both REST APIs and HTTP APIs. In addition, this release also enables customers to perform granular throttling for routes, improved usability when using Lambda as a backend, and better error logging.

AWS Step Functions

AWS Step Functions VS Code plugin.

We launched the AWS Toolkit for Visual Studio Code back in 2019 and last month we added toolkit support for AWS Step Functions. This enables you to define, visualize, and create workflows without leaving VS Code. As you craft your state machine, it is continuously rendered with helpful tools for debugging. The toolkit also allows you to update state machines in the AWS Cloud with ease.

To further help with debugging, we’ve added AWS Step Functions support for CloudWatch Logs. For standard workflows, you can select different levels of logging and can exclude logging of a workflow’s payload. This makes it easier to monitor event-driven serverless workflows and create metrics and alerts.

AWS Amplify

AWS Amplify is a framework for building modern applications, with a toolchain for easily adding services like authentication, storage, APIs, hosting, and more, all via command line interface.

Customers can now use the Amplify CLI to take advantage of AWS Amplify console features like continuous deployment, instant cache invalidation, custom redirects, and simple configuration of custom domains. This means you can do end-to-end development and deployment of a web application entirely from the command line.

Amazon DynamoDB

You can now easily extend your existing Amazon DynamoDB tables to additional AWS Regions, without table rebuilds, by updating to the latest version of global tables. You can benefit from improved replicated write efficiencies without any additional cost.

On-demand capacity mode is now available in the Asia Pacific (Osaka-Local) Region. This is a flexible capacity mode for DynamoDB that can serve thousands of requests per second without requiring capacity planning. DynamoDB on-demand offers simple pay-per-request pricing for read and write requests so that you only pay for what you use, making it easy to balance cost and performance.

AWS Serverless Application Repository

The AWS Serverless Application Repository (SAR) is a service for packaging and sharing serverless application templates using the AWS Serverless Application Model (SAM). Applications can be customized with parameters and deployed with ease. Previously, applications could only be shared publicly or with specific AWS account IDs. Now, SAR has added sharing for AWS Organizations. These new granular permissions can be added to existing SAR applications. Learn how to take advantage of this feature today to help improve your organization’s productivity.

Amazon Cognito

Amazon Cognito, a service for managing identity providers and users, now supports CloudWatch Usage Metrics. This allows you to monitor events in near-real time, such as sign-in and sign-out. These can be turned into metrics or CloudWatch alarms at no additional cost.

Cognito User Pools now supports logging for all API calls with AWS CloudTrail. The enhanced CloudTrail logging improves governance, compliance, and operational and risk auditing capabilities. Additionally, Cognito User Pools now enables customers to configure case sensitivity settings for user aliases, including native user name, email alias, and preferred user name alias.

Serverless posts

Our team is always working to build and write content to help our customers better understand all our serverless offerings. Here is a list of the latest posts published to the AWS Compute Blog this quarter.

January

February

March

Tech Talks and events

We hold AWS Online Tech Talks covering serverless topics throughout the year. You can find these in the serverless section of the AWS Online Tech Talks page. We also delivered talks at conferences and events around the globe, regularly join in on podcasts, and record short videos so you can learn in quick, byte-sized chunks.

Here are the highlights from Q1.

January

February

March

Live streams

Rob Sutter, a Senior Developer Advocate on AWS Serverless, has started hosting Serverless Office Hours every Tuesday at 14:00 ET on Twitch. He imparts his wisdom on Step Functions, Lambda, and Golang, and takes questions on all things serverless.

Check out some past sessions:

Happy Little APIs Season 2 is airing every other Tuesday on the AWS Twitch Channel. Check out the first episode, where Eric Johnson and Ran Ribenzaft, Serverless Hero and CTO of Epsagon, talk about private integrations with HTTP API.

Eric Johnson is also streaming “Sessions with SAM” every Thursday at 10AM PST. Each week Eric shows how to use SAM to solve different problems with serverless and how to leverage SAM templates to build out powerful serverless applications. Catch up on the last few episodes on our Twitch channel.

Relax with a cup of your favorite morning beverage every Friday at 12PM EST with a Serverless Coffee Break with James Beswick. These are chats about all things serverless with special guests. You can catch these live on Twitter or on your own time with these recordings.

AWS Serverless Heroes

This year, we’ve added some new faces to the list of AWS Serverless Heroes. The AWS Hero program is a selection of worldwide experts that have been recognized for their positive impact within the community. They share helpful knowledge and organize events and user groups. They’re also contributors to numerous open-source projects in and around serverless technologies.

Still looking for more?

The Serverless landing page has even more information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and Getting Started tutorials.

Building a Raspberry Pi telepresence robot using serverless: Part 2

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-a-raspberry-pi-telepresence-robot-using-serverless-part-2/

The deployed web frontend and the robot it controls.

In a previous post, I show how to build a telepresence robot using serverless technologies and a Raspberry Pi. The result is a robot that transmits live video using Amazon Kinesis Video Streams with WebRTC. It can be driven remotely via an AWS Lambda function using an Amazon API Gateway REST endpoint.

This post walks through deploying a web interface to view the live stream and control the robot. The application is built using AWS Amplify and Vue.js. Amplify is a development framework that makes it easy to add authentication, hosting, and other AWS resources. It also provides a pipeline for deploying web applications.

I use the Amplify Command Line Interface (CLI) to create an authentication flow for user sign-in using Amazon Cognito. I then show how to set up an authorizer in API Gateway so that only authenticated users can drive the robot. An AWS Identity and Access Management (IAM) role sets permissions so users can access Kinesis Video Streams to view the live video feed. The web application is then configured and run locally for testing. Finally, using the Amplify CLI, I show how to add hosting and publish a production-ready web application.

Prerequisites

You need the following to complete the project:

Amplify CLI and project setup

An architecture diagram showing the client relationship between the AWS resources deployed by Amplify.

The Amplify CLI allows you to create and manage resources on AWS. With the libraries and UI components provided by the Amplify Framework, you can build powerful applications using a variety of cloud services.

The web interface for the telepresence robot is built using Amplify Vue.js components for user registration and sign-in. Download the application and use the Amplify CLI to configure resources for the web application.

To install and configure Amplify on the frontend web application, refer to the project set-up instructions on the GitHub project.

Creating an API Gateway authorizer

In the first guide, API Gateway is used to create a REST endpoint to send commands to the robot. Currently, the endpoint accepts requests without any authentication. To ensure that only authenticated users can control the robot, you must create an authorizer for the API.

The backend resources deployed by the Amplify web application include a Cognito User Pool. This is a user directory that provides sign-up and sign-in services, user profiles, and identity providers. The following instructions demonstrate how to configure an authorizer on API Gateway that verifies access using a user pool.

  1. Navigate to the Amazon API Gateway console.
  2. Choose the API created in the first guide for driving the robot.
  3. Choose Authorizers from the menu.
  4. Choose Create New Authorizer. Choose Cognito for Type and select the user pool created by the Amplify CLI. Set Token Source to Authorization.
  5. Choose Create.
  6. Choose Resources from the menu.
  7. Choose POST, Method Request.
  8. Set Authorization to the newly created authorizer.
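
With the authorizer attached, requests must include a valid Cognito ID token in the Authorization header. The following Python sketch shows what an authenticated test call might look like; the URL and token are placeholders, and in practice the web application obtains and attaches the token automatically through the Amplify libraries.

import requests

# Placeholders: the API's Invoke URL from the first guide and an
# ID token obtained by signing in to the Cognito User Pool.
API_URL = "https://<API-URL>/publish"
ID_TOKEN = "<cognito-id-token>"

response = requests.post(
    API_URL,
    json={"action": "forward"},
    headers={"Authorization": ID_TOKEN},
)
# A 401 response indicates a missing or invalid token now that
# the authorizer is enabled.
print(response.status_code, response.text)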

Adding permissions

The web application loads a component for viewing video from the robot over a WebRTC connection. WebRTC is a protocol for negotiating peer-to-peer data connections by using a signaling channel.

The previous guide configured the robot to use a Kinesis Video Signaling Channel. Users signed into the web application must assume some permissions for Kinesis Video Streams to access the signaling channel.

When the Amplify CLI deploys an authentication flow, it creates a role in IAM. Cognito uses this role to assume permissions for a user pool based on matching conditions.

This trust relationship on the authRole controls when the role’s permissions are assumed: in this case, for a matching “authenticated” user from the identity pool.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "cognito-identity.amazonaws.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": "us-west-2:12345e-9548-4a5a-b44c-12345677"
        },
        "ForAnyValue:StringLike": {
          "cognito-identity.amazonaws.com:amr": "authenticated"
        }
      }
    }
  ]
}

Follow these steps to attach Kinesis Video Streams permissions to the authRole.

  1. Navigate to the IAM console.
  2. Choose Roles from the menu.
  3. Use the search bar to find “authRole”. It is prefixed by the stack name associated with the Amplify deployment. Choose it from the list.
  4. Choose Add inline policy.
  5. Select the JSON tab and paste in the following. In the Resource property, replace <RobotName> with the name of the robot created in the first guide.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "kinesisvideo:GetSignalingChannelEndpoint",
                    "kinesisvideo:ConnectAsMaster",
                    "kinesisvideo:GetIceServerConfig",
                    "kinesisvideo:ConnectAsViewer",
                    "kinesisvideo:DescribeSignalingChannel"
                ],
                "Resource": "arn:aws:kinesisvideo:*:*:channel/<RobotName>/*"
            }
        ]
    }
    
  6. Choose Review Policy.
  7. Choose Create Policy.

Configuring the application

The authorizer allows authenticated users to invoke the Lambda function through API Gateway. The permissions set on the authRole control access to the live video. The web application must know the endpoint for sending commands and the Kinesis Video Signaling Channel to use for the robot.

This information is configured in web-app/src/main.js. It requires a file named config.json to let the application know which endpoint and signaling channel to use.

  1. Inside the application folder aws-serverless-telepresence-robot/web-app/src, create a new file named config.json.
    {
      "endpoint": "",
      "channelARN": ""
    }
  2. Replace endpoint with the Invoke URL of the robot API. This can be found in API Gateway console under Stages, Prod. It can also be found under Outputs in the AWS CloudFormation stack created by the aws-serverless-telepresence-robot serverless application from the first guide.
  3. Replace channelARN with the ARN of your robot’s signaling channel. This can be found in the Amazon Kinesis Video Streams console under Signaling channels.

Running the application

You can build and run the application locally for testing purposes. It still uses the backend deployed in the cloud. Do this before publishing to production:

  1. Inside the web-app directory, run the following command:
    npm run serve
  2. Navigate to the locally hosted application at http://localhost:8080
  3. Follow the onscreen steps to create a new account.
  4. Choose Start Video. If the robot is active, a WebRTC connection is made and live video is displayed.
  5. Use the onscreen arrow buttons to drive the robot.

Deploying a hosted application

Amplify makes it easy to deploy a hosted application. The following commands configure and deploy hosting resources in Amazon S3 and Amazon CloudFront. This allows you to securely and quickly deploy your application for production use.

  1. Inside aws-serverless-telepresence-robot/web-app, run the following. When prompted, select PROD; this configures the application to deploy using S3 and CloudFront.
    amplify add hosting
  2. Finally, this command builds and publishes all the backend and frontend resources for your Amplify project. On completion, it provides a URL to the hosted web application. Note that it can take a while for the CloudFront distribution to deploy.
    amplify publish

Conclusion

In this post, I show how to build a web interface for remotely viewing and controlling the robot. This is done using AWS Amplify, Vue.js, and a previously deployed serverless application.

With a few commands, the Amplify CLI is used to configure backend resources for a web frontend. Cognito is used as an identity provider. An Authorizer is created for an API Gateway endpoint, allowing authenticated users to send commands to the robot from the frontend. An IAM Role with a trusted relationship with the Cognito User Pool is given permissions to use Kinesis Video Signaling Channels, which are passed to the authenticated users. This allows the web frontend to open a live video connection to the telepresence robot using WebRTC.

After running and testing the application locally, I showed how the Amplify CLI can streamline configuring hosting and deploying a production web application using S3 and CloudFront. The result is a custom-built telepresence robot with a web application for viewing and operating it securely, all done without managing servers.

The principles used in this project can be applied towards a variety of use cases. Use this to build out a fleet of remote vehicles to monitor factories or for personal home security. You can create a community for users to experience environments remotely. The interface Vue component can also easily be modified for custom commands sent to the application running on the robot.

Building a Raspberry Pi telepresence robot using serverless: Part 1

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-a-raspberry-pi-telepresence-robot-using-serverless-part-1/

A Pimoroni STS-Pi Robot Kit connected to AWS for remote control and viewing.

A telepresence robot allows you to explore remote environments from the comfort of your home through live stream video and remote control. These types of robots can improve the lives of the disabled, elderly, or those that simply cannot be with their coworkers or loved ones in person. Some are used to explore off-world terrain and others for search and rescue.

This guide walks through building a simple telepresence robot using a Pimoroni STS-Pi Raspberry Pi robot kit. A Raspberry Pi is a small, low-cost device that runs Linux. Add-on modules for Raspberry Pi are called “hats”. You can substitute this kit with any mobile platform that uses two motors wired to an Adafruit Motor Hat or a Pimoroni Explorer Hat.

The sample serverless application uses AWS Lambda and Amazon API Gateway to create a REST API for driving the robot. A Python application running on the robot uses AWS IoT Core to receive drive commands and authenticate with Amazon Kinesis Video Streams with WebRTC using an IoT Credentials Provider. In the next blog I walk through deploying a web frontend to both view the livestream and control the robot via the API.

Prerequisites

You need the following to complete the project:

A Pimoroni STS-Pi robot kit, Explorer Hat, Raspberry Pi, camera, and battery.

Estimated Cost: $120

There are three major parts to this project. First deploy the serverless backend using the AWS Serverless Application Repository. Then assemble the robot and run an installer on the Raspberry Pi. Finally, configure and run the Python application on the robot to confirm it can be driven through the API and is streaming video.

Deploy the serverless application

In this section, use the Serverless Application Repository to deploy the backend resources for the robot. The resources to deploy are defined using the AWS Serverless Application Model (SAM), an open-source framework for building serverless applications using AWS CloudFormation. To understand more deeply how this application is built, look at the SAM template in the GitHub repository.

An architecture diagram of the AWS IoT and Amazon Kinesis Video Stream resources of the deployed application.

The Python application that runs on the robot requires permissions to connect as an IoT Thing and subscribe to messages sent to a specific topic on the AWS IoT Core message broker. The following policy is created in the SAM template:

RobotIoTPolicy:
      Type: "AWS::IoT::Policy"
      Properties:
        PolicyName: !Sub "${RobotName}Policy"
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - iot:Connect
                - iot:Subscribe
                - iot:Publish
                - iot:Receive
              Resource:
                - !Sub "arn:aws:iot:*:*:topicfilter/${RobotName}/action"
                - !Sub "arn:aws:iot:*:*:topic/${RobotName}/action"
                - !Sub "arn:aws:iot:*:*:topic/${RobotName}/telemetry"
                - !Sub "arn:aws:iot:*:*:client/${RobotName}"

To transmit video, the Python application runs the amazon-kinesis-video-streams-webrtc-sdk-c sample in a subprocess. Instead of using separate credentials to authenticate with Kinesis Video Streams, a Role Alias policy is created so that IoT credentials can be used.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "iot:Connect",
        "iot:AssumeRoleWithCertificate"
      ],
      "Resource": "arn:aws:iot:Region:AccountID:rolealias/robot-camera-streaming-role-alias",
      "Effect": "Allow"
    }
  ]
}

When the above policy is attached to a certificate associated with an IoT Thing, it can assume the following role:

 KVSCertificateBasedIAMRole:
      Type: 'AWS::IAM::Role'
      Properties:
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: 'Allow'
            Principal:
              Service: 'credentials.iot.amazonaws.com'
            Action: 'sts:AssumeRole'
        Policies:
        - PolicyName: !Sub "KVSIAMPolicy-${AWS::StackName}"
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
            - Effect: Allow
              Action:
                - kinesisvideo:ConnectAsMaster
                - kinesisvideo:GetSignalingChannelEndpoint
                - kinesisvideo:CreateSignalingChannel
                - kinesisvideo:GetIceServerConfig
                - kinesisvideo:DescribeSignalingChannel
              Resource: "arn:aws:kinesisvideo:*:*:channel/${credentials-iot:ThingName}/*"

This role grants access to connect and transmit video over WebRTC using the Kinesis Video Streams signaling channel deployed by the serverless application.

An architecture diagram of the API endpoint in the deployed application.

A deployed API Gateway endpoint, when called with valid JSON, invokes a Lambda function that publishes to an IoT message topic, RobotName/action. The Python application on the robot subscribes to this topic and drives the motors based on any received message that maps to a command.
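
A hedged sketch of what that Lambda function might look like in Python follows; the deployed function's actual code is in the GitHub repository, and the environment variable name here is an assumption.

import json
import os

import boto3

iot = boto3.client("iot-data")

def lambda_handler(event, context):
    # Forward the JSON body from API Gateway to the robot's
    # action topic.
    body = json.loads(event["body"])
    iot.publish(
        topic=os.environ["ROBOT_NAME"] + "/action",
        qos=0,
        payload=json.dumps(body),
    )
    return {"statusCode": 200, "body": json.dumps({"status": "sent"})}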

  1. Navigate to the aws-serverless-telepresence-robot application in the Serverless Application Repository.
  2. Choose Deploy.
  3. On the next page, under Application Settings, fill out the parameter, RobotName.
  4. Choose Deploy.
  5. Once complete, choose View CloudFormation Stack.
  6. Select the Outputs tab. Copy the ApiURL and the EndpointURL for use when configuring the robot.

Create and download the AWS IoT device certificate

The robot requires an AWS IoT root CA (fetched by the install script), certificate, and private key to authenticate with AWS IoT Core. The certificate and private key are not created by the serverless application since they can only be downloaded on creation. Create a new certificate and attach the IoT policy and Role Alias policy deployed by the serverless application.

  1. Navigate to the AWS IoT Core console.
  2. Choose Manage, Things.
  3. Choose the Thing that corresponds with the name of the robot.
  4. Under Security, choose Create certificate.
  5. Choose Activate.
  6. Download the Private Key and Thing Certificate. Save these securely, as this is the only time you can download this certificate.
  7. Choose Attach Policy.
  8. Two policies are created and must be attached. From the list, select
    <RobotName>Policy
    AliasPolicy-<AppName>
  9. Choose Done.

Flash an operating system to an SD card

The Raspberry Pi single-board Linux computer uses an SD card as the main file system storage. Raspbian Buster Lite is an officially supported Debian Linux operating system that must be flashed to an SD card. Balena.io has created an application called balenaEtcher for the sole purpose of accomplishing this safely.

  1. Download the latest version of Raspbian Buster Lite.
  2. Download and install balenaEtcher.
  3. Insert the SD card into your computer and run balenaEtcher.
  4. Choose the Raspbian image. Choose Flash to burn the image to the SD card.
  5. When flashing is complete, balenaEtcher dismounts the SD card.

Configure Wi-Fi and SSH headless

Typically, a keyboard and monitor are used to configure Wi-Fi or to access the command line on a Raspberry Pi. Since it is on a mobile platform, configure the Raspberry Pi to connect to a Wi-Fi network and enable remote access headless by adding configuration files to the SD card.

  1. Re-insert the SD card into your computer so that it shows as a volume named boot.
  2. Create a file in the boot volume of the SD card named wpa_supplicant.conf.
  3. Paste in the following contents, substituting your Wi-Fi credentials.
    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1
    country=<Insert country code here>

    network={
        ssid="<Name of your WiFi>"
        psk="<Password for your WiFi>"
    }

  4. Create an empty file without a file extension in the boot volume named ssh. At boot, the Raspbian operating system looks for this file and enables remote access if it exists. This can be done from a command line:
    cd path/to/volume/boot
    touch ssh

  5. Safely eject the SD card from your computer.

Assemble the robot

For this section, you can use the Pimoroni STS-Pi robot kit with a Pimoroni Explorer Hat, along with a Raspberry Pi Model 3 B+ or newer, and a camera module. Alternatively, you can use any two motor robot platform that uses the Explorer Hat or Adafruit Motor Hat.

  1. Follow the instructions in this video to assemble the Pimoroni STS-Pi robot kit.
  2. Place the SD card in the Raspberry Pi.
  3. Since the installation may take some time, power the Raspberry Pi using a USB 5V power supply connected to a wall plug rather than a battery.

Connect remotely using SSH

Use your computer to gain remote command line access of the Raspberry Pi using SSH. Both devices must be on the same network.

  1. Open a terminal application with SSH installed. It is already built into Linux and macOS; to enable SSH on Windows, follow these instructions.
  2. Enter the following to begin a secure shell session as user pi on the default local hostname raspberrypi, which resolves to the IP address of the device using mDNS:
    ssh pi@raspberrypi.local
  3. If prompted to add an SSH key to the list of known hosts, type yes.
  4. When prompted for a password, type raspberry. This is the default password and can be changed using the raspi-config utility.
  5. Upon successful login, you now have shell access to your Raspberry Pi device.

Enable the camera using raspi-config

A built-in utility, raspi-config, provides an easy to use interface for configuring Raspbian. You must enable the camera module, along with I2C, a serial bus used for communicating with the motor driver.

  1. In an open SSH session, type the following to open the raspi-config utility:
    sudo raspi-config

  2. Using the arrows, choose Interfacing Options.
  3. Choose Camera. When prompted, choose Yes to enable the camera module.
  4. Repeat the process to enable the I2C interface.
  5. Select Finish and reboot.

Run the install script

An installer script is provided for building and installing the Kinesis Video Streams WebRTC producer, AWSIoTPythonSDK, and Pimoroni Explorer Hat Python libraries. Upon completion, it creates a directory with the following structure:

/home/pi/Projects/robot
├── main.py                        // The main Python application
├── config.json                    // Parameters used by main.py
├── kvsWebrtcClientMasterGstSample // Kinesis Video Streams producer
└── certs
    ├── cacert.pem                 // Amazon SFSRootCAG2 Certificate Authority
    ├── certificate.pem            // AWS IoT certificate placeholder
    └── private.pem.key            // AWS IoT private key placeholder

  1. Open an SSH session on the Raspberry Pi.
  2. (Optional) If using the Adafruit Motor Hat, run this command; otherwise, the script defaults to the Pimoroni Explorer Hat.
    export MOTOR_DRIVER=adafruit  

  3. Run the following command to fetch and execute the installer script.
    wget -O - https://raw.githubusercontent.com/aws-samples/aws-serverless-telepresence-robot/master/scripts/install.sh | bash

  4. While the script installs, proceed to the next section.

Configure the code

The Python application on the robot subscribes to AWS IoT Core to receive messages. It requires the certificate and private key created for the IoT thing to authenticate. These files must be copied to the directory where the Python application is stored on the Raspberry Pi.

It also requires that the IoT credentials endpoint be added to the config.json file to assume the permissions necessary to transmit video to Amazon Kinesis Video Streams.

  1. Open an SSH session on the Raspberry Pi.
  2. Open the certificate.pem file with the nano text editor and paste in the contents of the certificate downloaded earlier.
    cd /home/pi/Projects/robot/certs
    nano certificate.pem

  3. Press CTRL+X and then Y to save the file.
  4. Repeat the process with the private.pem.key file.
    nano private.pem.key

  5. Open the config.json file.
    cd /home/pi/Projects/robot
    nano config.json

  6. Provide the following information:
    IOT_THINGNAME: The name of your robot, as set in the serverless application.
    IOT_CORE_ENDPOINT: This is found under the Settings page in the AWS IoT Core console.
    IOT_GET_CREDENTIAL_ENDPOINT: Provided by the serverless application.
    ROLE_ALIAS: This is already set to match the Role Alias deployed by the serverless application.
    AWS_DEFAULT_REGION: Corresponds to the Region the application is deployed in.
  7. Save the file using CTRL+X and Y.
  8. To start the robot, run the command:
    python3 main.py

  9. To stop the script, press CTRL+C.

View the Kinesis video stream

The following steps create a WebRTC connection with the robot to view the live stream.

  1. Navigate to the Amazon Kinesis Video Streams console.
  2. Choose Signaling channels from the left menu.
  3. Choose the channel that corresponds with the name of your robot.
  4. Open the Media Playback card.
  5. After a moment, a WebRTC peer-to-peer connection is negotiated and live video is displayed.
    An animated gif demonstrating a live video stream from the robot.

Sending drive commands

The serverless backend includes an Amazon API Gateway REST endpoint that publishes JSON messages to the Python script on the robot.

The robot expects a message:

{ "action": "<direction>" }

Where direction can be “forward”, “backwards”, “left”, or “right”.
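
On the robot, the handler for these messages might map each direction to motor calls, as in this sketch using the Pimoroni Explorer HAT library. The actual mapping lives in main.py; the speeds and duration here are assumptions.

import time

import explorerhat

def drive(action, speed=100, duration=0.5):
    # Map a direction string to Explorer HAT motor calls.
    # Turning spins the wheels in opposite directions.
    if action == "forward":
        explorerhat.motor.one.forwards(speed)
        explorerhat.motor.two.forwards(speed)
    elif action == "backwards":
        explorerhat.motor.one.backwards(speed)
        explorerhat.motor.two.backwards(speed)
    elif action == "left":
        explorerhat.motor.one.forwards(speed)
        explorerhat.motor.two.backwards(speed)
    elif action == "right":
        explorerhat.motor.one.backwards(speed)
        explorerhat.motor.two.forwards(speed)
    time.sleep(duration)
    explorerhat.motor.one.stop()
    explorerhat.motor.two.stop()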

  1. While the Python script is running on the robot, open another terminal window.
  2. Run this command to tell the robot to drive forward. Replace <API-URL> using the endpoint listed under Outputs in the CloudFormation stack for the serverless application.
    curl -d '{"action":"forward"}' -H "Content-Type: application/json" -X POST https://<API-URL>/publish

    An animated gif demonstrating the robot being driven from a REST request.

Conclusion

In this post, I show how to build and program a telepresence robot with remote control and a live video feed in the cloud. I did this by installing a Python application on a Raspberry Pi robot and deploying a serverless application.

The Python application uses AWS IoT credentials to receive remote commands from the cloud and transmit live video using Kinesis Video Streams with WebRTC. The serverless application deploys a REST endpoint using API Gateway and a Lambda function. Any application that can connect to the endpoint can drive the robot.

In part two, I build on this project by deploying a web interface for the robot using AWS Amplify.

A preview of the web frontend built in the next blog.

Deploy and publish to an Amazon MQ broker using AWS serverless

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/deploy-and-publish-to-an-amazon-mq-broker-using-aws-serverless/

If you’re managing a broker on premises or in the cloud with dependent existing infrastructure, Amazon MQ provides easily deployed, managed ActiveMQ brokers. These support a variety of messaging protocols and can offload operational overhead. That is useful when deploying a serverless application that communicates with one or more external applications that also communicate with each other.

This post walks through deploying a serverless backend and an Amazon MQ broker in one step using the AWS Serverless Application Model (AWS SAM). It shows you how to publish to a topic using AWS Lambda and then how to create a client application to consume messages from the topic, using a supported protocol. As a result, the AWS services and features supported by AWS Lambda can now be delivered to an external application connected to an Amazon MQ broker using STOMP, AMQP, MQTT, OpenWire, or WSS.

Although many protocols are supported by Amazon MQ, this walkthrough focuses on one. MQTT is a lightweight publish–subscribe messaging protocol. It is built to work in a small code footprint and is one of the most well-supported messaging protocols across programming languages. The protocol also introduced quality of service (QoS) to ensure message delivery when a device goes offline. Using QoS features, you can limit failure states in an interdependent network of applications.

To simplify this configuration, I’ve provided an AWS Serverless Application Repository application that deploys AWS resources using AWS CloudFormation. Two resources are deployed: a single-instance Amazon MQ broker and a Lambda function. The Lambda function uses Node.js and an MQTT library to act as a producer and publish to a message topic on the Amazon MQ broker. A provided sample Node.js client app can act as an MQTT client and subscribe to the topic to receive messages.
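
The deployed producer is written in Node.js, but the same publish can be expressed in Python with the paho-mqtt library, shown here as a hedged sketch; the endpoint, credentials, and topic are placeholders.

import ssl

import paho.mqtt.client as mqtt

# Placeholder broker details; the MQTT endpoint and admin
# credentials come from the deployed Amazon MQ broker.
BROKER_HOST = "<broker-id>.mq.<region>.amazonaws.com"
BROKER_PORT = 8883  # Amazon MQ's MQTT over TLS port

client = mqtt.Client(client_id="pythonProducer")  # paho-mqtt 1.x constructor
client.username_pw_set("<AdminUsername>", "<AdminPassword>")
client.tls_set(cert_reqs=ssl.CERT_REQUIRED)

client.connect(BROKER_HOST, BROKER_PORT)
client.loop_start()
# QoS 1 asks the broker to acknowledge delivery.
client.publish("some/topic", "Hello from Python", qos=1)
client.loop_stop()
client.disconnect()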

Prerequisites

The following resources are required to complete the walkthrough:

Required steps

To complete the walkthrough, follow these steps:

  • Clone the Aws-sar-lambda-publish-amazonmq GitHub repository.
  • Deploy the AWS Serverless Application Repository application.
  • Run a Node.js MQTT client application.
  • Send a test message from an AWS Lambda function.
  • Use composite destinations.

Clone the GitHub repository

Before beginning, clone or download the project repository from GitHub. It contains the sample Node.js client application used later in this walkthrough.

Deploy the AWS Serverless Application Repository application

  1. Navigate to the page for the lambda-publish-amazonmq AWS Serverless Application Repository application.
  2. In Application settings, fill the following fields:

    – AdminUsername
    – AdminPassword
    – ClientUsername
    – ClientPassword

    These are the credentials for the Amazon MQ broker. The admin credentials are assigned to environment variables used by the Lambda function to publish messages to the Amazon MQ broker. The client credentials are used in the Node.js client application.

  3. Choose Deploy.

Creation can take up to 10 minutes. When completed, proceed to the next section.

Run a Node.js MQTT client application

The Amazon MQ broker supports OpenWire, AMQP, STOMP, MQTT, and WSS connections. This allows any supported programming language to publish and consume messages from an Amazon MQ queue or topic.

To demonstrate this, you can deploy the sample Node.js MQTT client application included in the GitHub project for the AWS Serverless Application Repository app. The client credentials created in the previous section are used here.

  1. Open a terminal application and change to the client-app directory in the GitHub project folder by running the following command:
    cd ~/some-project-path/aws-sar-lambda-publish-amazonmq/client-app
  2. Install the Node.js dependencies for the client application:
    npm install
  3. The app requires a WSS endpoint to create an Amazon MQ broker MQTT WebSocket connection. This can be found on the broker page in the Amazon MQ console, under Connections.
  4. The node app takes four arguments separated by spaces. Provide the user name and password of the client created on deployment, followed by the WSS endpoint and a topic, some/topic.
    node app.js "username" "password" "wss://endpoint:port" "some/topic"
  5. After connected prints in the terminal, leave this app running, and proceed to the next section.

There are three important components run by this code to subscribe and receive messages:

  • Connecting to the MQTT broker.
  • Subscribing to the topic on a successful connection.
  • Creating a handler for any message events.

The following code example shows connecting to the MQTT broker.

const mqtt = require('mqtt')           // MQTT client library
const { v1: uuidv1 } = require('uuid')  // for unique client IDs

const args = process.argv.slice(2)

let options = {
  username: args[0],
  password: args[1],
  clientId: 'mqttLambda_' + uuidv1()
}

let mqEndpoint = args[2]
let topic = args[3]

let client = mqtt.connect( mqEndpoint, options)

The following code example shows subscribing to the topic on a successful connection.

// When connected, subscribe to the topic

client.on('connect', function() {
  console.log("connected")

  client.subscribe(topic, function (err) {
    if(err) console.log(err)
  })
})

The following code example shows creating a handler for any message events.

// Log messages

client.on('message', function (topic, message) {
  console.log(`message received on ${topic}: ${message.toString()}`)
})

Send a test message from an AWS Lambda function

Now that the Amazon MQ broker, PublishMessage Lambda function, and the Node.js client application are running, you can test consuming messages from a serverless application.

  1. In the Lambda console, select the newly created PublishMessage Lambda function. Its name begins with the name given to the AWS Serverless Application Repository application on deployment.
  2. Choose Test.
  3. Give the new test event a name, and optionally modify the message. Choose Create.
  4. Choose Test to invoke the Lambda function with the test event.
  5. If the execution is successful, the message appears in the terminal where the Node.js client-app is running.

Using composite destinations

The Amazon MQ broker uses an XML configuration to enable and configure ActiveMQ features. One of these features, composite destinations, makes one-to-many relationships on a single destination possible. This means that a queue or topic can be configured to forward to another queue, topic, or combination.

This is useful when fanning out to a number of clients, some of whom are consuming queues while others are consuming topics. The following steps demonstrate how you can easily modify the broker configuration and define multiple destinations for a topic.

  1. On the Amazon MQ Configurations page, select the matching configuration from the list. It has the same stack name prefix as your broker.
  2. Choose Edit configuration.
  3. After the broker tag, add the following code example. The surrounding broker and destinationPolicy tags are existing context, shown for placement. It creates a new virtual composite destination where messages published to “some/topic” publish to a queue “A.Queue” and a topic “foo.”
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <broker schedulePeriodForDestinationPurge="10000" xmlns="http://activemq.apache.org/schema/core">
      
      <destinationInterceptors>
        <virtualDestinationInterceptor>
          <virtualDestinations>
            <compositeTopic name="some.topic">
              <forwardTo>
                <queue physicalName="A.Queue"/>
                <topic physicalName="foo" />
              </forwardTo>
            </compositeTopic>
          </virtualDestinations>
        </virtualDestinationInterceptor>
      </destinationInterceptors>
      <destinationPolicy>
  4. Choose Save, add a description for this revision, and then choose Save.
  5. In the left navigation pane, choose Brokers, and select the broker with the stack name prefix.
  6. Under Details, choose Edit.
  7. Under Configuration, select the latest configuration revision that you just created.
  8. Choose Schedule modifications, Immediately, Apply.

After the reboot is complete, run another test of the Lambda function. Then, open and log in to the ActiveMQ broker web console, which can be found under Connections on the broker page. To log in, use the admin credentials created on deployment.

On the Queues page, a new queue “A.Queue” was generated because you published to some/topic, which has a composite destination configured.

Conclusion

It can be difficult to tackle architecting a solution with multiple client destinations and networked applications. Although there are many ways to go about solving this problem, this post showed you how to deploy a robust solution using ActiveMQ with a serverless workflow. The workflow publishes messages to a client application using MQTT, a well-supported and lightweight messaging protocol.

To accomplish this, you deployed a serverless application and an Amazon MQ broker in one step using the AWS Serverless Application Repository. You also ran a Node.js MQTT client application authenticated as a registered user in the Amazon MQ broker. You then used Lambda to test publishing a message to a topic on the Amazon MQ broker. Finally, you extended functionality by modifying the broker configuration to support a virtual composite destination, allowing delivery to multiple topic and queue destinations.

With the completion of this project, you can take things further by integrating other AWS services and third-party or custom client applications. Amazon MQ provides multiple protocol endpoints that are widely used across the software and platform landscape. Using serverless as an in-between, you can deliver features from services like Amazon EventBridge to your external applications, wherever they might be. You can also explore how to invoke a Lambda function from Amazon MQ.


Using artificial intelligence to detect product defects with AWS Step Functions

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/using-artificial-intelligence-to-detect-product-defects-with-aws-step-functions/

Factories that produce a high volume of inventory must ensure that defective products are not shipped. This is often accomplished with human workers on the assembly line or through computer vision.

You can build an application that uses a custom image classification model to detect and report back any defects in a product, then takes appropriate action. This method provides a powerful, scalable, and simple solution for quality control. It uses Amazon S3, Amazon SQS, AWS Lambda, AWS Step Functions, and Amazon SageMaker.

To simulate a production scenario, the model is trained using an example dataset containing images of an open-source printed circuit board, with defects and without. An accompanying AWS Serverless Application Repository application deploys the Step Functions workflow for handling image classification and notifications.

Typically, in a solution like this, there would be some form of automated camera capture. In this walkthrough, you manually upload images to S3. A Lambda function consumes an SQS queue of S3 event notifications and kicks off a workflow in Step Functions to complete the quality review. This controls the flow of images sent to the model endpoint.

The returned predictions are used by a state machine to determine the next action. Detected defects trigger an Amazon SNS notification to an email subscription. When no defect is detected, an item is logged to an Amazon DynamoDB table.
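
At the heart of the workflow is the call to the Amazon SageMaker endpoint. A hedged boto3 sketch of that classification step follows; the endpoint name, payload handling, and class ordering are assumptions, and the deployed application's Lambda functions contain the actual logic.

import json

import boto3

s3 = boto3.client("s3")
runtime = boto3.client("sagemaker-runtime")

def classify_image(bucket, key, endpoint_name):
    # Send an S3 image to the image classification endpoint and
    # return the per-class probabilities, assumed here to be
    # ordered [defect_free, defective] to match the dataset.
    img = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/x-image",
        Body=img,
    )
    return json.loads(response["Body"].read())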

Required steps

To complete the walkthrough, follow these steps:

  • Clone the aws-sar-sagemaker-image-classification GitHub repository.
  • Prepare an image dataset and upload it to S3.
  • Create an Amazon SageMaker notebook instance.
  • Use a Jupyter notebook to train and deploy a custom image classification with Amazon SageMaker.
  • Create an S3 bucket for processing images.
  • Deploy an AWS Serverless Application Repository application.
  • Create an S3 bucket notification.
  • Upload a test image to S3 for classification.

Clone the GitHub repository

Before beginning, clone or download the walkthrough repository from GitHub. It contains all the necessary files to complete this walkthrough.

Prepare an image dataset

The best image classification models are made using the best datasets. While the quantity of samples can strengthen your machine learning (ML) model, the quality of the dataset directly affects the reliability of the image classifier.

In this walkthrough, the training algorithm expects the images to be 233×233 pixels. Images are organized in folders named for their corresponding class. In this application, two classes are used, defect_free and defective.

images_to_classify
├── defect_free
│   ├── 1.jpg
│   ├── 2.jpg
│   ├── 3.jpg
│   └── . . .
└── defective
    ├── 1.jpg
    ├── 2.jpg
    ├── 3.jpg
    └── . . .

This sample dataset is provided in the GitHub repository. It contains four images of a circuit board for each class. The defective images show the circuit board missing the microcontroller component, which is integral to its function. This walkthrough uses a small dataset; for production usage, a larger dataset produces predictions of higher confidence.
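If your own images need resizing, a short script can normalize them before upload. The following is a minimal sketch assuming Pillow is installed (pip install Pillow); the folder layout matches the structure shown above.

# Hedged sketch: resize every dataset image in place to the dimensions the
# training algorithm expects. Assumes Pillow is installed.
from pathlib import Path

from PIL import Image

for image_path in Path("images_to_classify").glob("*/*.jpg"):
    with Image.open(image_path) as img:
        img.convert("RGB").resize((224, 224)).save(image_path)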

Upload the image dataset to S3

  1. In the S3 console, choose Create bucket and enter a unique bucket name.
  2. For Region, select one that matches the location of the notebook instance.
  3. Choose Create.
  4. In the list of S3 buckets, select the newly created bucket and choose Upload.
  5. Use the drag and drop feature to drag the image folder, as structured in the previous section, into the S3 upload dialog box.
  6. Choose Upload, and proceed to the next section.
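Alternatively, the upload can be scripted instead of using the console. The following is a minimal boto3 sketch; the bucket name is a placeholder for the bucket you created above.

# Hedged sketch: upload the dataset folder to S3, preserving the
# class-folder prefixes. "my-dataset-bucket" is a placeholder.
from pathlib import Path

import boto3

s3 = boto3.client("s3")

for path in Path("images_to_classify").glob("*/*.jpg"):
    s3.upload_file(str(path), "my-dataset-bucket", path.as_posix())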

Create an Amazon SageMaker notebook instance

Amazon SageMaker provides ML tools in the cloud for developers and data scientists. A notebook instance deploys a fully managed ML compute instance running the Jupyter notebook app, which is used for training and deploying the image classification model.

  1. In the Amazon SageMaker console, choose Notebook instances, Create notebook instance.
  2. For IAM role, choose Create a new role, and specify the bucket created for the dataset in the previous section.
  3. The remaining fields can be left as their default options.
  4. Choose Create notebook instance.
  5. Wait for the notebook instance to finish deploying before moving to the next section.

Train and deploy a custom image classification

The following steps show how to open and run the example notebook on the Amazon SageMaker notebook instance.

  1. Download this example Jupyter notebook to your local machine.
  2. In the Amazon SageMaker console, choose Notebook instances, and select the notebook created earlier.
  3. Choose Open Jupyter, Upload, and then select the notebook downloaded from GitHub.
  4. Open the notebook.
  5. The Jupyter notebook has eight steps, each with a cell containing code that can be executed by choosing Run. Follow through each step until a model has been trained and deployed.
  6. In the Amazon SageMaker console, choose Inference, Endpoints.
  7. Choose the endpoint labeled IC-images-to-classify-xxxxx.
  8. Make a note of the name of this endpoint. You need it for deploying the AWS Serverless Application Repository application.

Create an S3 bucket for processing images

To apply certain permissions, you must create an S3 bucket before you deploy the AWS Serverless Application Repository application. This bucket is where images are stored for classification.

  1. In the Amazon S3 console, choose Create bucket.
  2. Enter a unique bucket name.
  3. For Region, select one that matches the location of the notebook instance.
  4. Choose Create.

Deploy the AWS Serverless Application Repository application

Now that a model has been trained and deployed, a serverless backend can orchestrate classifying images and alerting on detected defects. When fully configured and deployed, any S3 image upload events passed to the SQS queue are classified. An AWS Step Functions state machine determines whether to send the email alert through Amazon SNS.

  1. In the AWS Serverless Application Repository, select Show apps that create custom IAM roles or resource policies.
  2. In the search bar, search for and choose sagemaker-defect-detection.
  3. Under Application settings, all fields are required. BucketName must be the same as the bucket created for processing images. To receive notification of detected defects, for EmailAddress, enter a valid email address. ModelEndpointName must match the endpoint name noted in Amazon SageMaker.
  4. Choose Deploy.
  5. After creation of the application is complete, a confirmation email is sent to the provided address. Confirm the request to allow Amazon SNS notifications to be sent.

Create the S3 bucket notification

The AWS Serverless Application Repository application sets up an SQS event subscription on the Lambda function for handling the classification of images. To avoid circular dependencies, configure an S3 bucket notification separately to forward S3 image upload events to the SQS queue.

  1. In the Amazon S3 console, select the newly created bucket for processing images, and choose Properties, Events, Add notification.
    • For Events, select PUT.
    • For Suffix, enter .jpg
    • For Send to, select SQS Queue.
  2. Select the SQS queue created by the AWS Serverless Application Repository application.
  3. Choose Save.
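The same notification can also be configured programmatically. The following is a minimal boto3 sketch; the bucket name and queue ARN are placeholders for the bucket you created and the queue deployed by the application.

# Hedged sketch: forward .jpg upload events from the processing bucket to
# the application's SQS queue. The bucket name and queue ARN are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="my-processing-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:image-queue",
                "Events": ["s3:ObjectCreated:Put"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "suffix", "Value": ".jpg"}]}
                },
            }
        ]
    },
)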

Upload a test image to S3 for classification

Any image upload events on the S3 bucket cause the application to run. In a real use case, the file name could correspond to a numerical ID to track the physical product.

  1. In the dashboard for the S3 bucket, choose Overview, Upload.
  2. From the walkthrough GitHub repository, choose the test file defective.jpg.
  3. Choose Upload.

If a defect is detected, an alert is sent to the email address specified when the AWS Serverless Application Repository app was created. The alert includes the file name of the image and its URL path. It also includes a line indicating the confidence score as a floating-point number between 0 and 1. A higher score indicates that it is more likely the prediction is accurate.

If there is no defect, the bucket, key, and confidence score of the image are logged to a DynamoDB table.

The DynamoDB table is created by the AWS Serverless Application Repository app and can be found in the Resources card on the application page. To find the application page, in the Lambda console, choose Applications.

Conclusion

This post walks you through building a fully managed quality control automation solution using Amazon SageMaker to train and deploy an image classification model endpoint. It shows how you can use AWS Serverless Application Repository to deploy a serverless backend and S3 to store and pass images along for classification. While this walkthrough used a specific and minimal dataset, it illustrates how to build more complex and higher fidelity image classification workflows. As it stands, it’s a cost-effective and highly scalable solution.

To take this solution further, create an app for uploading images into Amazon S3. Optionally, create a serverless application that can resize images for a training job. If a custom image classifier isn’t necessary, explore how Amazon Rekognition can be used for object detection and labeling jobs.

Building an AWS IoT Core device using AWS Serverless and an ESP32

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-an-aws-iot-core-device-using-aws-serverless-and-an-esp32/

Using a simple Arduino sketch, an AWS Serverless Application Repository application, and a microcontroller, you can build a basic serverless workflow for communicating with an AWS IoT Core device.

A microcontroller is a programmable chip that acts as the brain of an electronic device. It has input and output pins for reading and writing on digital or analog components. Those components could be sensors, relays, actuators, or various other devices. It can be used to build remote sensors, home automation products, robots, and much more. The ESP32 is a powerful, low-cost microcontroller with Wi-Fi and Bluetooth built in, and it is used in this walkthrough.

The Arduino IDE, a lightweight development environment for hardware, now includes support for the ESP32. There is a large collection of community and officially supported libraries, from addressable LED strips to spectral light analysis.

The following walkthrough demonstrates connecting an ESP32 to AWS IoT Core to allow it to publish and subscribe to topics. This means that the device can send any arbitrary information, such as sensor values, into AWS IoT Core while also being able to receive commands.

Solution overview

This post walks through deploying an application from the AWS Serverless Application Repository. This allows an AWS IoT device to be messaged using a REST endpoint powered by Amazon API Gateway and AWS Lambda. The AWS SAR application also configures an AWS IoT rule that forwards any messages published by the device to a Lambda function that updates an Amazon DynamoDB table, demonstrating basic bidirectional communication.

The last section explores how to build an IoT project with real-world application. By connecting a thermal printer module and modifying a few lines of code in the example firmware, the ESP32 device becomes an AWS IoT–connected printer.

All of this can be accomplished within the AWS Free Tier, which is sufficient for the following instructions.

An example of an AWS IoT project using an ESP32, AWS IoT Core, and an Arduino thermal printer

An example of an AWS IoT project using an ESP32, AWS IoT Core, and an Arduino thermal printer.

Required steps

To complete the walkthrough, follow these steps:

  • Create an AWS IoT device.
  • Install and configure the Arduino IDE.
  • Configure and flash an ESP32 IoT device.
  • Deploy the lambda-iot-rule AWS SAR application.
  • Monitor and test.
  • Create an IoT thermal printer.

Creating an AWS IoT device

To communicate with the ESP32 device, it must connect to AWS IoT Core with device credentials. You must also specify the topics it has permissions to publish and subscribe on.

  1. In the AWS IoT console, choose Register a new thing, Create a single thing.
  2. Name the new thing. Use this exact name later when configuring the ESP32 IoT device. Leave the remaining fields set to their defaults. Choose Next.
  3. Choose Create certificate. Only the thing cert, private key, and Amazon Root CA 1 downloads are necessary for the ESP32 to connect. Download and save them somewhere secure, as they are used when programming the ESP32 device.
  4. Choose Activate, Attach a policy.
  5. Skip adding a policy, and choose Register Thing.
  6. In the AWS IoT console side menu, choose Secure, Policies, Create a policy.
  7. Name the policy Esp32Policy. Choose the Advanced tab.
  8. Paste in the following policy template.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "iot:Connect",
          "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:client/THINGNAME"
        },
        {
          "Effect": "Allow",
          "Action": "iot:Subscribe",
          "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:topicfilter/esp32/sub"
        },
        {
          "Effect": "Allow",
          "Action": "iot:Receive",
          "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:topic/esp32/sub"
        },
        {
          "Effect": "Allow",
          "Action": "iot:Publish",
          "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:topic/esp32/pub"
        }
      ]
    }
  9. Replace REGION with the AWS Region you’re currently operating in. This can be found in the top-right corner of the AWS console window.
  10. Replace ACCOUNT_ID with your AWS account ID, which can be found in Account Settings.
  11. Replace THINGNAME with the name of your device.
  12. Choose Create.
  13. In the AWS IoT console, choose Secure, Certificates. Select the certificate created for your device and choose Actions, Attach policy.
  14. Choose Esp32Policy, Attach.

Your AWS IoT device is now configured to have permission to connect to AWS IoT Core. It can also publish to the topic esp32/pub and subscribe to the topic esp32/sub. For more information on securing devices, see AWS IoT Policies.
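The console steps above can also be scripted. The following is a minimal boto3 sketch; the certificate ARN is a placeholder, and the policy document is abbreviated to the connect statement from the policy shown above.

# Hedged sketch: create the policy and attach it to the device certificate.
# The certificate ARN is a placeholder; substitute REGION, ACCOUNT_ID, and
# THINGNAME as in the console steps.
import json

import boto3

iot = boto3.client("iot")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iot:Connect",
            "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:client/THINGNAME",
        },
        # ...plus the Subscribe, Receive, and Publish statements shown above
    ],
}

iot.create_policy(policyName="Esp32Policy", policyDocument=json.dumps(policy_document))
iot.attach_policy(policyName="Esp32Policy", target="arn:aws:iot:REGION:ACCOUNT_ID:cert/xxxx")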

Installing and configuring the Arduino IDE

The Arduino IDE is an open-source development environment for programming microcontrollers. It supports a continuously growing number of platforms including most ESP32-based modules. It must be installed along with the ESP32 board definitions, MQTT library, and ArduinoJson library.

  1. Download the Arduino installer for the desired operating system.
  2. Start Arduino and open the Preferences window.
  3. For Additional Board Manager URLs, add
    https://dl.espressif.com/dl/package_esp32_index.json.
  4. Choose Tools, Board, Boards Manager.
  5. Search esp32 and install the latest version.
  6. Choose Sketch, Include Library, Manage Libraries.
  7. Search MQTT, and install the latest version by Joel Gaehwiler.
  8. Repeat the library installation process for ArduinoJson.

The Arduino IDE is now installed and configured with all the board definitions and libraries needed for this walkthrough.

Configuring and flashing an ESP32 IoT device

A collection of various ESP32 development boards.

A collection of various ESP32 development boards.

For this section, you need an ESP32 device. To check if your board is compatible with the Arduino IDE, see the boards.txt file. The following code connects to AWS IoT Core securely using MQTT, a publish and subscribe messaging protocol.

This project has been tested on the following devices:

  1. Install the required serial drivers for your device. Some boards use different USB/FTDI chips for interfacing. Here are the most commonly used with links to drivers.
  2. Open the Arduino IDE and choose File, New to create a new sketch.
  3. Add a new tab and name it secrets.h.
  4. Paste the following into the secrets file.
    #include <pgmspace.h>
    
    #define SECRET
    #define THINGNAME ""
    
    const char WIFI_SSID[] = "";
    const char WIFI_PASSWORD[] = "";
    const char AWS_IOT_ENDPOINT[] = "xxxxx.amazonaws.com";
    
    // Amazon Root CA 1
    static const char AWS_CERT_CA[] PROGMEM = R"EOF(
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
    )EOF";
    
    // Device Certificate
    static const char AWS_CERT_CRT[] PROGMEM = R"KEY(
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
    )KEY";
    
    // Device Private Key
    static const char AWS_CERT_PRIVATE[] PROGMEM = R"KEY(
    -----BEGIN RSA PRIVATE KEY-----
    -----END RSA PRIVATE KEY-----
    )KEY";
  5. Enter the name of your AWS IoT thing, as it is in the console, in the field THINGNAME.
  6. To connect to Wi-Fi, add the SSID and PASSWORD of the desired network. Note: The network name should not include spaces or special characters.
  7. The AWS_IOT_ENDPOINT can be found from the Settings page in the AWS IoT console.
  8. Copy the Amazon Root CA 1, Device Certificate, and Device Private Key to their respective locations in the secrets.h file.
  9. Choose the tab for the main sketch file, and paste the following.
    #include "secrets.h"
    #include <WiFiClientSecure.h>
    #include <MQTTClient.h>
    #include <ArduinoJson.h>
    #include "WiFi.h"
    
    // The MQTT topics that this device should publish/subscribe
    #define AWS_IOT_PUBLISH_TOPIC   "esp32/pub"
    #define AWS_IOT_SUBSCRIBE_TOPIC "esp32/sub"
    
    WiFiClientSecure net = WiFiClientSecure();
    MQTTClient client = MQTTClient(256);
    
    void connectAWS()
    {
      WiFi.mode(WIFI_STA);
      WiFi.begin(WIFI_SSID, WIFI_PASSWORD);
    
      Serial.println("Connecting to Wi-Fi");
    
      while (WiFi.status() != WL_CONNECTED){
        delay(500);
        Serial.print(".");
      }
    
      // Configure WiFiClientSecure to use the AWS IoT device credentials
      net.setCACert(AWS_CERT_CA);
      net.setCertificate(AWS_CERT_CRT);
      net.setPrivateKey(AWS_CERT_PRIVATE);
    
      // Connect to the MQTT broker on the AWS endpoint we defined earlier
      client.begin(AWS_IOT_ENDPOINT, 8883, net);
    
      // Create a message handler
      client.onMessage(messageHandler);
    
      Serial.print("Connecting to AWS IOT");
    
      while (!client.connect(THINGNAME)) {
        Serial.print(".");
        delay(100);
      }
    
      if(!client.connected()){
        Serial.println("AWS IoT Timeout!");
        return;
      }
    
      // Subscribe to a topic
      client.subscribe(AWS_IOT_SUBSCRIBE_TOPIC);
    
      Serial.println("AWS IoT Connected!");
    }
    
    void publishMessage()
    {
      StaticJsonDocument<200> doc;
      doc["time"] = millis();
      doc["sensor_a0"] = analogRead(0);
      char jsonBuffer[512];
      serializeJson(doc, jsonBuffer); // serialize the JSON document into the buffer
    
      client.publish(AWS_IOT_PUBLISH_TOPIC, jsonBuffer);
    }
    
    void messageHandler(String &topic, String &payload) {
      Serial.println("incoming: " + topic + " - " + payload);
    
    //  StaticJsonDocument<200> doc;
    //  deserializeJson(doc, payload);
    //  const char* message = doc["message"];
    }
    
    void setup() {
      Serial.begin(9600);
      connectAWS();
    }
    
    void loop() {
      publishMessage();
      client.loop();
      delay(1000);
    }
  10. Choose File, Save, and give your project a name.

Flashing the ESP32

  1. Plug the ESP32 board into a USB port on the computer running the Arduino IDE.
  2. Choose Tools, Board, and then select the matching type of ESP32 module. In this case, a Sparkfun ESP32 Thing was used.
  3. Choose Tools, Port, and then select the matching port for your device.
  4. Choose Upload. Arduino reads Done uploading when the upload is successful.
  5. Choose the magnifying lens icon to open the Serial Monitor. Set the baud rate to 9600.

Keep the Serial Monitor open. When connected to Wi-Fi and then AWS IoT Core, any messages received on the topic esp32/sub are logged to this console. The device is also now publishing to the topic esp32/pub.

The topics are set at the top of the sketch. When changing or adding topics, remember to add permissions in the device policy.

// The MQTT topics that this device should publish/subscribe
#define AWS_IOT_PUBLISH_TOPIC   "esp32/pub"
#define AWS_IOT_SUBSCRIBE_TOPIC "esp32/sub"

Within this sketch, the relevant functions are publishMessage() and messageHandler().

The publishMessage() function creates a JSON object with the current time in milliseconds and the analog value of pin A0 on the device. It then publishes this JSON object to the topic esp32/pub.

void publishMessage()
{
  StaticJsonDocument<200> doc;
  doc["time"] = millis();
  doc["sensor_a0"] = analogRead(0);
  char jsonBuffer[512];
  serializeJson(doc, jsonBuffer); // serialize the JSON document into the buffer

  client.publish(AWS_IOT_PUBLISH_TOPIC, jsonBuffer);
}

The messageHandler() function prints out the topic and payload of any message from a subscribed topic. To see all the ways to parse JSON messages in Arduino, see the deserializeJson() example.

void messageHandler(String &topic, String &payload) {
  Serial.println("incoming: " + topic + " - " + payload);

//  StaticJsonDocument<200> doc;
//  deserializeJson(doc, payload);
//  const char* message = doc["message"];
}

Additional topic subscriptions can be added within the connectAWS() function by adding another line similar to the following.

// Subscribe to a topic
  client.subscribe(AWS_IOT_SUBSCRIBE_TOPIC);

  Serial.println("AWS IoT Connected!");

Deploying the lambda-iot-rule AWS SAR application

Now that an ESP32 device has been connected to AWS IoT, the following steps walk through deploying an AWS Serverless Application Repository application. This is a base for building serverless integration with a physical device.

  1. On the lambda-iot-rule AWS Serverless Application Repository application page, make sure that the Region is the same as the AWS IoT device.
  2. Choose Deploy.
  3. Under Application settings, for PublishTopic, enter esp32/sub. This is the topic to which the ESP32 device is subscribed. It receives messages published to this topic. Likewise, set SubscribeTopic to esp32/pub, the topic on which the device publishes.
  4. Choose Deploy.
  5. When creation of the application is complete, choose Test app to navigate to the application page. Keep this page open for the next section.

Monitoring and testing

At this stage, two Lambda functions, a DynamoDB table, and an AWS IoT rule have been deployed. The IoT rule forwards messages on the topic esp32/pub to the TopicSubscriber Lambda function, which inserts them into the DynamoDB table.
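For a sense of what that function does, here is a minimal sketch of a handler like TopicSubscriber; the TABLE_NAME environment variable and the item shape are illustrative, not the application's actual code.

# Hedged sketch: persist each IoT message delivered by the rule. TABLE_NAME
# and the item attributes are assumptions for illustration.
import os
import time

import boto3

table = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "MyTable"))

def handler(event, context):
    # The IoT rule passes the device's JSON payload as the event.
    table.put_item(
        Item={
            "id": str(event.get("time", int(time.time() * 1000))),
            "payload": str(event),
        }
    )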

  1. On the application page, under Resources, choose MyTable. This is the DynamoDB table that the TopicSubscriber Lambda function updates.
  2. Choose Items. If the ESP32 device is still active and connected, messages that it has published appear here.

The TopicPublisher Lambda function is invoked by the API Gateway endpoint and publishes to the AWS IoT topic esp32/sub.

  1. On the application page, find the Application endpoint.
  2. To test that the TopicPublisher function is working, enter the following into a terminal or command-line utility, replacing ENDPOINT with the URL from the previous step.
    curl -d '{"text":"Hello world!"}' -H "Content-Type: application/json" -X POST https://ENDPOINT/publish

Upon success, the request returns a copy of the message.

Back in the Serial Monitor, the message published to the topic esp32/sub prints out.
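You can also publish to the device directly from a script, bypassing the API Gateway endpoint. The following is a minimal boto3 sketch using the AWS IoT data plane; the Region is a placeholder.

# Hedged sketch: publish a message straight to the esp32/sub topic.
import json

import boto3

iot_data = boto3.client("iot-data", region_name="us-east-1")

iot_data.publish(
    topic="esp32/sub",
    qos=0,
    payload=json.dumps({"message": "Hello from boto3!"}),
)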

Creating an IoT thermal printer

With the completion of the previous steps, the ESP32 device currently logs incoming messages to the serial console.

The following steps demonstrate how the code can be modified to use incoming messages to interact with a peripheral component. This is done by wiring a thermal printer to the ESP32 in order to physically print messages. The REST endpoint from the previous section can be used as a webhook in third-party applications to interact with this device.

A wiring diagram depicting an ESP32 connected to a thermal printer.

A wiring diagram depicting an ESP32 connected to a thermal printer.

  1. Follow the product instructions for powering, wiring, and installing the correct Arduino library.
  2. Ensure that the thermal printer is working by holding the power button on the printer while connecting the power. A sample receipt prints. On that receipt, the default baud rate is specified as either 9600 or 19200.
  3. In the Arduino code from earlier, include the following lines at the top of the main sketch file. The second line defines what interface the thermal printer is connected to. &Serial2 is used to set the third hardware serial interface on the ESP32. For this example, the pins on the Sparkfun ESP32 Thing, GPIO16/GPIO17, are used for RX/TX respectively.
    #include "Adafruit_Thermal.h"
    
    Adafruit_Thermal printer(&Serial2);
  4. Replace the setup() function with the following to initialize the printer on device bootup. Change the baud rate of Serial2.begin() to match what is specified in the test print. The default is 19200.
    void setup() {
      Serial.begin(9600);
    
      // Start the thermal printer
      Serial2.begin(19200);
      printer.begin();
      printer.setSize('S');
    
      connectAWS();
    }
    
  5. Replace the messageHandler() function with the following. On any incoming message, it parses the JSON and prints the message on the thermal printer.
    void messageHandler(String &topic, String &payload) {
      Serial.println("incoming: " + topic + " - " + payload);
    
      // deserialize json
      StaticJsonDocument<200> doc;
      deserializeJson(doc, payload);
      String message = doc["message"];
    
      // Print the message on the thermal printer
      printer.println(message);
      printer.feed(2);
    }
  6. Choose Upload.
  7. After the firmware has successfully uploaded, open the Serial Monitor to confirm that the board has connected to AWS IoT.
  8. Enter the following into a command-line utility, replacing ENDPOINT, as in the previous section.
    curl -d '{"message": "Hello World!"}' -H "Content-Type: application/json" -X POST https://ENDPOINT/publish

If successful, the device prints out the message “Hello World” from the attached thermal printer. This is a fully serverless IoT printer that can be triggered remotely from a webhook. As an example, this can be used with GitHub Webhooks to print a physical readout of events.

Conclusion

Using a simple Arduino sketch, an AWS Serverless Application Repository application, and a microcontroller, this post demonstrated how to build a basic serverless workflow for communicating with a physical device. It also showed how to expand that into an IoT thermal printer with real-world applications.

With the use of AWS serverless, advanced compute and extensibility can be added to an IoT device, from machine learning to translation services and beyond. By using the Arduino programming environment, the vast collection of open-source libraries, projects, and code examples open up a world of possibilities. The next step is to explore what can be done with an Arduino and the capabilities of AWS serverless. The sample Arduino code for this project and more can be found at this GitHub repository.

New AWS Lambda scaling controls for Kinesis and DynamoDB event sources

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/new-aws-lambda-scaling-controls-for-kinesis-and-dynamodb-event-sources/

AWS Lambda is introducing a new scaling parameter for Amazon Kinesis Data Streams and Amazon DynamoDB Streams event sources. Parallelization Factor, which defaults to 1, can be set to increase concurrent Lambda invocations for each shard. This allows for faster stream processing without the need to over-scale the number of shards, while still guaranteeing the order of records processed.

There are two common optimization scenarios: high traffic and low traffic. For example, an online business might experience seasonal spikes in traffic. The following features help ensure that your business can scale appropriately to withstand the upcoming holiday season.

Handling high traffic with Parallelization Factor

A diagram showing how Parallelization Factor maintains order.

Each shard is a uniquely identified sequence of data records. Each record contains a partition key, and records are organized into shards based on that key. The records from each shard must be polled in sequence to guarantee that records with the same partition key are processed in order.

When there is a high volume of data traffic, you want to process records as fast as possible. Before this release, customers were solving this by updating the number of shards on a Kinesis data stream. Increasing the number of shards increases the number of functions processing data from those shards. One Lambda function invocation processes one shard at a time.

You can now use the new Parallelization Factor to specify the number of concurrent batches that Lambda polls from a single shard. This feature introduces more flexibility in scaling options for Lambda and Kinesis. The default factor of 1 preserves the existing behavior. A factor of 2 allows up to 200 concurrent invocations on 100 Kinesis data shards. The Parallelization Factor can be scaled up to 10.

Each parallelized shard contains messages with the same partition key. This means that record processing order is still maintained: each batch from a parallelized shard must complete before the next batch is processed.
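To make the ordering guarantee concrete, the following is an illustrative producer call; the stream name and partition key are placeholders. All records sharing a partition key land on the same shard, so they are processed in the order they were put.

# Hedged sketch: records with the same partition key stay in order.
import json

import boto3

kinesis = boto3.client("kinesis")

kinesis.put_record(
    StreamName="lambda-stream",
    Data=json.dumps({"device_id": "sensor-42", "reading": 19.7}),
    PartitionKey="sensor-42",
)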

Using Parallelization Factor

Because Parallelization Factor is set on the event source mapping, it can be increased or decreased on demand, making fully automated scaling of stream processing possible.

For example, Amazon CloudWatch can be used to monitor changes in traffic. High traffic can cause the IteratorAge metric to increase, and an alarm can be created if this occurs for some specified period of time. The alarm can trigger a Lambda function that uses the UpdateEventSourceMapping API to increase the Parallelization Factor. In the same way, an alarm can be set to reduce the factor if traffic decreases.
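As a sketch of what such a scaling function might do (the mapping UUID source and the step size here are illustrative, not a prescribed pattern):

# Hedged sketch: raise the factor by one, capped at the maximum of 10.
# EVENT_SOURCE_MAPPING_UUID is an assumed environment variable.
import os

import boto3

lambda_client = boto3.client("lambda")

def handler(event, context):
    uuid = os.environ["EVENT_SOURCE_MAPPING_UUID"]
    current = lambda_client.get_event_source_mapping(UUID=uuid)
    factor = min(current.get("ParallelizationFactor", 1) + 1, 10)
    lambda_client.update_event_source_mapping(UUID=uuid, ParallelizationFactor=factor)
    return {"ParallelizationFactor": factor}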

You can enable Parallelization Factor in the AWS Lambda console by creating or updating a Kinesis or DynamoDB event source. Choose Additional settings and set the Concurrent batches per shard to the desired factor, between 1 and 10.

Configuring the Parallelization Factor from the AWS Lambda console.

Configuring the Parallelization Factor from the AWS Lambda console.

You can also enable this feature from the AWS CLI using the --parallelization-factor parameter when creating or updating an event source mapping.

$ aws lambda create-event-source-mapping --function-name my-function \
--parallelization-factor 2 --batch-size 100 --starting-position AT_TIMESTAMP --starting-position-timestamp 1541139109 \
--event-source-arn arn:aws:kinesis:us-east-2:123456789012:stream/lambda-stream
{
	"UUID": "2b733gdc-8ac3-cdf5-af3a-1827b3b11284",
	"ParallelizationFactor": 2,
	"BatchSize": 100,
	"MaximumBatchingWindowInSeconds": 0,
	"EventSourceArn": "arn:aws:kinesis:us-east-2:123456789012:stream/lambda-stream",
	"FunctionArn": "arn:aws:lambda:us-east-2:123456789012:function:my-function",
	"LastModified": 1541139209.351,
	"LastProcessingResult": "No records processed",
	"State": "Creating",
	"StateTransitionReason": "User action"
}

Handling low traffic with Batch Window

Previously, you could use Batch Size to handle low volumes, or handle tasks that were not time sensitive. Batch Size configures the number of records to read from a shard, up to 10,000. The payload limit of a single invocation is 6 MB.

In September, we launched Batch Window, which allows you to fine tune when Lambda invocations occur. Lambda normally reads records from a Kinesis data stream at a particular interval. This feature is ideal in situations where data is sparse and batches of data take time to build up.

Using Batch Window, you can set your function to wait up to 300 seconds for a batch to build before processing it. Your function is still invoked earlier when other conditions are met, such as the payload reaching its size limit or Batch Size reaching its maximum value. With Batch Window, you can manage the average number of records processed by the function with each invocation. This allows you to increase the efficiency of each invocation and reduce the total number of invocations.

Batch Window is set when adding a new event trigger in the AWS Lambda console.

Adding an event source trigger in the AWS Lambda console

Adding an event source trigger in the AWS Lambda console

It can also be set using AWS CLI with the --maximum-batching-window-in-seconds parameter.

$ aws lambda create-event-source-mapping --function-name my-function \
--maximum-batching-window-in-seconds 300 --batch-size 100 --starting-position AT_TIMESTAMP --starting-position-timestamp 1541139109 \
--event-source-arn arn:aws:kinesis:us-east-2:123456789012:stream/lambda-stream
{
	"UUID": "2b733gdc-8ac3-cdf5-af3a-1827b3b11284",
	"BatchSize": 100,
	"MaximumBatchingWindowInSeconds": 300,
	"EventSourceArn": "arn:aws:kinesis:us-east-2:123456789012:stream/lambda-stream",
	"FunctionArn": "arn:aws:lambda:us-east-2:123456789012:function:my-function",
	"LastModified": 1541139209.351,
	"LastProcessingResult": "No records processed",
	"State": "Creating",
	"StateTransitionReason": "User action"
}

Conclusion

You now have new options for managing scale in Amazon Kinesis and Amazon DynamoDB stream processing. The Batch Window parameter allows you to tune how long to wait before processing a batch, which is ideal for low traffic or tasks that aren’t time sensitive. The Parallelization Factor parameter enables faster stream processing of ordered records at high volume, using concurrent Lambda invocations per shard. Both of these features can lead to more efficient stream processing.