Tag Archives: Amazon API Gateway

Introducing the serverless LAMP stack – part 2 relational databases

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/introducing-the-serverless-lamp-stack-part-2-relational-databases/

In this post, you learn how to use an Amazon Aurora MySQL relational database in your serverless applications. I show how to pool and share connections to the database with Amazon RDS Proxy, and how to choose configurations. The code examples in this post are written in PHP and can be found in this GitHub repository. The concepts can be applied to any AWS Lambda supported runtime.

The serverless LAMP stack

This serverless LAMP stack architecture was first discussed in this post. The architecture uses a PHP Lambda function (or multiple functions) to read and write to an Amazon Aurora MySQL database.

Amazon Aurora provides high performance and availability for MySQL and PostgreSQL databases. The underlying storage scales automatically to meet demand, up to 64 tebibytes (TiB). An Amazon Aurora DB instance is created inside a virtual private cloud (VPC) to prevent public access. To connect to the Aurora database instance from a Lambda function, that Lambda function must be configured to access the same VPC.

Database memory exhaustion can occur when connecting directly to an RDS database. This is caused by a surge in database connections or by a large number of connections opening and closing at a high rate. This can lead to slower queries and limited application scalability. Amazon RDS Proxy solves this problem. RDS Proxy is a fully managed database proxy feature for Amazon RDS. It establishes a database connection pool that sits between your application and your relational database and reuses connections in this pool. This protects the database against oversubscription, without the memory and CPU overhead of opening a new database connection each time. Credentials for the database connection are securely stored in AWS Secrets Manager and accessed via an AWS Identity and Access Management (IAM) role. This enforces strong authentication requirements for database applications without a costly migration effort for the DB instances themselves.

The following steps show how to connect to an Amazon Aurora MySQL database running inside a VPC. The connection is made from a Lambda function running PHP. The Lambda function connects to the database via RDS Proxy. The database credentials that RDS Proxy uses are held in Secrets Manager and accessed via IAM authentication.

RDS Proxy with IAM authentication

Getting started

RDS Proxy is currently in preview and not recommended for production workloads. For a full list of available Regions, refer to the RDS Proxy pricing page.

Creating an Amazon RDS Aurora MySQL database

Before creating an Aurora DB cluster, you must meet the prerequisites, such as creating a VPC and an RDS DB subnet group. For more information on how to set this up, see DB cluster prerequisites.

  1. Call the create-db-cluster AWS CLI command to create the Aurora MySQL DB cluster.
    aws rds create-db-cluster \
    --db-cluster-identifier sample-cluster \
    --engine aurora-mysql \
    --engine-version 5.7.12 \
    --master-username admin \
    --master-user-password secret99 \
    --db-subnet-group-name default-vpc-6cc1cf0a \
    --vpc-security-group-ids sg-d7cf52a3 \
    --enable-iam-database-authentication true
  2. Add a new DB instance to the cluster.
    aws rds create-db-instance \
        --db-instance-class db.r5.large \
        --db-instance-identifier sample-instance \
        --engine aurora-mysql  \
        --db-cluster-identifier sample-cluster
  3. Store the database credentials as a secret in AWS Secrets Manager.
    aws secretsmanager create-secret \
    --name MyTestDatabaseSecret \
    --description "My test database secret created with the CLI" \
    --secret-string '{"username":"admin","password":"secret99","engine":"mysql","host":"<REPLACE-WITH-YOUR-DB-WRITER-ENDPOINT>","port":"3306","dbClusterIdentifier":"<REPLACE-WITH-YOUR-DB-CLUSTER-NAME>"}'

    Make a note of the resulting ARN, which you need in a later step.

    {
        "VersionId": "eb518920-4970-419f-b1c2-1c0b52062117", 
        "Name": "MySampleDatabaseSecret", 
        "ARN": "arn:aws:secretsmanager:eu-west-1:1234567890:secret:MySampleDatabaseSecret-JgEWv1"
    }

    This secret is used by RDS Proxy to maintain a connection pool to the database. To access the secret, the RDS Proxy service requires permissions to be explicitly granted.

  4. Create an IAM policy that provides secretsmanager permissions to the secret.
    aws iam create-policy \
    --policy-name my-rds-proxy-sample-policy \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": [
            "secretsmanager:GetResourcePolicy",
            "secretsmanager:GetSecretValue",
            "secretsmanager:DescribeSecret",
            "secretsmanager:ListSecretVersionIds"
          ],
          "Resource": [
            "<the-arn-of-the-secret>”
          ]
        },
        {
          "Sid": "VisualEditor1",
          "Effect": "Allow",
          "Action": [
            "secretsmanager:GetRandomPassword",
            "secretsmanager:ListSecrets"
          ],
          "Resource": "*"
        }
      ]
    }'
    

    Make a note of the resulting policy ARN, which you need to attach to a new role.

    {
        "Policy": {
            "PolicyName": "my-rds-proxy-sample-policy", 
            "PermissionsBoundaryUsageCount": 0, 
            "CreateDate": "2020-06-04T12:21:25Z", 
            "AttachmentCount": 0, 
            "IsAttachable": true, 
            "PolicyId": "ANPA6JE2MLNK3Z4EFQ5KL", 
            "DefaultVersionId": "v1", 
            "Path": "/", 
            "Arn": "arn:aws:iam::1234567890112:policy/my-rds-proxy-sample-policy", 
            "UpdateDate": "2020-06-04T12:21:25Z"
         }
    }
    
  5. Create an IAM Role that has a trust relationship with the RDS Proxy service. This allows the RDS Proxy service to assume this role to retrieve the database credentials.

    aws iam create-role --role-name my-rds-proxy-sample-role --assume-role-policy-document '{
     "Version": "2012-10-17",
     "Statement": [
      {
       "Sid": "",
       "Effect": "Allow",
       "Principal": {
        "Service": "rds.amazonaws.com"
       },
       "Action": "sts:AssumeRole"
      }
     ]
    }'
    
  6. Attach the new policy to the role:
    aws iam attach-role-policy \
    --role-name my-rds-proxy-sample-role \
    --policy-arn arn:aws:iam::123456789:policy/my-rds-proxy-sample-policy
    

Create an RDS Proxy

  1. Use the AWS CLI to create a new RDS Proxy. Replace the --role-arn and SecretArn values with the values created in the previous steps.
    aws rds create-db-proxy \
    --db-proxy-name sample-db-proxy \
    --engine-family MYSQL \
    --auth '{
            "AuthScheme": "SECRETS",
            "SecretArn": "arn:aws:secretsmanager:eu-west-1:123456789:secret:exampleAuroraRDSsecret1-DyCOcC",
             "IAMAuth": "REQUIRED"
          }' \
    --role-arn arn:aws:iam::123456789:role/my-rds-proxy-sample-role \
    --vpc-subnet-ids  subnet-c07efb9a subnet-2bc08b63 subnet-a9007bcf
    

    To enforce IAM authentication for users of the RDS Proxy, the IAMAuth value is set to REQUIRED. This is a more secure alternative to embedding database credentials in the application code base.

    The Aurora DB cluster and its associated instances are referred to as the targets of that proxy.

  2. Add the database cluster to the proxy with the register-db-proxy-targets command.
    aws rds register-db-proxy-targets \
    --db-proxy-name sample-db-proxy \
    --db-cluster-identifiers sample-cluster
    

Deploying a PHP Lambda function with VPC configuration

This GitHub repository contains a Lambda function with a PHP runtime provided by a Lambda layer. The function uses the MySQLi PHP extension to connect to the RDS Proxy. The extension is compiled into the PHP executable when the custom runtime is built.
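As a rough sketch, compiling PHP on Amazon Linux with the mysqli extension enabled might look like the following. The PHP version, installation prefix, and configure flags are illustrative assumptions rather than the exact command used to build the binary in the repository:

# Illustrative only: compile PHP with the mysqli extension on Amazon Linux.
# The version number and configure flags are placeholders.
curl -sL https://www.php.net/distributions/php-7.x.y.tar.gz | tar -xz
cd php-7.x.y
./configure --prefix=/opt/php --with-mysqli --with-openssl --with-curl --with-zlib
make -j$(nproc) && make install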

The PHP executable is packaged together with a Lambda bootstrap file to create a PHP custom runtime. More information on building your own custom runtime for PHP can be found in this post.

Deploy the application stack using the AWS Serverless Application Model (AWS SAM) CLI:

sam deploy -g

When prompted, enter the SecurityGroupIds and the SubnetIds for your Aurora DB cluster.

The SAM template attaches the SecurityGroupIds and SubnetIds parameters to the Lambda function using the VpcConfig sub-resource.
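A minimal sketch of that configuration is shown below. The function and parameter names follow the ones used in this walkthrough, and the rest of the function definition is omitted:

  PHPHelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      # ... handler, layers, and other properties omitted ...
      VpcConfig:
        SecurityGroupIds: !Ref SecurityGroupIds
        SubnetIds: !Ref SubnetIds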

Lambda creates an elastic network interface for each combination of security group and subnet in the function’s VPC configuration. The function can only access resources (and the internet) through that VPC.

Adding RDS Proxy to a Lambda Function

  1. Go to the Lambda console.
  2. Choose the PHPHelloFunction that you just deployed.
  3. Choose Add database proxy at the bottom of the page.
  4. Choose existing database proxy then choose sample-db-proxy.
  5. Choose Add.

Using the RDS Proxy from within the Lambda function

The Lambda function imports three libraries from the AWS PHP SDK. These are used to generate a password token from the database credentials stored in Secrets Manager.

The AWS PHP SDK libraries are provided by the PHP-example-vendor layer. Using Lambda layers in this way creates a mechanism for incorporating additional libraries and dependencies as the application evolves.

The function’s handler, named index, is the entry point of the function code. First, getenv() is called to retrieve the environment variables set by the SAM application’s deployment. These are saved as local variables and available for the duration of the Lambda function’s execution.

The AuthTokenGenerator class generates an RDS auth token for use with IAM authentication. This is initialized by passing in the credential provider to the SDK client constructor. The createToken() method is then invoked, with the Proxy endpoint, port number, Region, and database user name provided as method parameters. The resultant temporary token is then used to connect to the proxy.

The PHP mysqli class represents a connection between PHP and a MySQL database. The real_connect() method is used to open a connection to the database via RDS Proxy. Instead of providing the database host endpoint as the first parameter, the proxy endpoint is given. The database user name, temporary token, database name, and port number are also provided. The constant MYSQLI_CLIENT_SSL is set to ensure that the connection uses SSL encryption.

Once a connection has been established, the connection object can be used. In this example, a SHOW TABLES query is executed. The connection is then closed, and the result is encoded to JSON and returned from the Lambda function.
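Putting those pieces together, a condensed sketch of the function body looks like this. The environment variable names are assumptions for illustration; the repository’s code may structure this differently:

use Aws\Credentials\CredentialProvider;
use Aws\Rds\AuthTokenGenerator;

function index($data)
{
    // Environment variables set at deployment (names assumed for illustration)
    $endpoint = getenv('PROXY_ENDPOINT');
    $user     = getenv('DB_USER');
    $database = getenv('DB_NAME');
    $region   = getenv('AWS_REGION');
    $port     = 3306;

    // Generate a temporary IAM authentication token for the proxy endpoint
    $generator = new AuthTokenGenerator(CredentialProvider::defaultProvider());
    $token = $generator->createToken($endpoint . ':' . $port, $region, $user);

    // Connect to RDS Proxy over SSL, using the token in place of a password
    $mysqli = mysqli_init();
    $mysqli->real_connect($endpoint, $user, $token, $database, $port, null, MYSQLI_CLIENT_SSL);

    // Run a query, close the connection, and return the result as JSON
    $result = $mysqli->query('SHOW TABLES');
    $tables = $result->fetch_all();
    $mysqli->close();

    return json_encode($tables);
}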


RDS Proxy monitoring and performance tuning

RDS Proxy allows you to monitor and adjust connection limits and timeout intervals without changing application code.

Limit the timeout wait period that is most suitable for your application with the connection borrow timeout option. This specifies how long to wait for a connection to become available in the connection pool before returning a timeout error.

Adjust the idle connection timeout interval to help your applications handle stale resources. This can save your application from mistakenly leaving open connections that hold important database resources.

Multiple applications using a single database can each use an RDS Proxy to divide the connection quotas across each application. Set the maximum proxy connections as a percentage of the max_connections configuration (for MySQL).

The following example shows how to change the MaxConnectionsPercent setting for a proxy target group.

aws rds modify-db-proxy-target-group \
--db-proxy-name sample-db-proxy \
--target-group-name default \
--connection-pool-config '{"MaxConnectionsPercent": 75 }'

Response:

{
    "TargetGroups": [
        {
            "DBProxyName": "sample-db-proxy",
            "TargetGroupName": "default",
            "TargetGroupArn": "arn:aws:rds:eu-west-1:####:target-group:prx-tg-03d7fe854604e0ed1",
            "IsDefault": true,
            "Status": "available",
            "ConnectionPoolConfig": {
                "MaxConnectionsPercent": 75,
                "MaxIdleConnectionsPercent": 50,
                "ConnectionBorrowTimeout": 120,
                "SessionPinningFilters": []
            },
            "CreatedDate": "2020-06-04T16:14:35.858000+00:00",
            "UpdatedDate": "2020-06-09T09:08:50.889000+00:00"
        }
    ]
}

When RDS Proxy detects a session state change that isn’t appropriate for reuse, it keeps that session on the same connection until the session ends. This behavior is called pinning. Performance tuning for RDS Proxy involves maximizing connection reuse by minimizing pinning.

The Amazon CloudWatch metric DatabaseConnectionsCurrentlySessionPinned can be monitored to see how frequently pinning occurs in your application.
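For example, you could retrieve this metric for the proxy with the AWS CLI. This sketch assumes the metric is published under the AWS/RDS namespace with a ProxyName dimension, and uses illustrative timestamps:

aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name DatabaseConnectionsCurrentlySessionPinned \
    --dimensions Name=ProxyName,Value=sample-db-proxy \
    --start-time 2020-06-09T00:00:00Z \
    --end-time 2020-06-10T00:00:00Z \
    --period 300 \
    --statistics Maximum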

Amazon CloudWatch collects and processes raw data from RDS Proxy into readable, near real-time metrics. Use these metrics to observe the number of connections and the memory associated with connection management. This can help identify if a database instance or cluster would benefit from using RDS Proxy. For example, if it is handling many short-lived connections, or opening and closing connections at a high rate.

Conclusion

In this post, you learn how to create and configure an RDS Proxy to manage connections from a PHP Lambda function to an Aurora MySQL database. You see how to enforce strong authentication requirements by using Secrets Manager and IAM authentication. You deploy a Lambda function that uses Lambda layers to store the AWS PHP SDK as a dependency.

You can create secure, scalable, and performant serverless applications with relational databases. Do this by placing the RDS Proxy service between your database and your Lambda functions. You can also migrate your existing MySQL database to an Aurora DB cluster without altering the database. Using RDS Proxy and Lambda, you can build serverless PHP applications faster, with less code.

Find more PHP examples with the Serverless LAMP stack.

Building a location-based, scalable, serverless web app – part 3

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-location-based-scalable-serverless-web-app-part-3/

In part 2, I cover the API configuration, geohashing algorithm, and real-time messaging architecture used in the Ask Around Me web application. These are needed for receiving and processing questions and answers, and sending results back to users in real time.

In this post, I explain the backend processing architecture, how data is aggregated, and how to deploy the final application to production. The code and instructions for this application are available in the GitHub repo.

Processing questions

The frontend sends new user questions to the backend via the POST questions API. While the predicted volume of questions is only 1,000 per hour, it’s possible for usage to spike unexpectedly. To help handle this load, the PostQuestions Lambda function puts incoming questions onto an Amazon SQS queue. The ProcessQuestions function takes messages from the Questions queue in batches of 10, and loads these into the Questions table in Amazon DynamoDB.

Questions processing architecture

This asynchronous process smooths out traffic spikes, ensuring that the application is not throttled by DynamoDB. It also provides consistent response times to the front-end POST request, since the API call returns as soon as the message is durably persisted to the queue.

Currently, the ProcessQuestions function does not parse or validate user questions. It would be easy to add message filtering at this stage, using Amazon Comprehend to detect sentiment or inappropriate language. These changes would increase the processing time per question, but by handling this asynchronously, the initial POST API latency is not adversely affected.

The ProcessQuestions function uses the Geo Library for Amazon DynamoDB, which converts the question’s latitude and longitude into a geohash. This geohash attribute is one of the indexes in the underlying DynamoDB table. The GetQuestions function uses the same library to efficiently query questions based on proximity to the user.
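A simplified sketch of how the processing function might store a question with this library is shown below. The attribute names are illustrative; see the repo for the actual implementation:

const AWS = require('aws-sdk')
const ddbGeo = require('dynamodb-geo')

const ddb = new AWS.DynamoDB()
const config = new ddbGeo.GeoDataManagerConfiguration(ddb, process.env.TableName)
config.hashKeyLength = 5
const myGeoTableManager = new ddbGeo.GeoDataManager(config)

// Store a question, letting the library derive the geohash from the coordinates
const saveQuestion = async (question) => {
  await myGeoTableManager.putPoint({
    RangeKeyValue: { S: question.ID },
    GeoPoint: {
      latitude: question.lat,
      longitude: question.lng
    },
    PutItemInput: {
      Item: {
        author: { S: question.author },
        question: { S: question.text }
      }
    }
  }).promise()
}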

There are a couple of different mechanisms used to pass information between the frontend and backend applications. When the frontend first initializes, it retrieves the current location of the user from the browser. It then calls the questions API to get a list of active questions within 5 miles of the current location. This retrieves the state up to this point in time. To receive notifications of new messages posted in the user’s area, the frontend also subscribes to the geohash topic in AWS IoT Core.

Processing answers

Answers processing architecture

The application allows two types of question that have different answer types. First, the rating questions accept an answer with a 0–5 score range. Second, the geography questions accept a geo-point, which is a latitude and longitude representing a location.

Similar to the way questions are handled, answers are also queued before processing. However, the PostAnswers Lambda function sends answers to different queues, depending on question type. Ratings messages are sent to the StarAnswers queue, while geography messages are routed to the GeoAnswers queue. Star ratings are saved as raw data in the Answers table by the ProcessAnswerStar function. Geography answers are first converted to a geohash before they are stored.

It’s possible for users to submit updates to their answers. For a star rating, the processing function simply saves the new score. For geography answers, if the updated answer contains a latitude and longitude close enough to the original answer, it results in the same geohash. This matters because of the different aggregation processes used for these two answer types.

Aggregating data

In this application, the users asking questions are seeking aggregated answers instead of raw data. For example, “How do you rate the park?” shows an average score from users instead of thousands of individual ratings. To maintain performance, this aggregation occurs when new answers are saved to the database, not when the application fetches the question list.

The Answers table emits updates to a DynamoDB stream whenever new items are inserted or updated. The StreamSpecification parameter in the table definition is set to NEW_AND_OLD_IMAGES, meaning the stream record contains both the new and old item record.

New answers to questions are new items in the table, so the stream record only contains the new image. If users update their answers, this creates an updated item in the table, and the stream record contains both the new and old images of the item.
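As a sketch, the aggregation function can read both images from each stream record to work out what changed. The attribute names here are assumptions for illustration; the deltas then feed the update expressions shown below:

// Invoked by the DynamoDB stream on the Answers table (NEW_AND_OLD_IMAGES)
exports.handler = async (event) => {
  for (const record of event.Records) {
    const newImage = record.dynamodb.NewImage
    const oldImage = record.dynamodb.OldImage

    // A new answer has no old image; an updated answer has both
    const newScore = parseInt(newImage.score.N, 10)
    const oldScore = oldImage ? parseInt(oldImage.score.N, 10) : 0

    const deltaValue = newScore - oldScore   // change in total score
    const deltaAnswers = oldImage ? 0 : 1    // only new answers add to the count

    // ... apply the deltas to the Questions table using an update expression
  }
}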

For star ratings, when receiving an updated rating, the Aggregation function uses both images to calculate the delta in the score. For example, if the old rating was 2 and the user changes this to 5, then the delta is 3. The summary score related to the answer is updated in the Questions table, using a DynamoDB update expression:

    const result = await myGeoTableManager.updatePoint({
      RangeKeyValue: { S: item.ID }, 
      GeoPoint: {
        latitude: item.lat,
        longitude: item.lng
      },
      UpdateItemInput: {
        UpdateExpression: 'ADD answers :deltaAnswers, totalScore :deltaTotalScore',
        ExpressionAttributeValues: {
          ':deltaAnswers': { N: item.deltaAnswers.toString()},
          ':deltaTotalScore': { N: item.deltaValue.toString()}
        }
      }
    }).promise()

For geo-point ratings, the same approach is used but if the geohash changes, then the delta is -1 for the geohash in the old image, and +1 for the geohash in the new image. The update expression automatically creates a new geohash attribute on the DynamoDB item if it is not already present:

    const result = await myGeoTableManager.updatePoint({
      RangeKeyValue: { S: item.ID }, 
      GeoPoint: {
        latitude: item.lat,
        longitude: item.lng
      },
      UpdateItemInput: {
        UpdateExpression: `ADD ${item.geohash} :deltaAnswers, answers :deltaAnswers`,
        ExpressionAttributeValues: {
          ':deltaAnswers': { N: item.deltaAnswers.toString() }          
        }
      }
    }).promise()

By using a Lambda function as a DynamoDB stream processor, you can aggregate large amounts of data in near real time. The Questions and Answers tables have a one-to-many relationship – many answers belong to one question. As answers are saved, the aggregation process updates the summaries in the Questions table.

The Questions table also publishes updates to another DynamoDB stream. These are consumed by a Lambda function that sends the aggregated update to topics in AWS IoT Core. This is how updated scores are sent back to the frontend client application.
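A minimal sketch of that publishing function is shown below, assuming the AWS SDK’s IotData client and a topic named after the question’s geohash:

const AWS = require('aws-sdk')

// The IoT endpoint is passed in as an environment variable
const iotData = new AWS.IotData({ endpoint: process.env.IOT_DATA_ENDPOINT })

// Invoked by the DynamoDB stream on the Questions table
exports.handler = async (event) => {
  for (const record of event.Records) {
    const question = AWS.DynamoDB.Converter.unmarshall(record.dynamodb.NewImage)

    // Publish the updated question summary to its geohash topic
    await iotData.publish({
      topic: question.geohash.toString(),
      qos: 0,
      payload: JSON.stringify(question)
    }).promise()
  }
}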

Publishing to production with Amplify Console

At this point, you can run the application on your local development machine and view the application via the localhost Vue.js server. Once you are ready to launch the application to users, you must deploy to production.

Single-page applications are easy to deploy publicly. The build process creates static HTML, JS, and CSS files. These can be served via Amazon S3 and Amazon CloudFront, together with any image and media assets used. The build and deployment process can be automated using AWS Amplify Console.

In this walkthrough, I use GitHub as the repo provider. You can also use AWS CodeCommit, Bitbucket, GitLab, or upload the build directory from your machine.

To deploy the front end via Amplify Console:

  1. From the AWS Management Console, select the Services dropdown and choose AWS Amplify. From the initial splash screen, choose Get Started under Deploy.
  2. Select GitHub as the repository provider, then choose Continue.
  3. Follow the prompts to enable GitHub access, then select the repository dropdown and choose the repo. In the Branch dropdown, choose master. Choose Next.
  4. In the App build and test settings page, choose Next.
  5. In the Review page, choose Save and deploy.
  6. The final screen shows the deployment pipeline for the connected repo, starting at the Provision phase.

After a few minutes, the Build, Deploy, and Verify steps show green checkmarks. Open the URL in a browser, and you see that the application is now served by the public URL:

Ask Around Me - Deployed application

Finally, before logging in, you must add the URL to the list of allowed URLs in the Auth0 settings:

  1. Log into Auth0 and navigate to the dashboard.
  2. Choose Applications in the menu, then select Ask Around Me from the list of applications.
  3. On the Settings tab, add the application’s URL to Allowed Callback URLs, Allowed Logout URLs, and Allowed Web Origins. Separate it from the existing values using a comma.
  4. Choose Save changes. This allows the new published domain name to interact with Auth0 for authenticating your application’s users.

Anytime you push changes to the code repository, Amplify Console detects the commit and redeploys the application. If errors are detected, the existing version is presented to users. If there are no errors, the new version is served to visitors.

Conclusion

In the last part of this series, I show how the application queues posted questions and answers. I explain how this asynchronous approach smooths traffic spikes and helps maintain responsive APIs.

I cover how answers are collected from thousands of users and are aggregated using DynamoDB streams. These totals are saved as summaries in the Questions table, and live updates are pushed via AWS IoT Core back to the frontend.

Finally, I show how you can automate deployment using Amplify Console. By connecting the service directly with your code repository, it publishes and serves your application with no need to manually copy files.

To learn more about this application, see the accompanying GitHub repo.

Building a location-based, scalable, serverless web app – part 2

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-location-based-scalable-serverless-web-app-part-2/

Part 1 introduces the Ask Around Me web application that allows users to send questions to other local users in real time. I explain the app’s functionality and how using a single-page application (SPA) framework complements a serverless backend. I configure Auth0 for authentication and show how to deploy the frontend and backend. I also introduce how SPA frontends can send and receive data using both a traditional API and real-time messaging via a WebSocket.

In this post, I review the backend architecture, Amazon API Gateway’s HTTP APIs, and the geohashing implementation. The code and instructions for this application are available in the GitHub repo.

Architecture overview

After deploying the application using the repo’s README.md instructions, the backend architecture looks like this:

Ask Around Me backend architecture

The Vue.js frontend primarily interacts with the backend via HTTP APIs using Amazon API Gateway. When users submit questions or answers, the data is sent via the POST API endpoints. When the frontend requests lists of questions or answers, this occurs via the GET API endpoints.

Incoming questions and answers are posted to separate Amazon SQS queues. These queues invoke AWS Lambda functions that process and store the data in the application’s Amazon DynamoDB tables. In the Questions table, the application saves geo-location data and aggregated statistics for each question. The Answers table maintains a record of user IDs and answers to ensure that each user can only post one answer per question.

When new answers are stored in the Answers tables, a DynamoDB stream triggers a Lambda aggregation function with the update. This calculates average scores for questions and aggregates data for the heat map, then stores the result in the main Questions table. When the Questions table is updated, this DynamoDB stream invokes the Publish Lambda function. This publishes updates to the relevant topic in AWS IoT Core, which the front-end application subscribes to.

Using HTTP APIs

API Gateway is a common integration service used between the frontend and backend of serverless web applications. You can choose between the standard REST APIs, and the newer HTTP APIs. The choice depends upon which features you need, and cost considerations for your workload.

This application uses JWT authentication via Auth0 and Lambda proxy integration, and both are supported by HTTP APIs. Many advanced features like API key management, Amazon Cognito integration, and usage plans are not required in this application. It’s also important to compare the cost of each service:

API type             Hourly     Daily        Annually
PUT questions        1,000      24,000       8,760,000
GET questions        50,000     1,200,000    438,000,000
PUT answers          10,000     240,000      87,600,000
Total API requests                           534,360,000
REST APIs cost                               $1,870.26
HTTP APIs cost                               $534.36

Using the predicted API usage covered in part 1, you can compare the REST APIs and HTTP APIs overall cost. At an estimated $534 annually, the HTTP APIs option is approximately 30% of the cost of REST APIs.
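The arithmetic behind those totals is straightforward, using the per-million request prices in effect at the time of writing (around $3.50 per million requests for REST APIs and $1.00 per million for HTTP APIs at this usage level):

Total requests:  8,760,000 + 438,000,000 + 87,600,000 = 534,360,000 per year
REST APIs:       534.36 million x $3.50 per million = $1,870.26 per year
HTTP APIs:       534.36 million x $1.00 per million = $534.36 per year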

The AWS Serverless Application Model (SAM) template in the repo defines the HTTP API resource and CORS configuration. It also includes the Auth0 authorizer used to validate each API request:

  MyApi:
    Type: AWS::Serverless::HttpApi
    Properties:
      Auth:
        Authorizers:
          MyAuthorizer:
            JwtConfiguration:
              issuer: !Ref Auth0issuer
              audience:
                - https://auth0-jwt-authorizer
            IdentitySource: "$request.header.Authorization"
        DefaultAuthorizer: MyAuthorizer

      CorsConfiguration:
        AllowMethods:
          - GET
          - POST
          - DELETE
          - OPTIONS
        AllowHeaders:
          - "*"
        AllowOrigins:
          - "*"

With the HTTP API resource defined, each Lambda function has an event configuration referencing this resource. All the functions referencing the HTTP API resource automatically use the Auth0 authorizer.

  GetAnswersFunction: 
    Type: AWS::Serverless::Function
    Properties:
      Description: Get all answers for a question
      ... 
      Events:
        Get:
          Type: HttpApi
          Properties:
            Path: /answers/{Key}
            Method: get
            ApiId: !Ref MyApi    

Using geohashing in web applications

A key part of the functionality in Ask Around Me is the ability to find and answer questions near the user. Given the expected volume of questions in this system, this requires an efficient way to query based upon location that maintains performance as traffic grows.

In a naïve implementation, you might compare the current geographical position of the user with the geo-location of each question and answer in the database. But with an expected 1,000 questions per hour, this would soon become a slow operation with O(n) performance.

A more efficient solution is geohashing. This divides the geographical area of the planet into a series of grid cells that are identified by an alphanumeric hash. The first character of the hash identifies one of 32 cells in the grid, roughly 5000 km x 5000 km on the planet. The second character identifies one of 32 squares in that first cell, so combining the first two characters provides a resolution of approximately 1250 km x 1250 km. By the 12th character in the hash, you can identify an area as small as a couple of square inches on Earth. For a more detailed explanation, see this geohashing site.

When using this algorithm, it’s important to choose the correct level of resolution. For Ask Around Me, the frontend searches for questions within 5 miles of the user. You can identify these areas with a 5-character hash. This means you can compare the user’s current location using their geohash, to the geohash stored in the Questions table. This comparison allows you to immediately discard most questions from the search and quickly find the relevant items.

This solution uses the Geo Library for Amazon DynamoDB npm library. Both the GET and POST questions APIs use this library to calculate the geohash when storing and fetching questions. The library requires a dedicated DynamoDB table, which is why user answers are stored in a separate table.

The GET questions API uses the latitude and longitude from the query parameters to query the underlying DynamoDB using this library:

const AWS = require('aws-sdk')
AWS.config.update({region: process.env.AWS_REGION})

const ddb = new AWS.DynamoDB() 
const ddbGeo = require('dynamodb-geo')
const config = new ddbGeo.GeoDataManagerConfiguration(ddb, process.env.TableName)
config.hashKeyLength = 5

const myGeoTableManager = new ddbGeo.GeoDataManager(config)
const SEARCH_RADIUS_METERS = 4000

exports.handler = async (event) => {

  const latitude = parseFloat(event.queryStringParameters.lat)
  const longitude = parseFloat(event.queryStringParameters.lng)

  // Get questions within geo range
  const result = await myGeoTableManager.queryRadius({
    RadiusInMeter: SEARCH_RADIUS_METERS,
    CenterPoint: {
      latitude,
      longitude
    }
  })

  return {
    statusCode: 200,
    body: JSON.stringify(result)
  }
}

The publish/subscribe pattern for real time in web apps

Modern web applications frequently use real-time notifications to keep users informed of state changes. You could achieve this with frequent polling of the APIs to fetch new information. However, this approach is usually wasteful, both in cost and compute terms, because most API calls do not return new information. Additionally, if updates are evenly distributed and you poll every n seconds, there is an average delay of n/2 seconds between data becoming available and your application receiving it.

Instead of polling, a better option for many web applications is a WebSocket. Data availability is closer to real time, and the messaging is less frequent. This can be important for web applications used on mobile devices where unnecessary messaging can impact battery life.

This approach uses the publish-subscribe pattern. The frontend makes subscriptions to a backend service, indicating topics of interest. The backend service receives messages from publishers, which are upstream processes in the application. It filters the messages and routes to the appropriate subscribers.

Although powerful, this can be complex to implement due to connectivity issues over networks. For a web application, users may turn off their devices, disconnect Wi-Fi, or become unreachable due to limited coverage. This pattern is generally forward-only, meaning you only receive messages after the point of subscription.

AWS IoT Core simplifies this process, and the JavaScript SDK handles the common reconnection issues. The backend application sends messages to topics in AWS IoT Core, and the frontend application subscribes to topics of interest. The service maintains the list of active publishers and subscribers, and routes messages between the two. It also automatically manages fan-out, which occurs when there are many subscribers to a single topic.

From a pricing perspective, this is also a cost-efficient approach. At the time of writing, AWS IoT Core costs $0.08 per million minutes of connection, and $1.00 per million messages. There are also no servers to manage, and the service scales automatically to handle your application’s load.

In the example application, the real-time connection is configured and managed in a single component, IoT.vue. This initiates a connection to an IoT endpoint when the application first starts, and listens for messages on subscribed topics. It passes data back to the global Vuex store so other components automatically receive updates with no dependency on the IoT component.

Choosing publish-subscribe topics for web apps

In a typical synchronous API call, the client application makes a specific request and receives a response from a backend service. With a topic-based subscription, the topic itself is the equivalent of the request, but you usually don’t receive immediate information.

In this web application, there are a number of topics that are potentially important to users. Some topics are shared across multiple users, while others are private to a single user:

  • Account-level topic: messages relating only to a single user ID, such as billing and notifications. These are intended for any devices where that user is logged in.
  • Per-question topic: when a user asks a question, they need alerts when new answers arrive. Each question ID maps to an individual topic. Anyone who asks or watches a question subscribes to this topic.
  • Geo-fenced alert topic: a user receives alerts when new questions are asked in their local area. In this case, the geohash of their location is the topic identifier. New questions are published to their geohash topics, and users within the same geohash area receive those messages.
  • A system-wide topic: this is a single topic that all users subscribe to. This is reserved for important messages for all application users.

In web applications, you subscribe to some topics when the application initializes, such as account-level or system-wide topics. Other subscriptions are dynamic. For example, you subscribe to a question ID topic only after posting a question, or subscribe to different geo-fence hashes when the user’s location changes.
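As a sketch, dynamic subscriptions with the AWS IoT device SDK look like the following. The client setup is omitted, and the topic and variable names are illustrative rather than taken from the repo:

// device is an aws-iot-device-sdk client that is already connected
// (credentials and endpoint configuration omitted for brevity)

// Static subscriptions made when the application initializes
device.subscribe('system-wide')
device.subscribe(`user-${userId}`)

// Dynamic subscriptions: switch geohash topics when the user's location changes
let currentGeohashTopic

function onLocationChange (newGeohash) {
  if (currentGeohashTopic) device.unsubscribe(currentGeohashTopic)
  currentGeohashTopic = newGeohash
  device.subscribe(currentGeohashTopic)
}

// Subscribe to a question's topic after posting it
function onQuestionPosted (questionId) {
  device.subscribe(`question-${questionId}`)
}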

Conclusion

This post explores the backend architecture of the Ask Around Me application. I compare the cost and features in deciding between REST APIs and HTTP APIs in API Gateway. I introduce geohashing and the npm library used to handle geo-location queries in DynamoDB. And I show how you can build real-time messaging into your web applications using the publish-subscribe pattern with AWS IoT Core.

To learn more, visit the application’s code repo on GitHub.

Introducing the new Serverless LAMP stack

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/introducing-the-new-serverless-lamp-stack/

This is the first in a series of posts for PHP developers. The series will explain how to use serverless technologies with PHP. It covers the available tools, frameworks and strategies to build serverless applications, and why now is the right time to start.

In future posts, I demonstrate how to use AWS Lambda for web applications built with PHP frameworks such as Laravel and Symfony. I show how to move from using Lambda as a replacement for web hosting functionality to a decoupled, event-driven approach. I cover how to combine multiple Lambda functions of minimal scope with other serverless services to create performant scalable microservices.

In this post, you learn how to use PHP with Lambda via the custom runtime API. Visit this GitHub repository for the sample code.

The Serverless LAMP stack

The challenges with traditional PHP applications

Scalability is an inherent challenge with the traditional LAMP stack. A scalable application is one that can handle highly variable levels of traffic. PHP applications are often scaled horizontally, by adding more web servers as needed. This is managed via a load balancer, which directs requests to various web servers. Each additional server brings additional overhead with networking, administration, storage capacity, backup and restore systems, and an update to asset management inventories. Additionally, each horizontally scaled server runs independently. This can result in configuration synchronization challenges.

Horizontal scaling with traditional LAMP stack applications.

New storage challenges arise as each server has its own disks and filesystem, often requiring developers to add a mechanism to handle user sessions. Using serverless technologies, scalability is managed for the developer.

If traffic surges, the services scale to meet the demand without having to deploy additional servers. This allows applications to quickly transition from prototype to production.

The serverless LAMP architecture

A traditional web application can be split into two components:

  • The static assets (media files, css, js)
  • The dynamic application (PHP, MySQL)

A serverless approach to serving these two components is illustrated below:

The serverless LAMP stack

All requests for dynamic content (anything excluding /assets/*) are forwarded to Amazon API Gateway. This is a fully managed service for creating, publishing, and securing APIs at any scale. It acts as the “front door” to the PHP application, routing requests downstream to Lambda functions. The Lambda functions contain the business logic and interaction with the MySQL database. You can pass the input to the Lambda function as any combination of request headers, path variables, query string parameters, and body.
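For example, with the Lambda proxy integration the incoming request is delivered to the function as a single event array. A minimal sketch of reading that input in PHP is shown below; the field names follow the API Gateway proxy event format, and the handler name is illustrative:

// Hypothetical handler receiving an API Gateway (Lambda proxy integration) event
function index($event)
{
    $name  = $event['queryStringParameters']['name'] ?? 'world';  // ?name=...
    $id    = $event['pathParameters']['id'] ?? null;              // /items/{id}
    $body  = json_decode($event['body'] ?? '{}', true);           // POST payload
    $agent = $event['headers']['User-Agent'] ?? 'unknown';        // request headers

    return json_encode(['message' => "Hello, {$name}", 'id' => $id, 'agent' => $agent]);
}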

Notable AWS features for PHP developers

Amazon Aurora Serverless

During re:Invent 2017, AWS announced Aurora Serverless, an on-demand serverless relational database with a pay-per-use cost model. This manages the responsibility of relational database provisioning and scaling for the developer.

Lambda Layers and custom runtime API

At re:Invent 2018, AWS announced two new Lambda features. These enable developers to build custom runtimes, and share and manage common code between functions.

Improved VPC networking for Lambda functions

In September 2019, AWS announced significant improvements in cold starts for Lambda functions inside a VPC. This results in faster function startup performance and more efficient usage of elastic network interfaces, reducing VPC cold starts.

Amazon RDS Proxy

At re:Invent 2019, AWS announced the launch of a new service called Amazon RDS Proxy. This fully managed database proxy sits between your application and your relational database, efficiently pooling and sharing database connections to improve the scalability of your application.

 

Significant moments in the serverless LAMP stack timeline

Combining these services, it is now possible to build secure, performant, and scalable serverless applications with PHP and relational databases.

Custom runtime API

The custom runtime API is a simple interface to enable Lambda function execution in any programming language or a specific language version. The custom runtime API requires an executable text file called a bootstrap. The bootstrap file is responsible for the communication between your code and the Lambda environment.

To create a custom runtime, you must first compile the required version of PHP in an Amazon Linux environment compatible with the Lambda execution environment. To do this, follow these step-by-step instructions.

The bootstrap file

The file below is an example of a basic PHP bootstrap file. This example is for explanation purposes as there is no error handling or abstractions taking place. To ensure that you handle exceptions appropriately, consult the runtime API documentation as you build production custom runtimes.

#!/opt/bin/php
<?php

// This invokes Composer's autoloader so that we'll be able to use Guzzle and any other 3rd party libraries we need.
require __DIR__ . '/vendor/autoload.php';

// This is the request processing loop. Barring unrecoverable failure, this loop runs until the environment shuts down.
do {
    // Ask the runtime API for a request to handle.
    $request = getNextRequest();

    // Obtain the function name from the _HANDLER environment variable and ensure the function's code is available.
    $handlerFunction = array_slice(explode('.', $_ENV['_HANDLER']), -1)[0];
    require_once $_ENV['LAMBDA_TASK_ROOT'] . '/src/' . $handlerFunction . '.php';

    // Execute the desired function and obtain the response.
    $response = $handlerFunction($request['payload']);

    // Submit the response back to the runtime API.
    sendResponse($request['invocationId'], $response);
} while (true);

function getNextRequest()
{
    $client = new \GuzzleHttp\Client();
    $response = $client->get('http://' . $_ENV['AWS_LAMBDA_RUNTIME_API'] . '/2018-06-01/runtime/invocation/next');

    return [
      'invocationId' => $response->getHeader('Lambda-Runtime-Aws-Request-Id')[0],
      'payload' => json_decode((string) $response->getBody(), true)
    ];
}

function sendResponse($invocationId, $response)
{
    $client = new \GuzzleHttp\Client();
    $client->post(
    'http://' . $_ENV['AWS_LAMBDA_RUNTIME_API'] . '/2018-06-01/runtime/invocation/' . $invocationId . '/response',
       ['body' => $response]
    );
}

The #!/opt/bin/php declaration instructs the program loader to use the PHP binary compiled for Amazon Linux.

The bootstrap file performs the following tasks, in an operational loop:

  1. Obtains the next request.
  2. Executes the code to handle the request.
  3. Returns a response.

Follow these steps to package the bootstrap and compiled PHP binary together into a `runtime.zip`.
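As an illustration of the expected layout (paths assumed, not taken from the repo): the bootstrap sits at the root of the layer and the PHP binary under bin/, so they appear at /opt/bootstrap and /opt/bin/php inside the execution environment:

# Illustrative packaging step – directory layout assumed
chmod +x bootstrap bin/php
zip -r runtime.zip bootstrap bin/php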

Libraries and dependencies

The runtime bootstrap uses an HTTP-based local interface. This retrieves the event payload for each Lambda function invocation and returns back the response from the function. This bootstrap file uses Guzzle, a popular PHP HTTP client, to make requests to the custom runtime API. The Guzzle package is installed using Composer package manager. Installing packages in this way creates a mechanism for incorporating additional libraries and dependencies as the application evolves.

Follow these steps to create and package the runtime dependencies into a `vendors.zip` binary.
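A condensed, illustrative version of that packaging might look like this, assuming Composer is installed locally:

# Illustrative only: install Guzzle with Composer and package the vendor directory
composer require guzzlehttp/guzzle
zip -r vendors.zip vendor/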

Lambda Layers provides a mechanism to centrally manage code and data that is shared across multiple functions. When a Lambda function is configured with a layer, the layer’s contents are put into the /opt directory of the execution environment. You can include a custom runtime in your function’s deployment package, or as a layer. Lambda executes the bootstrap file in your deployment package, if available. If not, Lambda looks for a runtime in the function’s layers. There are several open source PHP runtime layers available from the community.

The following steps show how to publish the `runtime.zip` and `vendors.zip` binaries created earlier into Lambda layers and use them to build a Lambda function with a PHP runtime:

  1. Use the AWS Command Line Interface (CLI) to publish layers from the binaries created earlier:
    aws lambda publish-layer-version \
        --layer-name PHP-example-runtime \
        --zip-file fileb://runtime.zip \
        --region eu-west-1

    aws lambda publish-layer-version \
        --layer-name PHP-example-vendor \
        --zip-file fileb://vendors.zip \
        --region eu-west-1

  2. Make note of each command’s LayerVersionArn output value (for example arn:aws:lambda:eu-west-1:XXXXXXXXXXXX:layer:PHP-example-runtime:1), which you’ll need for the next steps.

Creating a PHP Lambda function

You can create a Lambda function via the AWS CLI, the AWS Serverless Application Model (SAM), or directly in the AWS Management Console. To do this using the console:

  1. Navigate to the Lambda section of the AWS Management Console and choose Create function.
  2. Enter “PHPHello” into the Function name field, and choose Provide your own bootstrap in the Runtime field. Then choose Create function.
  3. Right click on bootstrap.sample and choose Delete.
  4. Choose the layers icon and choose Add a layer.
  5. Choose Provide a layer version ARN, then copy and paste the ARN of the custom runtime layer from step 1 into the Layer version ARN field.
  6. Repeat steps 4 and 5 for the vendor ARN.
  7. In the Function Code section, create a new folder called src and inside it create a new file called index.php.
  8. Paste the following code into index.php:
    //index function
    function index($data)
    {
     return "Hello, ". $data['name'];
    }
    
  9. Insert “index” into the Handler input field. This instructs Lambda to run the index function when invoked.
  10. Choose Save at the top right of the page.
  11. Choose Test at the top right of the page, and enter “PHPTest” into the Event name field. Enter the following into the event payload field and then choose Create: { "name": "world" }
  12. Choose Test and select the dropdown next to the execution result heading.

You can see that the event payload “name” value is used to return “hello world”. This is taken from the $data['name'] parameter provided to the Lambda function. The log output provides details about the actual duration, billed duration, and amount of memory used to execute the code.

Conclusion

This post explains how to create a Lambda function with a PHP runtime using Lambda Layers and the custom runtime API. It introduces the architecture for a serverless LAMP stack that scales with application traffic.

Lambda allows for functions with mixed runtimes to interact with each other. Now, PHP developers can join other serverless development teams focusing on shipping code. With serverless technologies, you no longer have to think about restarting web hosts, scaling, or hosting.

Start building your own custom runtime for Lambda.

Building a location-based, scalable, serverless web app – part 1

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-location-based-scalable-serverless-web-app-part-1/

Web applications represent a major category of serverless usage. When used with single-page application (SPA) frameworks for front-end development, you can create highly responsive apps. With a serverless backend, these apps can scale to hundreds of thousands of users without you managing a single server.

In this 3-part series, I demonstrate how to build an example serverless web application. The application includes authentication, real-time updates, and location-specific features. I explore the functionality, architecture, and design choices involved. I provide a complete code repository for both the front-end and backend. By the end of these posts, you can use these patterns and examples in your own web applications.

In this series:

  • Part 1: Deploy the frontend and backend applications, and learn about how SPA web applications interact with serverless backends.
  • Part 2: Review the backend architecture, Amazon API Gateway HTTP APIs, and the geohashing implementation.
  • Part 3: Understand the backend data processing and aggregation with Amazon DynamoDB, and the final deployment of the application to production.

The code uses the AWS Serverless Application Model (SAM), enabling you to deploy the application easily in your own AWS account. This walkthrough creates resources covered in the AWS Free Tier but you may incur cost for usage beyond development and testing.

To set up the example, visit the GitHub repo and follow the instructions in the README.md file.

Introducing “Ask Around Me” – The app for finding answers from local users

Ask Around Me is a web application that allows you to ask questions to a community of local users. It’s designed to be used on a smartphone browser.

 

Ask Around Me front end application

The front-end uses Auth0 for authentication. For simplicity, it supports social logins with other identity providers. Once a user is logged in, the app displays their local area:

No questions in your area

Users can then post questions to the neighborhood. Questions can be ratings-based (“How relaxing is the park?”) or geography-based (“Where is the best coffee?”).

Ask a new question

Posted questions are published to users within a 5-mile radius. Any user in this area sees new questions appear in the list automatically:

New questions in Ask Around Me

Other users answer questions by providing a star-rating or dropping a pin on a map. As the question owner, you see real-time average scores or a heat map, depending on the question type:

Ask Around Me Heatmap

The app is designed to be fun and easy to use. It uses authentication to ensure that votes are only counted once per user ID. It uses geohashing to ensure that users only see and answer questions within their local area. It also keeps the question list and answers up to date in real time to create a sense of immediacy.

In terms of traffic, the app is expected to receive 1,000 questions and 10,000 answers posted per hour. The query that retrieves local questions is likely to receive 50,000 requests per hour. In the course of these posts, I explore the architecture and services chosen to handle this volume. All of this is built serverlessly with cost effectiveness in mind. The cost scales in line with usage, and I discuss how to make the best use of the app budget in this scenario.

SPA frameworks and serverless backends

While you can apply a serverless backend to almost any type of web or mobile framework, SPA frameworks can make development much easier. For modern web development, SPA frameworks like React.js, Vue.js, and Angular have grown in popularity. They have become the standard way to build complex, rich front-ends.

These frameworks offer benefits to both front-end developers and users. For developers, you can create the application within an IDE and test locally with hot reloading, which renders new content in the same context in the browser. For users, it creates a web experience that’s similar to a traditional application, with reactive content and faster interactive capabilities.

When you build a SPA-based application, the build process creates HTML, JavaScript, and CSS files. You serve these static assets from an Amazon CloudFront distribution with an Amazon S3 bucket set as the origin. CloudFront serves these files from 216 global points of presence, resulting in low latency downloads regardless of where the user is located.

CloudFront/S3 app distribution

You can also use AWS Amplify Console, which can automate the build and deployment process. This is triggered by build events in your code repo so once you commit code changes, these are automatically deployed to production.

A traditional web server often serves the application’s static assets together with dynamic content. In this model, you offload the serving of all of the application assets to a global CDN. The dynamic application is a serverless backend powered by Amazon API Gateway and AWS Lambda. By using a SPA framework with a serverless backend, you can create performant, highly scalable web applications that are also easy to develop.

Configuring Auth0

This application integrates Auth0 for user authentication. The front-end calls out to this service when users are not logged in, and Auth0 provides an open standard JWT token after the user is authenticated. Before you can install and use the application, you must sign up for an Auth0 account and configure the application:

  1. Navigate to https://auth0.com/ and choose Sign Up. Complete the account creation process.
  2. From the dashboard, choose Create Application. Enter AskAroundMe as the name and select Single Page Web Applications for the Application Type. Choose Create.
  3. In the next page, choose the Settings tab. Copy the Client ID and Domain values to a text editor – you need these for setting up the Vue.js application later.
  4. Further down on this same tab, enter the value http://localhost:8080 into the Allowed Logout URLs, Allowed Callback URLs and Allowed Web Origins fields. Choose Save Changes.
  5. On the Connections tab, in the Social section, add google-oauth2 and twitter and ensure that the toggles are selected. This enables social sign-in for your application.

This configuration allows the application to interact with the Auth0 service from your local machine. In production, you must enter the domain name of the application in these fields. For more information, see Auth0’s documentation for Application Settings.

Deploying the application

In the code repo, there are separate directories for the front-end and backend applications. You must install the backend first. To complete this step, follow the detailed instructions in the repo’s README.md.

There are several important environment variables to note from the backend installation process:

  • IoT endpoint address and Cognito Pool ID: these are used for real-time messaging between the backend and frontend applications.
  • API endpoint: the base URL path for the backend’s APIs.
  • Region: the AWS Region where you have deployed the application.

Next, you deploy the Vue.js application from the frontend directory:

  1. The application uses the Google Maps API – sign up for a developer account and make a note of your API key.
  2. Open the main.js file in the src directory. Lines 45 through 62 contain the configuration section where you must add the environment variables above.

Ensure you complete the Auth0 configuration and remaining steps in the README.md file, then you are ready to test.

To launch the frontend application, run npm run serve to start the development server. The terminal shows the local URL where the application is now running:

Running the Vue.js app

Open a web browser and navigate to http://localhost:8080 to see the application.

How Vue.js applications work with a serverless backend

Unlike a traditional web application, SPA applications are loaded in the user’s browser and start executing JavaScript on the client-side. The app loads assets and initializes itself before communicating with the serverless backend. This lifecycle and behavior is comparable to a conventional desktop or mobile application.

Vue.js is a component-based framework. Each component optionally contains a user interface with related code and styling. Overall application state may be managed by a store – this example uses Vuex. You can use many of the patterns employed in this application in your own apps.

Auth0 provides a Vue.js component that automates storing and parsing the JWT token in the local browser. Each time the app starts, this component verifies the token and makes it available to your code. This app uses Vuex to manage the timing between the token becoming available and the app needing to request data.

The application completes several initialization steps before querying the backend for a list of questions to display:

Initialization process for the app

Several components can request data from the serverless backend via API Gateway endpoints. In src/views/HomeView.vue, the component loads a list of questions when it determines the location of the user:

const token = await this.$auth.getTokenSilently()
const url = `${this.$APIurl}/questions?lat=${this.currentLat}&lng=${this.currentLng}`
console.log('URL: ', url)
// Make API request with JWT authorization
const { data } = await axios.get(url, {
  headers: {
    // send access token through the 'Authorization' header
    Authorization: `Bearer ${token}`   
  }
})

// Commit question list to global store
this.$store.commit('setAllQuestions', data)

This process uses the Axios library to manage the HTTP request and pass the authentication token in the Authorization header. The resulting dataset is saved in the Vuex store. Since SPAs react to changes in data, any frontend component displaying that data refreshes automatically when it changes.

The src/components/IoT.vue component uses MQTT messaging via AWS IoT Core. This manages real-time updates published to the frontend. When a question receives a new answer, this component receives an update. The component updates the question status in the global store, and all other components watching this data automatically receive those updates:

        mqttClient.on('message', function (topic, payload) {
          const msg = JSON.parse(payload.toString())
          
          if (topic === 'new-answer') {
            _store.commit('updateQuestion', msg)
          } else {
            _store.commit('saveQuestion', msg)
          }
        })

The application uses both API Gateway synchronous queries and MQTT WebSocket updates to communicate with the backend application. As a result, you have considerable flexibility for tracking overall application state and providing your users with a responsive application experience.

Conclusion

In this post, I introduce the Ask Around Me example web application. I discuss the benefits of using single-page application (SPA) frameworks for both developers and users. I cover how they can create highly scalable and performant web applications when powered with a serverless backend.

You configure Auth0 and deploy the frontend and backend from the application’s GitHub repo, and I review the backend SAM template and the architecture it deploys.

In part 2, I will explain the backend architecture, the Amazon API Gateway configuration, and the geohashing implementation.

Best practices for organizing larger serverless applications

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/best-practices-for-organizing-larger-serverless-applications/

Well-designed serverless applications are decoupled, stateless, and use minimal code. As projects grow, a goal for development managers is to maintain the simplicity of design and low-code implementation. This blog post provides recommendations for designing and managing code repositories in larger serverless projects, and best practices for deploying releases of production systems.

Organizing your code repositories

Many serverless applications begin as monolithic applications. This can occur either because a simple application has grown more complex over time, or because developers are following existing development practices. A monolithic application is represented by a single AWS Lambda function performing multiple tasks, and a mono-repo is a single repository containing the entire application logic.

Monoliths work well for the simplest serverless applications that perform single-purpose functions. These are small applications such as cron jobs, data processing tasks, and some asynchronous processes. As those applications evolve into workflows or develop new features, it becomes important to refactor the code into smaller services.

Using frameworks such as the AWS Serverless Application Model (SAM) or the Serverless Framework can make it easier to group common pieces of functionality into smaller services. Each of these can have a separate code repository. For SAM, the template.yaml file contains all the resources and function definitions needed for an application. Consequently, breaking an application into microservices with separate templates is a simple way to split repos and resource groups.

Separate templates for microservices

In the smallest unit of a serverless application, it’s also possible to create one repository per function. If these functions are independent and do not share other AWS resources, this may be appropriate. Helper functions and simple event processing code are examples of candidates for this kind of repo structure.

In most cases, it makes sense to create repos around groups of functions and resources that define a microservice. In an ecommerce example, “Payment processing” is a microservice with multiple smaller related functions that share common resources.

As with any software, the repo design depends upon the use case and the structure of your development teams. One large repo makes it harder for developer teams to work on different features, and to test and deploy independently. Having too many repos can create duplicate code and make it difficult to share resources across repos. Finding the balance for your project is an important step in designing your application architecture.

Using AWS services instead of code libraries

AWS services are important building blocks for your serverless applications. These can frequently provide greater scale, performance, and reliability than bundled code packages with similar functionality.

For example, many web applications that are migrated to Lambda use web frameworks like Flask (for Python) or Express (for Node.js). Both packages support routing and separate user contexts that are well suited if the application is running on a web server. Using these packages in Lambda functions results in architectures like this:

Web servers in Lambda functions

In this case, Amazon API Gateway proxies all requests to the Lambda function to handle routing. As the application develops more routes, the Lambda function grows in size and deployments of new versions replace the entire function. It becomes harder for multiple developers to work on the same project in this context.

This approach is generally unnecessary, and it’s often better to take advantage of the native routing functionality available in API Gateway. In many cases, the web framework is not needed in the Lambda function at all, and only increases the size of the deployment package. API Gateway is also capable of validating parameters, reducing the need for checking parameters with custom code. It can also provide protection against unauthorized access, and a range of other features better handled at the service level. When using API Gateway this way, the new architecture looks like this:

Using API Gateway for routing

Additionally, the Lambda functions consist of less code and fewer package dependencies. This makes testing easier and reduces the need to maintain code library versions. Different developers in a team can work on separate routing functions independently, and it becomes simpler to reuse code in future projects. You can configure routes in API Gateway in the application’s SAM template:

Resources:
  GetProducts:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: getProducts/
      Handler: app.handler
      Runtime: nodejs12.x
      Events:
        GetProductsAPI:
          Type: Api 
          Properties:
            Path: /getProducts
            Method: get

Similarly, you should usually avoid performing workflow orchestrations within Lambda functions. These are sections of code that call out to other services and functions, and perform subsequent actions based on successful execution or failure.

Lambda functions with embedded workflow orchestrations

These workflows quickly become fragile and difficult to modify for new requirements. They can cause idling in the Lambda function, meaning that the function is waiting for return values from external sources, increasing the cost of execution.

Often, a better approach is to use AWS Step Functions, which can represent complex workflows as JSON definitions in the application’s SAM template. This service reduces the amount of custom code required, and enables long-lived workflows that minimize idling in Lambda functions. It also manages in-flight executions as workflows are upgraded. The example above, rearchitected with a Step Functions workflow, looks like this:

Using Step Functions for orchestration
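
For illustration, the following is a minimal sketch of how the Lambda side of that hand-off might look: instead of orchestrating downstream calls itself, the function starts an execution of a Step Functions state machine using boto3. The STATE_MACHINE_ARN environment variable and the input payload are assumptions for this example, not part of the original architecture.

import json
import os

import boto3

sfn = boto3.client('stepfunctions')

def handler(event, context):
    # Hand the workflow off to Step Functions instead of calling
    # downstream services and waiting for them inside this function.
    execution = sfn.start_execution(
        stateMachineArn=os.environ['STATE_MACHINE_ARN'],
        input=json.dumps({'orderId': event.get('orderId')})
    )
    return {'executionArn': execution['executionArn']}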

Using multiple AWS accounts for development teams

There are many ways to deploy serverless applications to production. As applications grow and become more important to your business, development managers generally want to improve the robustness of the deployment process. You have a number of options within AWS for managing the development and deployment of serverless applications.

First, it is highly recommended to use more than one AWS account. Using AWS Organizations, you can centrally manage the billing, compliance, and security of these accounts. You can attach policies to groups of accounts to avoid custom scripts and manual processes. One simple approach is to provide each developer with an AWS account, and then use separate accounts for a beta deployment stage and production:

Multiple AWS accounts in a deployment pipeline

The developer accounts can contain copies of production resources and provide the developer with admin-level permissions to these resources. Each developer has their own set of limits for the account, so their usage does not impact your production environment. Individual developers can deploy CloudFormation stacks and SAM templates into these accounts with minimal risk to production assets.

This approach allows developers to test Lambda functions locally on their development machines against live cloud resources in their individual accounts. It can help create a robust unit testing process, and developers can then push code to a repository like AWS CodeCommit when ready.

By integrating with AWS Secrets Manager, you can store different sets of secrets in each environment and eliminate any need for credentials stored in code. As code is promoted from developer account through to the beta and production accounts, the correct set of credentials is automatically used. You do not need to share environment-level credentials with individual developers.
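
As a sketch of this pattern, a Lambda function can resolve the secret name from a stage setting and read credentials at runtime from Secrets Manager. The secret naming scheme and the STAGE environment variable below are assumptions for illustration only:

import json
import os

import boto3

secrets_client = boto3.client('secretsmanager')

def get_db_credentials():
    # The same code runs unchanged in every account; only the stage differs,
    # for example myapp/dev/db in a developer account and myapp/prod/db in production.
    secret_name = f"myapp/{os.environ.get('STAGE', 'dev')}/db"
    response = secrets_client.get_secret_value(SecretId=secret_name)
    return json.loads(response['SecretString'])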

It’s also possible to implement a CI/CD process to start build pipelines when code is deployed. To deploy a sample application using a multi-account deployment flow, follow this serverless CI/CD tutorial.

Managing feature releases in serverless applications

As you implement CI/CD pipelines for your production serverless applications, it is best practice to favor safe deployments over entire application upgrades. Unlike traditional software deployments, serverless applications are a combination of custom code in Lambda functions and AWS service configurations.

A feature release may consist of a version change in a Lambda function. It may have a different endpoint in API Gateway, or use a new resource such as a DynamoDB table. Access to the deployed feature may be controlled via user configuration and feature toggles, depending upon the application. AWS SAM has AWS CodeDeploy built-in, which allows you to configure canary deployments in the YAML configuration:

Resources:
 GetProducts:
   Type: AWS::Serverless::Function
   Properties:
     CodeUri: getProducts/
     Handler: app.handler
     Runtime: nodejs12.x

     AutoPublishAlias: live

     DeploymentPreference:
       Type: Canary10Percent10Minutes 
       Alarms:
         # A list of alarms that you want to monitor
         - !Ref AliasErrorMetricGreaterThanZeroAlarm
         - !Ref LatestVersionErrorMetricGreaterThanZeroAlarm
       Hooks:
         # Validation Lambda functions run before/after traffic shifting
         PreTraffic: !Ref PreTrafficLambdaFunction
         PostTraffic: !Ref PostTrafficLambdaFunction

CodeDeploy automatically creates aliases pointing to the old and new versions of a function. The canary deployment enables you to gradually shift traffic from the old to the new alias as you become confident that the new version is working as expected, or to roll back the update if needed. You can also set PreTraffic and PostTraffic hooks to invoke Lambda functions before and after traffic shifting.

Conclusion

As any software application grows in size, it’s important for development managers to organize code repositories and manage releases. There are established patterns in serverless to help manage larger applications. Generally, it’s best to avoid monolithic functions and mono-repos, and you should scope repositories to either the microservice or function level.

Well-designed serverless applications use custom code in Lambda functions to connect with managed services. It’s important to identify libraries and packages that can be replaced with services to minimize the deployment size and simplify the code base. This is especially true in applications that have been migrated from server-based environments.

Using AWS Organizations, you manage groups of accounts to enable your developers to have their own AWS accounts for development. This enables engineers to clone production assets and test against the AWS Cloud when writing and debugging code. You can use a CI/CD pipeline to push code through a beta environment to production, while safeguarding secrets using Secrets Manager. You can also use CodeDeploy to manage canary deployments easily.

To learn more about deploying Lambda functions with SAM and CodeDeploy, follow the steps in this tutorial.

Build a serverless Martian weather display with CircuitPython and AWS Lambda

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/build-a-serverless-martian-weather-display-with-circuitpython-and-aws-lambda/

Build a standalone digital weather display of Mars showing the latest images from the Mars Curiosity Rover.

This project uses an Adafruit PyPortal, an open-source IoT touch display. Traditionally, a microcontroller is programmed with firmware compiled using various specific toolchains. Fortunately, the PyPortal is programmed using CircuitPython, a lightweight version of Python that works on embedded hardware. You just copy your code to the PyPortal like you would to a thumb drive and it runs.

I deploy the backend, the part in the cloud that does all the heavy lifting, using the AWS Serverless Application Repository (SAR). The code on the PyPortal makes a REST call to the backend to handle the requests to the NASA Mars Rover Photos API and InSight: Mars Weather Service API. It then converts and resizes the image before returning the information to the PyPortal for display.

An Adafruit PyPortal displaying the latest images from the Mars Curiosity Rover and weather data from InSight Mars Lander.

Prerequisites

You need the following to complete the project:

Deploy the backend application

An architecture diagram of the serverless backend.

Using a serverless backend reduces the load on the PyPortal. The PyPortal makes a call to the backend API and receives a small JSON object with the relevant data. This allows you to change the logic of where and how to get the image and weather data without needing physical access to the device.

The backend API consists of an AWS Lambda function, written in Python, behind an Amazon API Gateway endpoint. When invoked, the FetchMarsData function makes requests to two separate NASA APIs. First it fetches the latest images from the Mars Curiosity Rover, typically from the previous day, and picks one at random. It resizes and converts the image to bitmap format before uploading to Amazon S3 with public read permissions. The PyPortal downloads the image from S3 later.

The function then calls the InSight: Mars Weather Service API. It retrieves the average air temperature, wind speed, pressure, season, solar day (sol), as well as the first and last timestamp of daily sampling. The API returns these values and the S3 image URL as a JSON object.
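
As a rough sketch, a call to that API might look like the following. It assumes the publicly documented insight_weather endpoint and a NASA_API_KEY environment variable; the project’s actual helper in src/app.py may shape the data differently.

import os

import requests

INSIGHT_URL = 'https://api.nasa.gov/insight_weather/'

def get_mars_insight_weather():
    params = {'api_key': os.environ['NASA_API_KEY'], 'feedtype': 'json', 'ver': '1.0'}
    data = requests.get(INSIGHT_URL, params=params, timeout=10).json()

    # Pick the most recent solar day (sol) and pull out the headline values.
    latest_sol = data['sol_keys'][-1]
    sol = data[latest_sol]
    return {
        'sol': latest_sol,
        'avg_air_temp': sol['AT']['av'],
        'wind_speed': sol['HWS']['av'],
        'pressure': sol['PRE']['av'],
        'season': sol['Season'],
        'first_utc': sol['First_UTC'],
        'last_utc': sol['Last_UTC'],
    }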

I use the AWS Serverless Application Model (SAM) to create the backend. While it can be deployed using the AWS SAM CLI, you can also deploy from the AWS Management Console:

  1. Generate a free NASA API key at api.nasa.gov. This is required to gain access to the NASA data APIs.
  2. Navigate to the aws-serverless-pyportal-mars-weather-display application in the Serverless Application Repository.
  3. Choose Deploy.
  4. On the next page, under Application Settings, enter a value for the NasaApiKey parameter.

  5. Once complete, choose View CloudFormation Stack.

  6. Select the Outputs tab and make a note of the MarsApiUrl. This is required for configuring the PyPortal.

  7. Navigate to the MarsApiKey URL listed in the Outputs tab.

  8. Click Show to reveal the API key. Make a note of this. This is required for authenticating requests from the PyPortal to the MarsApiUrl.

PyPortal setup

  1. Follow these instructions from Adafruit to install the latest version of the CircuitPython bootloader. At the time of writing, the latest version is 5.2.0.
  2. Follow these instructions to install the latest Adafruit CircuitPython library bundle. I use bundle version 5.x.
  3. Insert the microSD card in the slot located on the back of the device.
  4. Optionally install the Mu Editor, a multi-platform code editor and serial debugger compatible with Adafruit CircuitPython boards. This can help if you need to troubleshoot issues.
  5. Optionally if you have a 3D printer at home, you can print a case for your PyPortal. This can protect your project while also being a great way to display it on a desk.

Code PyPortal

As with regular Python, CircuitPython does not need to be compiled to execute. Flashing new firmware on the PyPortal is as simple as copying a Python file and necessary assets over to a mounted volume. The bootloader runs code.py anytime the device starts or any files are updated.

  1. Use a USB cable to plug the PyPortal into your computer and wait until a new mounted volume CIRCUITPY is available.
  2. Download the project from GitHub. Inside the project, copy the contents of /circuit-python onto the CIRCUITPY volume.
  3. Inside the volume, open and edit the secrets.py file. Include your Wi-Fi credentials along with the MarsApiKey and MarsApiUrl API Gateway endpoint, which can be found under Outputs in the AWS CloudFormation stack created by the Serverless Application Repository.
  4. Save the file, and the device restarts. It takes a moment to connect to Wi-Fi and make the first request.
    Optionally, if you installed the Mu Editor, you can click on “Serial” to follow along with the device log.
Animated gif of the PyPortal device displaying a Mars rover image and Mars weather data.

Understanding how CircuitPython calls API Gateway

The main CircuitPython file is code.py. At the end of the file, the while loop periodically performs the operations necessary to display the photos from the Curiosity Rover and the InSight Mars lander weather data.

while True:
    data = callAPIEndpoint(secrets['mars_api_url'])
    downloadImage(data['image_url'])
    showDisplay(data['insight'],
                displayTime=60*interval_minutes)

First, it calls the API Gateway endpoint using the URL from the secrets.py file, and passes the returned JSON to helper functions. The callAPIEndpoint(url) function passes the MarsApiKey in the header and a timeout of 30 seconds to the wifi.get() method. The timeout is required for integrations with services like Lambda and API Gateway. Remember, the CircuitPython code is running on a microcontroller and sometimes must wait longer when making requests.

def callAPIEndpoint(mars_api_url):
    headers = {"x-api-key": secrets['mars_api_key']}
    response = wifi.get(mars_api_url, headers=headers, timeout=30)
    data = response.json()
    print("JSON Response: ", data)
    response.close()
    return data

The JSON object that is received by the PyPortal is defined in the handler of the Lambda function. In the GitHub project downloaded earlier, see src/app.py.

def lambda_handler(event, context):
    url = fetchRoverImage()
    imgData = fetchImageData(url)
    image_s3_url = resize_image(imgData)
    weatherData = getMarsInsightWeather()

    return {
        "statusCode": 200,
        "body": json.dumps({
            "image_url": image_s3_url,
            "insight": weatherData
        })
    }

Similar to the CircuitPython code, this uses helper functions to perform all the various operations needed to retrieve and craft the data. At completion, the returned JSON is passed as the response to the PyPortal.
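
To make the helper pattern concrete, here is a hedged sketch of what an image-resizing helper could look like. It assumes the Pillow package is bundled with the function, an IMAGE_BUCKET environment variable, a 320 x 240 target size, and a public-read ACL; it is an illustration, not the project’s exact implementation.

import io
import os
import uuid

import boto3
from PIL import Image  # assumes Pillow is packaged with the function

s3 = boto3.client('s3')

def resize_image(img_data, width=320, height=240):
    # Convert the raw rover image bytes to a small bitmap the PyPortal can render.
    image = Image.open(io.BytesIO(img_data)).resize((width, height))
    buffer = io.BytesIO()
    image.save(buffer, format='BMP')
    buffer.seek(0)

    # Upload with public read access so the PyPortal can download it directly.
    bucket = os.environ['IMAGE_BUCKET']
    key = f'mars/{uuid.uuid4()}.bmp'
    s3.put_object(Bucket=bucket, Key=key, Body=buffer,
                  ACL='public-read', ContentType='image/bmp')
    return f'https://{bucket}.s3.amazonaws.com/{key}'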

A quick way to add a new property is to edit the Lambda function directly through the AWS Lambda Console. Here, a key “hello” is added with a value “world”:
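
The change amounts to adding one entry to the response body returned by the handler shown above. The snippet below is an illustration of the edited return statement, not a screenshot of the console:

    return {
        "statusCode": 200,
        "body": json.dumps({
            "image_url": image_s3_url,
            "insight": weatherData,
            "hello": "world"  # new property for the PyPortal to read
        })
    }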

In the CircuitPython code.py file, the key is now available in the JSON response from API Gateway. The following prints the key value, which can be seen using the Mu Editor Serial debugger.

data = callAPIEndpoint(secrets['mars_api_url'])

print(data['hello'])

The Lambda function is packaged with the AWS Python SDK, boto3, which provides methods for interacting with a variety of AWS services. The Python Requests library is also included to make calls to the NASA APIs. Try exploring how to incorporate other services or APIs into your project. To understand how to modify the visual display on the PyPortal itself, see the displayio guide from Adafruit.

Conclusion

I show how to build a “live” Martian weather display using an Adafruit PyPortal, CircuitPython, and AWS Serverless technologies. Whether this is your first time using hardware or a serverless backend in the AWS Cloud, this project is simplified by the use of CircuitPython and the Serverless Application Model.

I also show how to make a request to API Gateway from the PyPortal. I then craft a response in Lambda for the PyPortal. Since both use variants of the Python programming language, much of the syntax stays the same.

To learn more, explore other devices supported by CircuitPython and the variety of community contributed libraries. Combined with the breadth of AWS services, you can push the boundaries of creativity.

Building a Scalable Document Pre-Processing Pipeline

Post Syndicated from Joel Knight original https://aws.amazon.com/blogs/architecture/building-a-scalable-document-pre-processing-pipeline/

In a recent customer engagement, Quantiphi, Inc., a member of the Amazon Web Services Partner Network, built a solution capable of pre-processing tens of millions of PDF documents before sending them for inference by a machine learning (ML) model. While the customer’s use case—and hence the ML model—was very specific to their needs, the pipeline that does the pre-processing of documents is reusable for a wide array of document processing workloads. This post will walk you through the pre-processing pipeline architecture.

Pre-processing pipeline architecture

Architectural goals

Quantiphi established the following goals prior to starting:

  • Loose coupling to enable independent scaling of compute components, flexible selection of compute services, and agility as the customer’s requirements evolved.
  • Work backwards from business requirements when making decisions affecting scale and throughput and not simply because “fastest is best.” Scale components only where it makes sense and for maximum impact.
  •  Log everything at every stage to enable troubleshooting when something goes wrong, provide a detailed audit trail, and facilitate cost optimization exercises by identifying usage and load of every compute component in the architecture.

Document ingestion

The documents are initially stored in a staging bucket in Amazon Simple Storage Service (Amazon S3). The processing pipeline is kicked off when the “trigger” AWS Lambda function is called. This Lambda function passes parameters, such as the name of the staging S3 bucket and the path(s) within the bucket to be processed, to the “ingestion app.”

The ingestion app is a simple application that runs a web service used to trigger a batch, and lists the documents from the S3 bucket path(s) received via that web service. As the app processes the list of documents, it feeds the document path, S3 bucket name, and some additional metadata to the “ingest” Amazon Simple Queue Service (Amazon SQS) queue. The ingestion app also starts the audit trail for the document by writing a record to the Amazon Aurora database. As the document moves downstream, additional records are added to the database. Records are joined together by a unique ID that the ingestion app assigns to each document and passes along throughout the pipeline.
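
The following sketch shows the shape of that hand-off. The queue URL, metadata fields, and the audit write are assumptions based on the description above, not Quantiphi’s actual code:

import json
import uuid

import boto3

sqs = boto3.client('sqs')
INGEST_QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/ingest'  # placeholder

def enqueue_document(bucket, key):
    # Unique ID that identifies this document in every downstream audit record.
    document_id = str(uuid.uuid4())
    message = {'document_id': document_id, 'bucket': bucket, 'key': key}
    sqs.send_message(QueueUrl=INGEST_QUEUE_URL, MessageBody=json.dumps(message))
    # An audit record keyed on document_id would also be written to Aurora here.
    return document_id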

Chunking the documents

In order to maximize grip and control, the architecture is built to submit single-page files to the ML model. This enables correlating an inference failure to a specific page instead of a whole document (which may be many pages long). It also makes identifying the location of features within the inference results an easier task. Since the documents being processed can have varied sizes, resolutions, and page count, a big part of the pre-processing pipeline is to chunk a document up into its component pages prior to sending it for inference.

The “chunking orchestrator” app repeatedly pulls a message from the ingest queue and retrieves the document named therein from the S3 bucket. The PDF document is then classified along two metrics:

  • File size
  • Number of pages

We use these metrics to determine which chunking queue the document is sent to:

  • Large: Greater than 10MB in size or greater than 10 pages
  • Small: Less than or equal to 10MB and less than or equal to 10 pages
  • Single page: Less than or equal to 10MB and exactly one page
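
The routing decision itself is simple. The following is an illustrative sketch based on the thresholds above; the queue names are placeholders rather than the production implementation:

MAX_SMALL_BYTES = 10 * 1024 * 1024  # 10 MB threshold

def choose_chunking_queue(size_bytes, page_count):
    # Large documents go to the EC2-backed queue, everything else to Lambda-backed queues.
    if size_bytes > MAX_SMALL_BYTES or page_count > 10:
        return 'large-documents'
    if page_count == 1:
        return 'single-page'
    return 'small-documents'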

Each of these queues is serviced by an appropriately sized compute service that breaks the document down into smaller pieces, and ultimately, into individual pages.

  • Amazon Elastic Compute Cloud (Amazon EC2) instances process large documents, primarily because of the high memory footprint needed to read large, multi-gigabyte PDF files into memory. The output from these workers is smaller PDF documents that are stored in Amazon S3. The name and location of these smaller documents are submitted to the “small documents” queue.
  • Small documents are processed by a Lambda function that decomposes the document into single pages that are stored in Amazon S3. The name and location of these single-page files are sent to the “single page” queue.

The Dead Letter Queues (DLQs) are used to hold messages from their respective size queue which are not successfully processed. If messages start landing in the DLQs, it’s an indication that there is a problem in the pipeline. For example, if messages start landing in the “small” or “single page” DLQ, it could indicate that the Lambda function processing those respective queues has reached its maximum run time.

An Amazon CloudWatch Alarm monitors the depth of each DLQ. Upon seeing DLQ activity, a notification is sent via Amazon Simple Notification Service (Amazon SNS) so an administrator can then investigate and make adjustments such as tuning the sizing thresholds to ensure the Lambda functions can finish before reaching their maximum run time.

In order to ensure no documents are left behind in the active run, there is a failsafe in the form of an Amazon EC2 worker that retrieves and processes messages from the DLQs. This failsafe app breaks a PDF all the way down into individual pages and then does image conversion.

Documents that don’t fall into a DLQ make it to the “single page” queue. This queue drives each page through the “image conversion” Lambda function, which converts the single-page file from PDF to PNG format. These PNG files are stored in Amazon S3.

Sending for inference

At this point, the documents have been chunked up and are ready for inference.

When the single-page image files land in Amazon S3, an S3 Event Notification fires and places a message in a “converted image” SQS queue, which in turn triggers the “model endpoint” Lambda function. This function calls an API endpoint on an Amazon API Gateway that fronts the Amazon SageMaker inference endpoint. Using API Gateway with SageMaker endpoints avoided throttling during Lambda function execution due to high volumes of concurrent calls to the Amazon SageMaker API. This pattern also resulted in a 2x inference throughput speedup. The Lambda function passes the document’s S3 bucket name and path to the API, which in turn passes it to the auto scaling SageMaker endpoint. The function reads the inference results that are passed back from API Gateway and stores them in Amazon Aurora.
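
A minimal sketch of that “model endpoint” function is shown below. The environment variable, payload shape, and logging are assumptions; it also assumes the Requests library is packaged with the function:

import json
import os

import requests  # assumed to be packaged with the function

API_URL = os.environ['INFERENCE_API_URL']  # the API Gateway endpoint fronting SageMaker

def handler(event, context):
    # Triggered by the "converted image" SQS queue; each record names a PNG in S3.
    for record in event['Records']:
        body = json.loads(record['body'])
        payload = {'bucket': body['bucket'], 'key': body['key']}
        response = requests.post(API_URL, json=payload, timeout=30)
        result = response.json()
        # The inference result would be written to the Aurora audit database here.
        print(json.dumps({'key': body['key'], 'result': result}))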

The inference results as well as all the telemetry collected as the document was processed can be queried from the Amazon Aurora database to build reports showing number of documents processed, number of documents with failures, and number of documents with or without whatever feature(s) the ML model is trained to look for.

Summary

This architecture is able to take PDF documents that range in size from single page up to thousands of pages or gigabytes in size, pre-process them into single page image files, and then send them for inference by a machine learning model. Once triggered, the pipeline is completely automated and is able to scale to tens of millions of pages per batch.

In keeping with the architectural goals of the project, Amazon SQS is used throughout in order to build a loosely coupled system which promotes agility, scalability, and resiliency. Loose coupling also enables a high degree of grip and control over the system, making it easier to respond to changes in business needs as well as focusing tuning efforts for maximum impact. And with every compute component logging everything it does, the system provides a high degree of auditability and introspection, which facilitates performance monitoring and detailed cost optimization.

Using AWS ParallelCluster with a serverless API

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/using-aws-parallelcluster-with-a-serverless-api/

This post is contributed by Dario La Porta, AWS Senior Consultant – HPC

AWS ParallelCluster simplifies the creation and the deployment of HPC clusters. Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. AWS Lambda automatically runs your code without requiring you to provision or manage servers.

In this post, I create a serverless API for the AWS ParallelCluster command line interface using these services. With this API, you can create, monitor, and destroy your clusters. This makes it possible to integrate AWS ParallelCluster programmatically with other applications you may have running on-premises or in the AWS Cloud.

The serverless integration of AWS ParallelCluster can bring a cleaner and more reproducible infrastructure as code paradigm to legacy HPC environments.

Taking this serverless, infrastructure as code approach enables several new types of functionality for HPC environments. For example, you can build on-demand clusters from an API when on-premises resources cannot handle the workload. AWS ParallelCluster can extend on-premises resources for running elastic and large-scale HPC on AWS’ virtually unlimited infrastructure.

You can also create an event-driven workflow in which new clusters are created when new data is stored in an S3 bucket. With event-driven workflows, you can be creative in finding new ways to build HPC infrastructure easily. It also helps optimize time for researchers.

Security is paramount in HPC environments because customers are performing scientific analyses that are central to their businesses. By using a serverless API, this solution can improve security by removing the need to run the AWS ParallelCluster CLI in a user environment. This helps keep customer environments secure and makes it easier to control the IAM roles and security groups that researchers have access to.

Additionally, the Amazon API Gateway for HPC job submission post explains how to submit a job in the cluster using the API. You can use this instead of connecting to the master node via SSH.

This diagram shows the components required to create the cluster and interact with the solution.

Cluster Architecture

Cost of the solution

You can deploy the solution in this blog post within the AWS Free Tier. Make sure that your AWS ParallelCluster configuration uses the t2.micro instance type for the cluster’s master and compute instances. This is the default instance type for AWS ParallelCluster configuration.

For real-world HPC use cases, you most likely want to use a different instance type, such as C5 or C5n. C5n in particular can work well for HPC workloads because it includes the option to use the Elastic Fabric Adapter (EFA) network interface. This makes it possible to scale tightly coupled workloads to more compute instances and reduce communications latency when using protocols such as MPI.

To stay within the AWS Free Tier allowance, be sure to destroy the created resources as described in the teardown section of this post.

VPC configuration

Choose Launch Stack to create the VPC used for this configuration in your account:

The stack creates the VPC, the public subnets, and the private subnet required for the cluster in the eu-west-1 Region.

Stack Outputs

You can also use an existing VPC that complies with the AWS ParallelCluster network requirements.

Deploy the API with AWS SAM

The AWS Serverless Application Model (AWS SAM) is an open-source framework that you can use to build serverless applications on AWS. You use AWS SAM to simplify the setup of the serverless architecture.

In this case, the framework automates the manual configuration of setting up the API Gateway and the Lambda function. Instead, you can focus more on how the API works with AWS ParallelCluster. It improves security and provides a simple, alternative method for cluster lifecycle management.

You can install the AWS SAM CLI by following the Installing the AWS SAM CLI documentation. You can download the code used in this example from this repo. Inside the repo:

  • the sam-app folder in the aws-sample repository contains the code required to build the AWS ParallelCluster serverless API.
  • sam-app/template.yml contains the policy required for the Lambda function for the creation of the cluster. Be sure to modify <AWS ACCOUNT ID> to match the value for your account.

The AWS Identity and Access Management Roles in AWS ParallelCluster document contains the latest version of the policy, in the ParallelClusterUserPolicy section.

To deploy the application, run the following commands:

cd sam-app
sam build
sam deploy --guided

From here, provide parameter values for the SAM deployment wizard that are appropriate for your Region and AWS account. After the deployment, take a note of the Outputs:

SAM deploying

SAM Stack Outputs

The API Gateway endpoint URL is used to interact with the API, and has the following format:

https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pcluster

AWS ParallelCluster configuration file

AWS ParallelCluster is an open source cluster management tool to deploy and manage HPC clusters in the AWS Cloud. AWS ParallelCluster uses a configuration file to build the cluster and its syntax is explained in the documentation guide. The pcluster.conf configuration file can be created in a directory of your local file system.

The configuration file has been tested with AWS ParallelCluster v2.6.0. The master_subnet_id contains the id of the created public subnet and the compute_subnet_id contains the private one.

Deploy the cluster with the pcluster API

The pcluster API created in the previous steps requires some parameters:

  • command – the pcluster command to execute. A detailed list of available commands is available on the AWS ParallelCluster CLI commands page.
  • cluster_name – the name of the cluster.
  • --data-binary “$(base64 /path/to/pcluster/config)” – parameter used to pass the local AWS ParallelCluster configuration file to the API.
  • -H “additional_parameters: <param1> <param2> <…>” – used to pass additional parameters to the pcluster cli.

The following command creates a cluster named “cluster1”:

$ curl --request POST -H "additional_parameters: --nowait"  --data-binary "$(base64 /tmp/pcluster.conf)" "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pcluster?command=create&cluster_name=cluster1"

Beginning cluster creation for cluster: cluster1
Creating stack named: parallelcluster-cluster1
Status: CREATE_IN_PROGRESS

The cluster creation status can be queried with the following:

$ curl --request POST -H "additional_parameters: --nowait"  --data-binary "$(base64 /tmp/pcluster.conf)" "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pcluster?command=status&cluster_name=cluster1"

Status: CREATE_IN_PROGRESS

When the cluster is in the “CREATE_COMPLETE” state, you can retrieve the master node IP address using the following API call:

$ curl --request POST -H "additional_parameters: --nowait"  --data-binary "$(base64 /tmp/pcluster.conf)" "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pcluster?command=status&cluster_name=cluster1"

Status: CREATE_COMPLETE

$ curl --request POST -H "additional_parameters: "  --data-binary "$(base64 /tmp/pcluster.conf)" "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pcluster?command=status&cluster_name=cluster1"

Status: CREATE_COMPLETE
MasterServer: RUNNING
MasterPublicIP: 34.253.102.227
ClusterUser: ec2-user
MasterPrivateIP: 10.0.0.134

When the cluster is not needed anymore, destroy it with the following API call:

$ curl --request POST -H "additional_parameters: --nowait"  --data-binary "$(base64 /tmp/pcluster.conf)" "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pcluster?command=delete&cluster_name=cluster1"

Deleting: cluster1

The additional_parameters: --nowait prevents waiting for stack events after executing a stack command and avoids triggering the Lambda function timeout. The Amazon API Gateway for HPC job submission post explains how you can submit a job in the cluster using the API, instead of connecting to the master node via SSH.

The authentication to the API can be managed by following the Controlling and Managing Access to a REST API in API Gateway Documentation.

Teardown

You can destroy the resources by deleting the CloudFormation stacks created during installation. Deleting a Stack on the AWS CloudFormation Console explains the required steps.

Conclusion

In this post, I show how to integrate AWS ParallelCluster with Amazon API Gateway and manage the lifecycle of an HPC cluster using this API. Using Amazon API Gateway and AWS Lambda, you can run a serverless implementation of the AWS ParallelCluster CLI. This makes it possible to integrate AWS ParallelCluster programmatically with other applications you run on-premises or in the AWS Cloud.

This solution can help you improve the security of your HPC environment by simplifying the IAM roles and security groups that must be granted to individual users to successfully create HPC clusters. With this implementation, researchers no longer need to run the AWS ParallelCluster CLI in their own user environment. As a result, by simplifying the security management of your HPC clusters’ lifecycle, you can better ensure that important research is safe and secure.

To learn more, read more about how to use AWS ParallelCluster.

ICYMI: Serverless Q1 2020

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/icymi-serverless-q1-2020/

Welcome to the ninth edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

A calendar of January, February, and March.

In case you missed our last ICYMI, check out what happened last quarter here.

Launches/New products

In 2018, we launched the AWS Well-Architected Tool. This allows you to review workloads in a structured way based on the AWS Well-Architected Framework. Until now, we’ve provided workload-specific advice using the concept of a “lens.”

As of February, this tool now lets you apply those lenses to provide greater visibility in specific technology domains to assess risks and find areas for improvement. Serverless is the first available lens.

You can apply a lens when defining a workload in the Well-Architected Tool console.

A screenshot of applying a lens.

HTTP APIs beta was announced at AWS re:Invent 2019. Now HTTP APIs is generally available (GA) with more features to help developers build APIs better, faster, and at lower cost. HTTP APIs for Amazon API Gateway is built from the ground up based on lessons learned from building REST and WebSocket APIs, and looking closely at customer feedback.

For the majority of use cases, HTTP APIs offers up to 60% reduction in latency.

HTTP APIs cost at least 71% less when compared against API Gateway REST APIs.

A bar chart showing the cost comparison between HTTP APIs and API Gateway.

HTTP APIs also offers a more intuitive experience and powerful features, like easily configuring cross-origin resource sharing (CORS), JWT authorizers, auto-deploying stages, and simplified route integrations.

AWS Lambda

You can now view and monitor the number of concurrent executions of your AWS Lambda functions by version and alias. Previously, the ConcurrentExecutions metric measured and emitted the sum of concurrent executions for all functions in the account. It included even those that had a reserved concurrency limit specified.

Now, the ConcurrentExecutions metric is emitted for all functions, versions, and aliases. This can be used to see which functions consume your concurrency limits and to estimate peak traffic based on consumption averages. Fine-grained visibility in these areas can help plan appropriate configuration for Provisioned Concurrency.
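
For example, you can query the per-alias metric with the CloudWatch API. This is an illustrative sketch; the function and alias names are placeholders:

from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client('cloudwatch')

response = cloudwatch.get_metric_statistics(
    Namespace='AWS/Lambda',
    MetricName='ConcurrentExecutions',
    Dimensions=[
        {'Name': 'FunctionName', 'Value': 'my-function'},
        {'Name': 'Resource', 'Value': 'my-function:live'},  # version or alias
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=60,
    Statistics=['Maximum'],
)

for point in sorted(response['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Maximum'])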

A Lambda function written in Ruby 2.7.

AWS Lambda now supports Ruby 2.7. Developers can take advantage of new features in this latest release of Ruby, like pattern matching, argument forwarding and numbered arguments. Lambda functions written in Ruby 2.7 run on Amazon Linux 2.

Updated AWS Mock .NET Lambda Test Tool

.NET Core 3.1 is now a supported runtime in AWS Lambda. You can deploy to Lambda by setting the runtime parameter value to dotnetcore3.1. Updates have also been released for the AWS Toolkit for Visual Studio and the .NET Core Global Tool Amazon.Lambda.Tools. These make it easier to build and deploy your .NET Core 3.1 Lambda functions.

With .NET Core 3.1, you can take advantage of all the new features it brings to Lambda, including C# 8.0 and F# 4.7 support, .NET Standard 2.1 support, a new JSON serializer, and a ReadyToRun feature for ahead-of-time compilation. The AWS Mock .NET Lambda Test Tool has also been updated to support .NET Core 3.1 with new features to help debug and improve your workloads.

Cost Savings

Last year we announced Savings Plans for AWS Compute Services. This is a flexible discount model provided in exchange for a commitment of compute usage over a period of one or three years. AWS Lambda now participates in Compute Savings Plans, allowing customers to save money. Visit the AWS Cost Explorer to get started.

Amazon API Gateway

With the HTTP APIs launched in GA, customers can build APIs for services behind private ALBs, private NLBs, and IP-based services registered in AWS Cloud Map such as ECS tasks. To make it easier for customers to work between API Gateway REST APIs and HTTP APIs, customers can now use the same custom domain across both REST APIs and HTTP APIs. In addition, this release also enables customers to perform granular throttling for routes, improved usability when using Lambda as a backend, and better error logging.

AWS Step Functions

AWS Step Functions VS Code plugin.

We launched the AWS Toolkit for Visual Studio Code back in 2019 and last month we added toolkit support for AWS Step Functions. This enables you to define, visualize, and create workflows without leaving VS Code. As you craft your state machine, it is continuously rendered with helpful tools for debugging. The toolkit also allows you to update state machines in the AWS Cloud with ease.

To further help with debugging, we’ve added AWS Step Functions support for CloudWatch Logs. For standard workflows, you can select different levels of logging and can exclude logging of a workflow’s payload. This makes it easier to monitor event-driven serverless workflows and create metrics and alerts.

AWS Amplify

AWS Amplify is a framework for building modern applications, with a toolchain for easily adding services like authentication, storage, APIs, hosting, and more, all via command line interface.

Customers can now use the Amplify CLI to take advantage of AWS Amplify console features like continuous deployment, instant cache invalidation, custom redirects, and simple configuration of custom domains. This means you can do end-to-end development and deployment of a web application entirely from the command line.

Amazon DynamoDB

You can now easily extend your existing Amazon DynamoDB tables into additional AWS Regions, without table rebuilds, by updating to the latest version of global tables. You can benefit from improved replicated write efficiencies without any additional cost.

On-demand capacity mode is now available in the Asia Pacific (Osaka-Local) Region. This is a flexible capacity mode for DynamoDB that can serve thousands of requests per second without requiring capacity planning. DynamoDB on-demand offers simple pay-per-request pricing for read and write requests so that you only pay for what you use, making it easy to balance cost and performance.
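
Enabling on-demand mode is a single setting at table creation, or an update on an existing table. The sketch below is for illustration; the table and attribute names are placeholders:

import boto3

dynamodb = boto3.client('dynamodb')

dynamodb.create_table(
    TableName='Orders',
    AttributeDefinitions=[{'AttributeName': 'orderId', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'orderId', 'KeyType': 'HASH'}],
    BillingMode='PAY_PER_REQUEST',  # on-demand capacity mode
)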

AWS Serverless Application Repository

The AWS Serverless Application Repository (SAR) is a service for packaging and sharing serverless application templates using the AWS Serverless Application Model (SAM). Applications can be customized with parameters and deployed with ease. Previously, applications could only be shared publicly or with specific AWS account IDs. Now, SAR has added sharing for AWS Organizations. These new granular permissions can be added to existing SAR applications. Learn how to take advantage of this feature today to help improve your organization’s productivity.

Amazon Cognito

Amazon Cognito, a service for managing identity providers and users, now supports CloudWatch Usage Metrics. This allows you to monitor events in near-real time, such as sign-in and sign-out. These can be turned into metrics or CloudWatch alarms at no additional cost.

Cognito User Pools now supports logging for all API calls with AWS CloudTrail. The enhanced CloudTrail logging improves governance, compliance, and operational and risk auditing capabilities. Additionally, Cognito User Pools now enables customers to configure case sensitivity settings for user aliases, including native user name, email alias, and preferred user name alias.

Serverless posts

Our team is always working to build and write content to help our customers better understand all our serverless offerings. Here is a list of the latest posts published to the AWS Compute Blog this quarter.

January

February

March

Tech Talks and events

We hold AWS Online Tech Talks covering serverless topics throughout the year. You can find these in the serverless section of the AWS Online Tech Talks page. We also deliver talks at conferences and events around the globe, regularly join in on podcasts, and record short videos you can watch to learn in quick, byte-sized chunks.

Here are the highlights from Q1.

January

February

March

Live streams

Rob Sutter, a Senior Developer Advocate on AWS Serverless, has started hosting Serverless Office Hours every Tuesday at 14:00 ET on Twitch. He’ll be imparting his wisdom on Step Functions, Lambda, Golang, and taking questions on all things serverless.

Check out some past sessions:

Happy Little APIs Season 2 is airing every other Tuesday on the AWS Twitch Channel. Check out the first episode, where Eric Johnson and Ran Ribenzaft, Serverless Hero and CTO of Epsagon, talk about private integrations with HTTP APIs.

Eric Johnson is also streaming “Sessions with SAM” every Thursday at 10AM PST. Each week Eric shows how to use SAM to solve different problems with serverless and how to leverage SAM templates to build out powerful serverless applications. Catch up on the last few episodes on our Twitch channel.

Relax with a cup of your favorite morning beverage every Friday at 12PM EST with a Serverless Coffee Break with James Beswick. These are chats about all things serverless with special guests. You can catch these live on Twitter or on your own time with these recordings.

AWS Serverless Heroes

This year, we’ve added some new faces to the list of AWS Serverless Heroes. The AWS Hero program is a selection of worldwide experts that have been recognized for their positive impact within the community. They share helpful knowledge and organize events and user groups. They’re also contributors to numerous open-source projects in and around serverless technologies.

Still looking for more?

The Serverless landing page has even more information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and Getting Started tutorials.

Building a Raspberry Pi telepresence robot using serverless: Part 2

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-a-raspberry-pi-telepresence-robot-using-serverless-part-2/

The deployed web frontend and the robot it controls.

In a previous post, I show how to build a telepresence robot using serverless technologies and a Raspberry Pi. The result is a robot that transmits live video using Amazon Kinesis Video Streams with WebRTC. It can be driven remotely via an AWS Lambda function using an Amazon API Gateway REST endpoint.

This post walks through deploying a web interface to view the live stream and control the robot. The application is built using AWS Amplify and Vue.js. Amplify is a development framework that makes it easy to add authentication, hosting, and other AWS resources. It also provides a pipeline for deploying web applications.

I use the Amplify Command Line Interface (CLI) to create an authentication flow for user sign-in using Amazon Cognito. I then show how to set up an authorizer in API Gateway so that only authenticated users can drive the robot. An AWS Identity and Access Management (IAM) role sets permissions so users can assume access to Kinesis Video Streams to view the live video feed. The web application is then configured and run locally for testing. Finally, using the Amplify CLI, I show how to add hosting and publish a production ready web application.

Prerequisites

You need the following to complete the project:

Amplify CLI and project setup

An architecture diagram showing the client relationship between the AWS resources deployed by Amplify.

The Amplify CLI allows you to create and manage resources on AWS. With the libraries and UI components provided by the Amplify Framework, you can build powerful applications using a variety of cloud services.

The web interface for the telepresence robot is built using Amplify Vue.js components for user registration and sign-in. Download the application and use the Amplify CLI to configure resources for the web application.

To install and configure Amplify on the frontend web application, refer to the project set-up instructions on the GitHub project.

Creating an API Gateway authorizer

In the first guide, API Gateway is used to create a REST endpoint to send commands to the robot. Currently, the endpoint accepts requests without any authentication. To ensure that only authenticated users can control the robot, you must create an authorizer for the API.

The backend resources deployed by the Amplify web application include a Cognito User Pool. This is a user directory that provides sign-up and sign-in services, user profiles, and identity providers. The following instructions demonstrate how to configure an authorizer on API Gateway that verifies access using a user pool.

  1. Navigate to the Amazon API Gateway console.
  2. Choose the API created in the first guide for driving the robot.
  3. Choose Authorizers from the menu.
  4. Choose Create New Authorizer. Choose Cognito for Type and select the user pool created by the Amplify CLI. Set Token Source to Authorization.
  5. Choose Create.
  6. Choose Resources from the menu.
  7. Choose POST, Method Request.
  8. Set Authorization to the newly created authorizer.

Adding permissions

The web application loads a component for viewing video from the robot over a WebRTC connection. WebRTC is a protocol for negotiating peer-to-peer data connections by using a signaling channel.

The previous guide configured the robot to use a Kinesis Video Signaling Channel. Users signed into the web application must assume some permissions for Kinesis Video Streams to access the signaling channel.

When the Amplify CLI deploys an authentication flow, it creates a role in IAM. Cognito uses this role to assume permissions for a user pool based on matching conditions.

The following trust relationship on the authRole controls when the role’s permissions are assumed, in this case when an “authenticated” user from the identity pool is matched.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "cognito-identity.amazonaws.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": "us-west-2:12345e-9548-4a5a-b44c-12345677"
        },
        "ForAnyValue:StringLike": {
          "cognito-identity.amazonaws.com:amr": "authenticated"
        }
      }
    }
  ]
}

Follow these steps to attach Kinesis Video Streams permissions to the authRole.

  1. Navigate to the IAM console.
  2. Choose Roles from the menu.
  3. Use the search bar to find “authRole”. It is prefixed by the stack name associated with the Amplify deployment. Choose it from the list.
  4. Choose Add inline policy.
  5. Select the JSON tab and paste in the following. In the Resource property, replace <RobotName> with the name of the robot created in the first guide.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "kinesisvideo:GetSignalingChannelEndpoint",
                    "kinesisvideo:ConnectAsMaster",
                    "kinesisvideo:GetIceServerConfig",
                    "kinesisvideo:ConnectAsViewer",
                    "kinesisvideo:DescribeSignalingChannel"
                ],
                "Resource": "arn:aws:kinesisvideo:*:*:channel/<RobotName>/*"
            }
        ]
    }
    
  6. Choose Review Policy.
  7. Choose Create Policy.

Configuring the application

The authorizer allows authenticated users to invoke the Lambda function through API Gateway. The permissions set on the authRole control access to the live video. The web application must know the endpoint for sending commands and the Kinesis Video Signaling Channel to use for the robot.

This information is configured in web-app/src/main.js. It requires a file named config.json to let the application know which endpoint and signaling channel to use.

  1. Inside the application folder aws-serverless-telepresence-robot/web-app/src, create a new file named config.json.
    {
      "endpoint": "",
      "channelARN": ""
    }
  2. Replace endpoint with the Invoke URL of the robot API. This can be found in API Gateway console under Stages, Prod. It can also be found under Outputs in the AWS CloudFormation stack created by the aws-serverless-telepresence-robot serverless application from the first guide.
  3. Replace channelARN with the ARN of your robot’s signaling channel. This can be found in the Amazon Kinesis Video Streams console under Signaling channels. Alternatively, both values can be retrieved with the CLI sketch after this list.
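If you prefer the AWS CLI to the console, a sketch like the following can return both values. The stack name shown is an assumption based on how the Serverless Application Repository prefixes deployments; substitute your own stack name and robot name.

aws cloudformation describe-stacks \
  --stack-name serverlessrepo-aws-serverless-telepresence-robot \
  --query "Stacks[0].Outputs"

aws kinesisvideo describe-signaling-channel \
  --channel-name <RobotName> \
  --query "ChannelInfo.ChannelARN" --output text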

Running the application

You can build and run the application locally for testing purposes. It still uses the backend deployed in the cloud. Do this before publishing to production:

  1. Inside the web-app directory, run the following command:
    npm run serve
  2. Navigate to the locally hosted application at http://localhost:8080
  3. Follow the onscreen steps to create a new account.
  4. Choose Start Video. If the robot is active, a WebRTC connection is made and live video is displayed.
  5. Use the onscreen arrow buttons to drive the robot.

Deploying a hosted application

Amplify makes it easy to deploy a hosted application. The following commands configure and deploy hosting resources in Amazon S3 and Amazon CloudFront. This allows you to securely and quickly deploy your application for production use.

  1. Inside aws-serverless-telepresence-robot/web-app, run the following. When prompted, select PROD. This configures the application to deploy using S3 and CloudFront.
    amplify add hosting
  2. Finally, this command builds and publishes all the backend and frontend resources for your Amplify project. On completion, it provides a URL to the hosted web application. Note, it can take a while for the CloudFront distribution to deploy.
    amplify publish

Conclusion

In this post, I show how to build a web interface for remotely viewing and controlling the robot. This is done using AWS Amplify, Vue.js, and a previously deployed serverless application.

With a few commands, the Amplify CLI is used to configure backend resources for a web frontend. Cognito is used as an identity provider. An Authorizer is created for an API Gateway endpoint, allowing authenticated users to send commands to the robot from the frontend. An IAM Role with a trusted relationship with the Cognito User Pool is given permissions to use Kinesis Video Signaling Channels, which are passed to the authenticated users. This allows the web frontend to open a live video connection to the telepresence robot using WebRTC.

After running and testing the application locally, I showed how the Amplify CLI can streamline configuring hosting and deploying a production web application using S3 and CloudFront. The result is a custom-built telepresence robot with a web application for viewing and operating it securely, all without managing servers.

The principles used in this project can be applied towards a variety of use cases. Use this to build out a fleet of remote vehicles to monitor factories or for personal home security. You can create a community for users to experience environments remotely. The interface Vue component can also easily be modified for custom commands sent to the application running on the robot.

Building a Raspberry Pi telepresence robot using serverless: Part 1

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-a-raspberry-pi-telepresence-robot-using-serverless-part-1/

A Pimoroni STS-Pi Robot Kit connected to AWS for remote control and viewing.

A telepresence robot allows you to explore remote environments from the comfort of your home through live stream video and remote control. These types of robots can improve the lives of the disabled, elderly, or those that simply cannot be with their coworkers or loved ones in person. Some are used to explore off-world terrain and others for search and rescue.

This guide walks through building a simple telepresence robot using a Pimoroni STS-Pi Raspberry Pi robot kit. A Raspberry Pi is a small, low-cost device that runs Linux. Add-on modules for Raspberry Pi are called “hats”. You can substitute this kit with any mobile platform that uses two motors wired to an Adafruit Motor Hat or a Pimoroni Explorer Hat.

The sample serverless application uses AWS Lambda and Amazon API Gateway to create a REST API for driving the robot. A Python application running on the robot uses AWS IoT Core to receive drive commands and authenticate with Amazon Kinesis Video Streams with WebRTC using an IoT Credentials Provider. In the next blog I walk through deploying a web frontend to both view the livestream and control the robot via the API.

Prerequisites

You need the following to complete the project:

A Pimoroni STS-Pi robot kit, Explorer Hat, Raspberry Pi, camera, and battery.

Estimated Cost: $120

There are three major parts to this project. First deploy the serverless backend using the AWS Serverless Application Repository. Then assemble the robot and run an installer on the Raspberry Pi. Finally, configure and run the Python application on the robot to confirm it can be driven through the API and is streaming video.

Deploy the serverless application

In this section, use the Serverless Application Repository to deploy the backend resources for the robot. The resources to deploy are defined using the AWS Serverless Application Model (SAM), an open-source framework for building serverless applications using AWS CloudFormation. To understand more about how this application is built, look at the SAM template in the GitHub repository.

An architecture diagram of the AWS IoT and Amazon Kinesis Video Stream resources of the deployed application.

The Python application that runs on the robot requires permissions to connect as an IoT Thing and subscribe to messages sent to a specific topic on the AWS IoT Core message broker. The following policy is created in the SAM template:

RobotIoTPolicy:
      Type: "AWS::IoT::Policy"
      Properties:
        PolicyName: !Sub "${RobotName}Policy"
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - iot:Connect
                - iot:Subscribe
                - iot:Publish
                - iot:Receive
              Resource:
                - !Sub "arn:aws:iot:*:*:topicfilter/${RobotName}/action"
                - !Sub "arn:aws:iot:*:*:topic/${RobotName}/action"
                - !Sub "arn:aws:iot:*:*:topic/${RobotName}/telemetry"
                - !Sub "arn:aws:iot:*:*:client/${RobotName}"

To transmit video, the Python application runs the amazon-kinesis-video-streams-webrtc-sdk-c sample in a subprocess. Instead of using separate credentials to authenticate with Kinesis Video Streams, a Role Alias policy is created so that IoT credentials can be used.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "iot:Connect",
        "iot:AssumeRoleWithCertificate"
      ],
      "Resource": "arn:aws:iot:Region:AccountID:rolealias/robot-camera-streaming-role-alias",
      "Effect": "Allow"
    }
  ]
}

When the above policy is attached to a certificate associated with an IoT Thing, it can assume the following role:

 KVSCertificateBasedIAMRole:
      Type: 'AWS::IAM::Role'
      Properties:
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: 'Allow'
            Principal:
              Service: 'credentials.iot.amazonaws.com'
            Action: 'sts:AssumeRole'
        Policies:
        - PolicyName: !Sub "KVSIAMPolicy-${AWS::StackName}"
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
            - Effect: Allow
              Action:
                - kinesisvideo:ConnectAsMaster
                - kinesisvideo:GetSignalingChannelEndpoint
                - kinesisvideo:CreateSignalingChannel
                - kinesisvideo:GetIceServerConfig
                - kinesisvideo:DescribeSignalingChannel
              Resource: "arn:aws:kinesisvideo:*:*:channel/${credentials-iot:ThingName}/*"

This role grants access to connect and transmit video over WebRTC using the Kinesis Video Streams signaling channel deployed by the serverless application.

An architecture diagram of the API endpoint in the deployed application.

A deployed API Gateway endpoint, when called with valid JSON, invokes a Lambda function that publishes to an IoT message topic, RobotName/action. The Python application on the robot subscribes to this topic and drives the motors based on any received message that maps to a command.

  1. Navigate to the aws-serverless-telepresence-robot application in the Serverless Application Repository.
  2. Choose Deploy.
  3. On the next page, under Application Settings, fill out the parameter, RobotName.
  4. Choose Deploy.
  5. Once complete, choose View CloudFormation Stack.
  6. Select the Outputs tab. Copy the ApiURL and the EndpointURL for use when configuring the robot.

Create and download the AWS IoT device certificate

The robot requires an AWS IoT root CA (fetched by the install script), certificate, and private key to authenticate with AWS IoT Core. The certificate and private key are not created by the serverless application since they can only be downloaded on creation. Create a new certificate and attach the IoT policy and Role Alias policy deployed by the serverless application.
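If you prefer to script this instead of using the console steps below, a hedged AWS CLI sketch follows. The policy names assume the defaults created by the serverless application, and <RobotName> and <AppName> are placeholders as elsewhere in this guide.

# Create and activate a certificate, saving the files the robot needs
CERT_ARN=$(aws iot create-keys-and-certificate \
  --set-as-active \
  --certificate-pem-outfile certificate.pem \
  --private-key-outfile private.pem.key \
  --query "certificateArn" --output text)

# Attach both policies and associate the certificate with the robot's IoT Thing
aws iot attach-policy --policy-name "<RobotName>Policy" --target "$CERT_ARN"
aws iot attach-policy --policy-name "AliasPolicy-<AppName>" --target "$CERT_ARN"
aws iot attach-thing-principal --thing-name "<RobotName>" --principal "$CERT_ARN"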

  1. Navigate to the AWS IoT Core console.
  2. Choose Manage, Things.
  3. Choose the Thing that corresponds with the name of the robot.
  4. Under Security, choose Create certificate.
  5. Choose Activate.
  6. Download the Private Key and Thing Certificate. Save these securely, as this is the only time you can download this certificate.
  7. Choose Attach Policy.
  8. Two policies are created and must be attached. From the list, select
    <RobotName>Policy
    AliasPolicy-<AppName>
  9. Choose Done.

Flash an operating system to an SD card

The Raspberry Pi single-board Linux computer uses an SD card as the main file system storage. Raspbian Buster Lite is an officially supported Debian Linux operating system that must be flashed to an SD card. Balena.io has created an application called balenaEtcher for the sole purpose of accomplishing this safely.

  1. Download the latest version of Raspbian Buster Lite.
  2. Download and install balenaEtcher.
  3. Insert the SD card into your computer and run balenaEtcher.
  4. Choose the Raspbian image. Choose Flash to burn the image to the SD card.
  5. When flashing is complete, balenaEtcher dismounts the SD card.

Configure Wi-Fi and SSH headless

Typically, a keyboard and monitor are used to configure Wi-Fi or to access the command line on a Raspberry Pi. Since it is on a mobile platform, configure the Raspberry Pi to connect to a Wi-Fi network and enable remote access headless by adding configuration files to the SD card.

  1. Re-insert the SD card to your computer so that it shows as volume boot.
  2. Create a file in the boot volume of the SD card named wpa_supplicant.conf.
  3. Paste in the following contents, substituting your Wi-Fi credentials.
    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1
    country=<Insert country code here>

    network={
        ssid="<Name of your WiFi>"
        psk="<Password for your WiFi>"
    }

  4. Create an empty file without a file extension in the boot volume named ssh. At boot, the Raspbian operating system looks for this file and enables remote access if it exists. This can be done from a command line:
    cd path/to/volume/boot
    touch ssh

  5. Safely eject the SD card from your computer.

Assemble the robot

For this section, you can use the Pimoroni STS-Pi robot kit with a Pimoroni Explorer Hat, along with a Raspberry Pi Model 3 B+ or newer, and a camera module. Alternatively, you can use any two motor robot platform that uses the Explorer Hat or Adafruit Motor Hat.

  1. Follow the instructions in this video to assemble the Pimoroni STS-Pi robot kit.
  2. Place the SD card in the Raspberry Pi.
  3. Since the installation may take some time, power the Raspberry Pi using a USB 5V power supply connected to a wall plug rather than a battery.

Connect remotely using SSH

Use your computer to gain remote command line access of the Raspberry Pi using SSH. Both devices must be on the same network.

  1. Open a terminal application with SSH installed. It is already built into Linux and macOS. To enable SSH on Windows, follow these instructions.
  2. Enter the following to begin a secure shell session as user pi on the default local hostname raspberrypi, which resolves to the IP address of the device using mDNS:
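    ssh pi@raspberrypi.local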
  3. If prompted to add an SSH key to the list of known hosts, type yes.
  4. When prompted for a password, type raspberry. This is the default password and can be changed using the raspi-config utility.
  5. Upon successful login, you now have shell access to your Raspberry Pi device.

Enable the camera using raspi-config

A built-in utility, raspi-config, provides an easy to use interface for configuring Raspbian. You must enable the camera module, along with I2C, a serial bus used for communicating with the motor driver.
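If you prefer to script these changes rather than step through the menus below, raspi-config also has a non-interactive mode. This is a sketch, not part of the original walkthrough; the 0 values are assumptions based on raspi-config's nonint convention for enabling an interface.

# Enable the camera module and the I2C interface, then reboot to apply
sudo raspi-config nonint do_camera 0
sudo raspi-config nonint do_i2c 0
sudo reboot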

  1. In an open SSH session, type the following to open the raspi-config utility:
    sudo raspi-config

  2. Using the arrows, choose Interfacing Options.
  3. Choose Camera. When prompted, choose Yes to enable the camera module.
  4. Repeat the process to enable the I2C interface.
  5. Select Finish and reboot.

Run the install script

An installer script is provided for building and installing the Kinesis Video Stream WebRTC producer, AWSIoTPythonSDK and Pimoroni Explorer Hat Python libraries. Upon completion, it creates a directory with the following structure:

├── /home/pi/Projects/robot
│  └── main.py // The main Python application
│  └── config.json // Parameters used by main.py
│  └── kvsWebrtcClientMasterGstSample //Kinesis Video Stream producer
│  └── /certs
│     └── cacert.pem // Amazon SFSRootCAG2 Certificate Authority
│     └── certificate.pem // AWS IoT certificate placeholder
│     └── private.pem.key // AWS IoT private key placeholder
  1. Open an SSH session on the Raspberry Pi.
  2. (Optional) If using the Adafruit Motor Hat, run this command, otherwise the script defaults to the Pimoroni Explorer Hat.
    export MOTOR_DRIVER=adafruit  

  3. Run the following command to fetch and execute the installer script.
    wget -O - https://raw.githubusercontent.com/aws-samples/aws-serverless-telepresence-robot/master/scripts/install.sh | bash

  4. While the script installs, proceed to the next section.

Configure the code

The Python application on the robot subscribes to AWS IoT Core to receive messages. It requires the certificate and private key created for the IoT thing to authenticate. These files must be copied to the directory where the Python application is stored on the Raspberry Pi.

It also requires that the IoT credentials endpoint be added to the file config.json so the application can assume the permissions necessary to transmit video to Amazon Kinesis Video Streams.
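If you prefer to look up the endpoints from a terminal instead of the console, the AWS CLI can return both. This is a sketch; the mapping to the config keys described below is an assumption based on their names.

aws iot describe-endpoint --endpoint-type iot:Data-ATS            # IOT_CORE_ENDPOINT
aws iot describe-endpoint --endpoint-type iot:CredentialProvider  # IOT_GET_CREDENTIAL_ENDPOINT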

  1. Open an SSH session on the Raspberry Pi.
  2. Open the certificate.pem file with the nano text editor and paste in the contents of the certificate downloaded earlier.
    cd /home/pi/Projects/robot/certs
    nano certificate.pem

  3. Press CTRL+X and then Y to save the file.
  4. Repeat the process with the private.pem.key file.
    nano private.pem.key

  5. Open the config.json file.
    cd /home/pi/Projects/robot
    nano config.json

  6. Provide the following information:
    IOT_THINGNAME: The name of your robot, as set in the serverless application.
    IOT_CORE_ENDPOINT: This is found under the Settings page in the AWS IoT Core console.
    IOT_GET_CREDENTIAL_ENDPOINT: Provided by the serverless application.
    ROLE_ALIAS: This is already set to match the Role Alias deployed by the serverless application.
    AWS_DEFAULT_REGION: Corresponds to the Region the application is deployed in.
  7. Save the file using CTRL+X and Y.
  8. To start the robot, run the command:
    python3 main.py

  9. To stop the script, press CTRL+C.

View the Kinesis video stream

The following steps create a WebRTC connection with the robot to view the live stream.

  1. Navigate to the Amazon Kinesis Video Streams console.
  2. Choose Signaling channels from the left menu.
  3. Choose the channel that corresponds with the name of your robot.
  4. Open the Media Playback card.
  5. After a moment, a WebRTC peer to peer connection is negotiated and live video is displayed.
    An animated gif demonstrating a live video stream from the robot.

Sending drive commands

The serverless backend includes an Amazon API Gateway REST endpoint that publishes JSON messages to the Python script on the robot.

The robot expects a message:

{ "action": <direction> }

Where direction can be "forward", "backwards", "left", or "right".
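The steps below drive the robot through API Gateway with curl. To isolate problems, you can also bypass the API and publish a command directly to the IoT topic; a sketch, assuming AWS CLI v2 and the same <RobotName> placeholder used elsewhere in this guide:

# Publish a drive command straight to the robot's action topic
aws iot-data publish \
  --topic "<RobotName>/action" \
  --cli-binary-format raw-in-base64-out \
  --payload '{"action":"forward"}'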

  1. While the Python script is running on the robot, open another terminal window.
  2. Run this command to tell the robot to drive forward. Replace <API-URL> using the endpoint listed under Outputs in the CloudFormation stack for the serverless application.
    curl -d '{"action":"forward"}' -H "Content-Type: application/json" -X POST https://<API-URL>/publish

    An animated gif demonstrating the robot being driven from a REST request.

Conclusion

In this post, I show how to build and program a telepresence robot with remote control and a live video feed in the cloud. I did this by installing a Python application on a Raspberry Pi robot and deploying a serverless application.

The Python application uses AWS IoT credentials to receive remote commands from the cloud and transmit live video using Kinesis Video Streams with WebRTC. The serverless application deploys a REST endpoint using API Gateway and a Lambda function. Any application that can connect to the endpoint can drive the robot.

In part two, I build on this project by deploying a web interface for the robot using AWS Amplify.

A preview of the web frontend built in the next blog.


Use AWS Lambda authorizers with a third-party identity provider to secure Amazon API Gateway REST APIs

Post Syndicated from Bryant Bost original https://aws.amazon.com/blogs/security/use-aws-lambda-authorizers-with-a-third-party-identity-provider-to-secure-amazon-api-gateway-rest-apis/

Note: This post focuses on Amazon API Gateway REST APIs used with OAuth 2.0 and custom AWS Lambda authorizers. API Gateway also offers HTTP APIs, which provide native OAuth 2.0 features. For more information about which is right for your organization, see Choosing Between HTTP APIs and REST APIs.

Amazon API Gateway is a fully managed AWS service that simplifies the process of creating and managing REST APIs at any scale. If you are new to API Gateway, check out Amazon API Gateway Getting Started to get familiar with core concepts and terminology. In this post, I will demonstrate how an organization using a third-party identity provider can use AWS Lambda authorizers to implement a standard token-based authorization scheme for REST APIs that are deployed using API Gateway.

In the context of this post, a third-party identity provider refers to an entity that exists outside of AWS and that creates, manages, and maintains identity information for your organization. This identity provider issues cryptographically signed tokens to users containing information about the user identity and their permissions. In order to use these non-AWS tokens to control access to resources within API Gateway, you will need to define custom authorization code using a Lambda function to “map” token characteristics to API Gateway resources and permissions.

Defining custom authorization code is not the only way to implement authorization in API Gateway and ensure resources can only be accessed by the correct users. In addition to Lambda authorizers, API Gateway offers several “native” options that use existing AWS services to control resource access and do not require any custom code. To learn more about the established practices and authorization mechanisms, see Controlling and Managing Access to a REST API in API Gateway.

Lambda authorizers are a good choice for organizations that use third-party identity providers directly (without federation) to control access to resources in API Gateway, or organizations requiring authorization logic beyond the capabilities offered by “native” authorization mechanisms.

Benefits of using third-party tokens with API Gateway

Using a Lambda authorizer with third-party tokens in API Gateway can provide the following benefits:

  • Integration of third-party identity provider with API Gateway: If your organization has already adopted a third-party identity provider, building a Lambda authorizer allows users to access API Gateway resources by using their third-party credentials without having to configure additional services, such as Amazon Cognito. This can be particularly useful if your organization is using the third-party identity provider for single sign-on (SSO).
  • Minimal impact to client applications: If your organization has an application that is already configured to sign in to a third-party identity provider and issue requests using tokens, then minimal changes will be required to use this solution with API Gateway and a Lambda authorizer. By using credentials from your existing identity provider, you can integrate API Gateway resources into your application in the same manner that non-AWS resources are integrated.
  • Flexibility of authorization logic: Lambda authorizers allow for the additional customization of authorization logic, beyond validation and inspection of tokens.

Solution overview

The following diagram shows the authentication/authorization flow for using third-party tokens in API Gateway:

Figure 1: Example Solution Architecture

  1. After a successful login, the third-party identity provider issues an access token to a client.
  2. The client issues an HTTP request to API Gateway and includes the access token in the HTTP Authorization header.
  3. The API Gateway resource forwards the token to the Lambda authorizer.
  4. The Lambda authorizer authenticates the token with the third-party identity provider.
  5. The Lambda authorizer executes the authorization logic and creates an identity management policy.
  6. API Gateway evaluates the identity management policy against the API Gateway resource that the user requested and either allows or denies the request. If allowed, API Gateway forwards the user request to the API Gateway resource.

Prerequisites

To build the architecture described in the solution overview, you will need the following:

  • An identity provider: Lambda authorizers can work with any type of identity provider and token format. The post uses a generic OAuth 2.0 identity provider and JSON Web Tokens (JWT).
  • An API Gateway REST API: You will eventually configure this REST API to rely on the Lambda authorizer for access control.
  • A means of retrieving tokens from your identity provider and calling API Gateway resources: This can be a web application, a mobile application, or any application that relies on tokens for accessing API resources.

For the REST API in this example, I use API Gateway with a mock integration. To create this API yourself, you can follow the walkthrough in Create a REST API with a Mock Integration in Amazon API Gateway.

You can use any type of client to retrieve tokens from your identity provider and issue requests to API Gateway, or you can consult the documentation for your identity provider to see if you can retrieve tokens directly and issue requests using a third-party tool such as Postman.

Before you proceed to building the Lambda authorizer, you should be able to retrieve tokens from your identity provider and issue HTTP requests to your API Gateway resource with the token included in the HTTP Authorization header. This post assumes that the identity provider issues OAuth JWT tokens, and the example below shows a raw HTTP request addressed to the mock API Gateway resource with an OAuth JWT access token in the HTTP Authorization header. This request should be sent by the client application that you are using to retrieve your tokens and issue HTTP requests to the mock API Gateway resource.


# Example HTTP Request using a Bearer token
GET /dev/my-resource/?myParam=myValue HTTP/1.1
Host: rz8w6b1ik2.execute-api.us-east-1.amazonaws.com
Authorization: Bearer eyJraWQiOiJ0ekgtb1Z5eEpPSF82UDk3...

Building a Lambda authorizer

When you configure a Lambda authorizer to serve as the authorization source for an API Gateway resource, the Lambda authorizer is invoked by API Gateway before the resource is called. Check out the Lambda Authorizer Authorization Workflow for more details on how API Gateway invokes and exchanges information with Lambda authorizers. The core functionality of the Lambda authorizer is to generate a well-formed identity management policy that dictates the allowed actions of the user, such as which APIs the user can access. The Lambda authorizer will use information in the third-party token to create the identity management policy based on “permissions mapping” documents that you define — I will discuss these permissions mapping documents in greater detail below.

After the Lambda authorizer generates an identity management policy, the policy is returned to API Gateway and API Gateway uses it to evaluate whether the user is allowed to invoke the requested API. You can optionally configure a setting in API Gateway to automatically cache the identity management policy so that subsequent API invocations with the same token do not invoke the Lambda authorizer, but instead use the identity management policy that was generated on the last invocation.

In this post, you will build your Lambda authorizer to receive an OAuth access token and validate its authenticity with the token issuer, then implement custom authorization logic to use the OAuth scopes present in the token to create an identity management policy that dictates which APIs the user is allowed to access. You will also configure API Gateway to cache the identity management policy that is returned by the Lambda authorizer. These patterns provide the following benefits:

  • Leverage third-party identity management services: Validating the token with the third party allows for consolidated management of services such as token verification, token expiration, and token revocation.
  • Cache to improve performance: Caching the token and identity management policy in API Gateway removes the need to call the Lambda authorizer for each invocation. Caching a policy can improve performance; however, this increased performance comes with additional security considerations. These considerations are discussed below.
  • Limit access with OAuth scopes: Using the scopes present in the access token, along with custom authorization logic, to generate an identity management policy and limit resource access is a familiar OAuth practice and serves as a good example of customizable authentication logic. Refer to Defining Scopes for more information on OAuth scopes and how they are typically used to control resource access.

The Lambda authorizer is invoked with the following object as the event parameter when API Gateway is configured to use a Lambda authorizer with the token event payload; refer to Input to an Amazon API Gateway Lambda Authorizer for more information on the types of payloads that are compatible with Lambda authorizers. Since you are using a token-based authorization scheme, you will use the token event payload. This payload contains the methodArn, which is the Amazon Resource Name (ARN) of the API Gateway resource that the request was addressed to. The payload also contains the authorizationToken, which is the third-party token that the user included with the request.


# Lambda Token Event Payload  
{   
 type: 'TOKEN',  
 methodArn: 'arn:aws:execute-api:us-east-1:2198525...',  
 authorizationToken: 'Bearer eyJraWQiOiJ0ekgt...'  
}

Upon receiving this event, your Lambda authorizer will issue an HTTP POST request to your identity provider to validate the token, and use the scopes present in the third-party token with a permissions mapping document to generate and return an identity management policy that contains the allowed actions of the user within API Gateway. Lambda authorizers can be written in any Lambda-supported language. You can explore some starter code templates on GitHub. The example function in this post uses Node.js 10.x.

The Lambda authorizer code in this post uses a static permissions mapping document. This document is represented by apiPermissions. For a complex or highly dynamic permissions document, this document can be decoupled from the Lambda authorizer and exported to Amazon Simple Storage Service (Amazon S3) or Amazon DynamoDB for simplified management. The static document contains the ARN of the deployed API, the API Gateway stage, the API resource, the HTTP method, and the allowed token scope. The Lambda authorizer then generates an identity management policy by evaluating the scopes present in the third-party token against those present in the document.

The fragment below shows an example permissions mapping. This mapping allows HTTP GET requests to the my-resource resource in the dev stage of the API with ARN arn:aws:execute-api:us-east-1:219852565112:rz8w6b1ik2, but only when the caller provides a valid token that contains the email scope.


# Example permissions document  
{  
 "arn": "arn:aws:execute-api:us-east-1:219852565112:rz8w6b1ik2",  
 "resource": "my-resource",  
 "stage": "DEV",  
 "httpVerb": "GET",  
 "scope": "email"  
}

The logic to create the identity management policy can be found in the generateIAMPolicy() method of the Lambda function. This method serves as a good general example of the extent of customization possible in Lambda authorizers. While the method in the example relies solely on token scopes, you can also use additional information such as request context, user information, source IP address, user agents, and so on, to generate the returned identity management policy.

Upon invocation, the Lambda authorizer below performs the following procedure:

  1. Receive the token event payload, and isolate the token string (trim “Bearer ” from the token string, if present).
  2. Verify the token with the third-party identity provider.

    Note: This Lambda function does not include this functionality. The method, verifyAccessToken(), will need to be customized based on the identity provider that you are using. This code assumes that the verifyAccessToken() method returns a Promise that resolves to the decoded token in JSON format.

  3. Retrieve the scopes from the decoded token. This code assumes these scopes can be accessed as an array at claims.scp in the decoded token.
  4. Iterate over the scopes present in the token and create identity and access management (IAM) policy statements based on entries in the permissions mapping document that contain the scope in question.
  5. Create a complete, well-formed IAM policy using the generated IAM policy statements. Refer to IAM JSON Policy Elements Reference for more information on programmatically building IAM policies.
  6. Return complete IAM policy to API Gateway.
    
    /*
     * Sample Lambda Authorizer to validate tokens originating from
     * 3rd Party Identity Provider and generate an IAM Policy
     */
    
    const apiPermissions = [
      {
        "arn": "arn:aws:execute-api:us-east-1:219852565112:rz8w6b1ik2", // NOTE: Replace with your API Gateway API ARN
        "resource": "my-resource", // NOTE: Replace with your API Gateway Resource
        "stage": "dev", // NOTE: Replace with your API Gateway Stage
        "httpVerb": "GET",
        "scope": "email"
      }
    ];
    
    var generatePolicyStatement = function (apiName, apiStage, apiVerb, apiResource, action) {
      'use strict';
      // Generate an IAM policy statement
      var statement = {};
      statement.Action = 'execute-api:Invoke';
      statement.Effect = action;
      var methodArn = apiName + "/" + apiStage + "/" + apiVerb + "/" + apiResource + "/";
      statement.Resource = methodArn;
      return statement;
    };
    
    var generatePolicy = function (principalId, policyStatements) {
      'use strict';
      // Generate a fully formed IAM policy
      var authResponse = {};
      authResponse.principalId = principalId;
      var policyDocument = {};
      policyDocument.Version = '2012-10-17';
      policyDocument.Statement = policyStatements;
      authResponse.policyDocument = policyDocument;
      return authResponse;
    };
    
    var verifyAccessToken = function (accessToken) {
      'use strict';
      /*
      * Verify the access token with your Identity Provider here (check if your 
      * Identity Provider provides an SDK).
      *
      * This example assumes this method returns a Promise that resolves to 
      * the decoded token, you may need to modify your code according to how
      * your token is verified and what your Identity Provider returns.
      */
    };
    
    var generateIAMPolicy = function (scopeClaims) {
      'use strict';
      // Declare empty policy statements array
      var policyStatements = [];
      // Iterate over API Permissions
      for ( var i = 0; i < apiPermissions.length; i++ ) {
        // Check if the token scopes include the scope required for this API permission
        if ( scopeClaims.indexOf(apiPermissions[i].scope) > -1 ) {
          // User token has appropriate scope, add API permission to policy statements
          policyStatements.push(generatePolicyStatement(apiPermissions[i].arn, apiPermissions[i].stage, apiPermissions[i].httpVerb,
                                                        apiPermissions[i].resource, "Allow"));
        }
      }
      // Check if no policy statements are generated, if so, create default deny all policy statement
      if (policyStatements.length === 0) {
        var policyStatement = generatePolicyStatement("*", "*", "*", "*", "Deny");
        policyStatements.push(policyStatement);
      }
      return generatePolicy('user', policyStatements);
    };
    
    exports.handler = async function(event, context) {
      // Declare Policy
      var iamPolicy = null;
      // Capture raw token and trim 'Bearer ' string, if present
      var token = event.authorizationToken.replace("Bearer ", "");
      // Validate token
      await verifyAccessToken(token).then(data => {
        // Retrieve token scopes
        var scopeClaims = data.claims.scp;
        // Generate IAM Policy
        iamPolicy = generateIAMPolicy(scopeClaims);
      })
      .catch(err => {
        console.log(err);
        // Generate default deny all policy statement if there is an error
        var policyStatements = [];
        var policyStatement = generatePolicyStatement("*", "*", "*", "*", "Deny");
        policyStatements.push(policyStatement);
        iamPolicy = generatePolicy('user', policyStatements);
      });
      return iamPolicy;
    };  
    

The following is an example of the identity management policy that is returned from your function.


# Example IAM Policy
{
  "principalId": "user",
  "policyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "execute-api:Invoke",
        "Effect": "Allow",
        "Resource": "arn:aws:execute-api:us-east-1:219852565112:rz8w6b1ik2/get/DEV/my-resource/"
      }
    ]
  }
}

It is important to note that the Lambda authorizer above is not considering the method or resource that the user is requesting. This is because you want to generate a complete identity management policy that contains all the API permissions for the user, instead of a policy that only contains allow/deny for the requested resource. By generating a complete policy, this policy can be cached by API Gateway and used if the user invokes a different API while the policy is still in the cache. Caching the policy can reduce API latency from the user perspective, as well as the total amount of Lambda invocations; however, it can also increase vulnerability to Replay Attacks and acceptance of expired/revoked tokens.

Shorter cache lifetimes introduce more latency to API calls (that is, the Lambda authorizer must be called more frequently), while longer cache lifetimes introduce the possibility of a token expiring or being revoked by the identity provider, but still being used to return a valid identity management policy. For example, the following scenario is possible when caching tokens in API Gateway:

  • Identity provider stamps access token with an expiration date of 12:30.
  • User calls API Gateway with access token at 12:29.
  • Lambda authorizer generates identity management policy and API Gateway caches the token/policy pair for 5 minutes.
  • User calls API Gateway with same access token at 12:32.
  • API Gateway evaluates access against policy that exists in the cache, despite original token being expired.

Since tokens are not re-validated by the Lambda authorizer or API Gateway once they are placed in the API Gateway cache, long cache lifetimes may also increase susceptibility to Replay Attacks. Longer cache lifetimes and large identity management policies can increase the performance of your application, but must be evaluated against the trade-off of increased exposure to certain security vulnerabilities.

Deploying the Lambda authorizer

To deploy your Lambda authorizer, you first need to create and deploy a Lambda deployment package containing your function code and dependencies (if applicable). Lambda authorizer functions behave the same as other Lambda functions in terms of deployment and packaging. For more information on packaging and deploying a Lambda function, see AWS Lambda Deployment Packages in Node.js. For this example, you should name your Lambda function myLambdaAuth and use a Node.js 10.x runtime environment.
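A sketch of packaging and creating the function from the CLI; the handler file name (index.js) and the execution role ARN are illustrative placeholders, not values from this post.

# Package the authorizer code and create the function
zip function.zip index.js

aws lambda create-function \
  --function-name myLambdaAuth \
  --runtime nodejs10.x \
  --handler index.handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::<account-id>:role/<lambda-execution-role>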

After the function is created, add the Lambda authorizer to API Gateway.

  1. Navigate to API Gateway and in the navigation pane, under APIs, select the API you configured earlier
  2. Under your API name, choose Authorizers, then choose Create New Authorizer.
  3. Under Create Authorizer, do the following:
    1. For Name, enter a name for your Lambda authorizer. In this example, the authorizer is named Lambda-Authorizer-Demo.
    2. For Type, select Lambda
    3. For Lambda Function, select the AWS Region you created your function in, then enter the name of the Lambda function you just created.
    4. Leave Lambda Invoke Role empty.
    5. For Lambda Event Payload choose Token.
    6. For Token Source, enter Authorization.
    7. For Token Validation, enter:
      
      ^(Bearer )[a-zA-Z0-9\-_]+?\.[a-zA-Z0-9\-_]+?\.([a-zA-Z0-9\-_]+)$

      This represents a regular expression for validating that tokens match JWT format (more below).

    8. For Authorization Caching, select Enabled and enter a time to live (TTL) of 1 second.
  4. Select Save.

 

Figure 2: Create a new Lambda authorizer

This configuration passes the token event payload mentioned above to your Lambda authorizer, and is necessary since you are using tokens (Token Event Payload) for authentication, rather than request parameters (Request Event Payload). For more information, see Use API Gateway Lambda Authorizers.

In this solution, the token source is the Authorization header of the HTTP request. If you know the expected format of your token, you can include a regular expression in the Token Validation field, which automatically rejects any request that does not match the regular expression. Token validations are not mandatory. This example assumes the token is a JWT.


# Regex matching JWT Bearer Tokens  
^(Bearer )[a-zA-Z0-9\-_]+?\.[a-zA-Z0-9\-_]+?\.([a-zA-Z0-9\-_]+)$

Here, you can also configure how long the token/policy pair will be cached in API Gateway. This example enables caching with a TTL of 1 second.

In this solution, you leave the Lambda Invoke Role field empty. This field is used to provide an IAM role that allows API Gateway to execute the Lambda authorizer. If left blank, API Gateway configures a default resource-based policy that allows it to invoke the Lambda authorizer.
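That default resource-based policy amounts to roughly the following permission, shown here only as a sketch in case you prefer to grant it explicitly; the source ARN reuses the example API ID from this post with a wildcard authorizer ID.

aws lambda add-permission \
  --function-name myLambdaAuth \
  --statement-id apigateway-authorizer-invoke \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:us-east-1:219852565112:rz8w6b1ik2/authorizers/*"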

The final step is to point your API Gateway resource to your Lambda authorizer. Select the configured API Resource and HTTP method.

  1. Navigate to API Gateway and in the navigation pane, under APIs, select the API you configured earlier.
  2. Select the GET method.

    Figure 3: GET Method Execution

  3. Select Method Request.
  4. Under Settings, edit Authorization and select the authorizer you just configured (in this example, Lambda-Authorizer-Demo).

    Figure 4: Select your API authorizer

Deploy the API to an API Gateway stage that matches the stage configured in the Lambda authorizer permissions document (apiPermissions variable).

  1. Navigate to API Gateway and in the navigation pane, under APIs, select the API you configured earlier.
  2. Select the / resource of your API.
  3. Select Actions, and under API Actions, select Deploy API.
  4. For Deployment stage, select [New Stage] and for the Stage name, enter dev. Leave Stage description and Deployment description blank.
  5. Select Deploy.

    Figure 5: Deploy your API stage

Testing the results

With the Lambda authorizer configured as your authorization source, you are now able to access the resource only if you provide a valid token that contains the email scope.

The following example shows how to issue an HTTP request with curl to your API Gateway resource using a valid token that contains the email scope passed in the HTTP Authorization header. Here, you are able to authenticate and receive an appropriate response from API Gateway.


# HTTP Request (including valid token with "email" scope)  
$ curl -X GET \  
> 'https://rz8w6b1ik2.execute-api.us-east-1.amazonaws.com/dev/my-resource/?myParam=myValue' \  
> -H 'Authorization: Bearer eyJraWQiOiJ0ekgtb1Z5eE...'  
  
{  
 "statusCode" : 200,  
 "message" : "Hello from API Gateway!"  
}

The following JSON object represents the decoded JWT payload used in the previous example. The JSON object captures the token scopes in scp, and you can see that the token contained the email scope.

Figure 6: JSON object that contains the email scope
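If you want to inspect the scp claim of your own token from a shell, a minimal sketch (assumes the jq tool is installed; it normalizes base64url characters and padding before decoding):

TOKEN="eyJraWQiOiJ0ekgtb1Z5eE..."   # replace with a real access token
# The payload is the second dot-separated segment of a JWT
PAYLOAD=$(echo "$TOKEN" | cut -d '.' -f2 | tr '_-' '/+')
# Restore padding so base64 can decode the segment
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
echo "$PAYLOAD" | base64 --decode | jq '.scp'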

If you provide a token that is expired, is invalid, or that does not contain the email scope, then you are not able to access the resource. The following example shows a request to your API Gateway resource with a valid token that does not contain the email scope. In this example, the Lambda authorizer rejects the request.


# HTTP Request (including token without "email" scope)  
$ curl -X GET \  
> 'https://rz8w6b1ik2.execute-api.us-east-1.amazonaws.com/dev/my-resource/?myParam=myValue' \  
> -H 'Authorization: Bearer eyJraWQiOiJ0ekgtb1Z5eE...'  
  
{  
 "Message" : "User is not authorized to access this resource with an explicit deny"  
}

The following JSON object represents the decoded JWT payload used in the above example; it does not include the email scope.

Figure 7: JSON object that does not contain the email scope

If you provide no token, or you provide a token not matching the provided regular expression, then you are immediately rejected by API Gateway without invoking the Lambda authorizer. API Gateway only forwards tokens to the Lambda authorizer that have the HTTP Authorization header and pass the token validation regular expression, if a regular expression was provided. If the request does not pass token validation or does not have an HTTP Authorization header, API Gateway rejects it with a default HTTP 401 response. The following example shows how to issue a request to your API Gateway resource using an invalid token that does not match the regular expression you configured on your authorizer. In this example, API Gateway rejects your request automatically without invoking the authorizer.


# HTTP Request (including a token that is not a JWT)  
$ curl -X GET \  
> 'https://rz8w6b1ik2.execute-api.us-east-1.amazonaws.com/dev/my-resource/?myParam=myValue' \  
> -H 'Authorization: Bearer ThisIsNotAJWT'  
  
{  
 "Message" : "Unauthorized"  
}

These examples demonstrate how your Lambda authorizer allows and denies requests based on the token format and the token content.

Conclusion

In this post, you saw how Lambda authorizers can be used with API Gateway to implement a token-based authentication scheme using third-party tokens.

Lambda authorizers can provide a number of benefits:

  • Leverage third-party identity management services directly, without identity federation.
  • Implement custom authorization logic.
  • Cache identity management policies to improve performance of authorization logic (while keeping in mind security implications).
  • Minimally impact existing client applications.

For organizations seeking an alternative to Amazon Cognito User Pools and Amazon Cognito identity pools, Lambda authorizers can provide complete, secure, and flexible authentication and authorization services to resources deployed with Amazon API Gateway. For more information about Lambda authorizers, see API Gateway Lambda Authorizers.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Bryant Bost

Bryant Bost is an Application Consultant for AWS Professional Services based out of Washington, DC. As a consultant, he supports customers with architecting, developing, and operating new applications, as well as migrating existing applications to AWS. In addition to web application development, Bryant specializes in serverless and container architectures, and has authored several posts on these topics.

Building faster, lower cost, better APIs – HTTP APIs now generally available

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/building-better-apis-http-apis-now-generally-available/

In July 2015, AWS announced Amazon API Gateway. This enabled developers to build secure, scalable APIs quickly in front of a variety of different types of architectures. Since then, the API Gateway team continues to build new features and services for customers.

Figure 1: API Gateway feature highlights timeline

In early 2019, the team evaluated the current services and made plans for the next chapter of API Gateway. They prototyped new languages and technologies, applied lessons learned from building the REST and WebSocket APIs, and looked closely at customer feedback. The result is HTTP APIs for Amazon API Gateway, a service built from the ground up to be faster, lower cost, and simpler to use. In short, HTTP APIs offers a better solution for building APIs. If you are building an API and HTTP APIs fit your requirements, this is the place to start.

Faster

For the majority of use cases, HTTP APIs offers up to 60% reduction in latency. Developers strive to build applications with minimal latency and maximum functionality. They understand that each service involved in the application process can introduce latency.

Figure 2: All services add latency

With that in mind, HTTP APIs is built to reduce the latency overhead of the API Gateway service. Combining both the request and response, 99% of all requests (p99) have less than 10 ms of additional latency from HTTP API.

Lower cost

At Amazon, one of our core leadership principles is frugality. We believe in doing things in a cost-effective manner and passing those savings to our customers. With the availability of new technology, and the expertise of running API Gateway for almost five years, we built HTTP APIs to run more efficiently.

Figure 3: REST/HTTP APIs price comparison

Using the pricing for us-east-1, figure 3 shows a cost comparison for 100 million, 500 million, and 1 billion requests a month. Overall, HTTP APIs is at least 71% lower cost compared to API Gateway REST APIs.
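For a rough sense of where that figure comes from: at the us-east-1 rates published at the time (approximately $3.50 per million requests for REST APIs and $1.00 per million for HTTP APIs at these volumes), 100 million requests cost about $350 with REST APIs versus about $100 with HTTP APIs, and 1.00 / 3.50 is roughly 0.29, or about 71% lower.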

Simpler

On the user interface for HTTP API, the API Gateway team has made the entire experience more intuitive and easier to use.

Figure 4: CORS configuration

Another example is the configuration of cross-origin resource sharing (CORS). CORS provides security by controlling cross-domain access to servers and can be difficult to understand and configure. HTTP APIs enables a developer to configure CORS settings quickly using a simple, easy-to-understand UI. This same approach throughout the UI creates a powerful, yet easy approach to building APIs.

New features

HTTP APIs beta was announced at AWS re:Invent 2019 with powerful features like JWT authorizers, auto-deploying stages, and simplified route integrations. Today, HTTP APIs is generally available (GA) with more features to help developers build APIs faster, lower cost and better.

Private integrations

HTTP APIs now offers developers the ability to integrate with resources secured in an Amazon VPC. When developing with technologies like containers via Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS), the underlying Amazon EC2 clusters must reside inside a VPC. While it is possible to make these services available through Elastic Load Balancing, developers can also take advantage of HTTP APIs to front their applications.

Figure 5: VPC link configuration

To create a private integration, you need a VPC link. VPC links take advantage of AWS Hyperplane, the Network Function Virtualization platform used for Network Load Balancer and NAT Gateway. With this technology, multiple HTTP APIs can use a single VPC link to a VPC. Likewise, multiple REST APIs can share a REST API VPC link.

Figure 6: Private integration configuration

Once a VPC link exists, you can configure an HTTP APIs private integration using a Network Load Balancer (NLB), an Application Load Balancer (ALB), or the resource discovery service, AWS Cloud Map.
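A sketch of the same configuration from the CLI; the API ID, subnet, security group, and listener values are examples, not resources from this post.

# Create a VPC link in the target VPC's subnets
aws apigatewayv2 create-vpc-link \
  --name my-vpc-link \
  --subnet-ids subnet-0123456789abcdef0 subnet-0abcdef1234567890 \
  --security-group-ids sg-0123456789abcdef0

# Point a private integration at a load balancer listener through the VPC link
aws apigatewayv2 create-integration \
  --api-id <your API ID> \
  --integration-type HTTP_PROXY \
  --integration-method ANY \
  --connection-type VPC_LINK \
  --connection-id <vpc-link-id> \
  --integration-uri <ALB-or-NLB-listener-ARN> \
  --payload-format-version 1.0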

Custom domain cross compatibility

Amazon API Gateway now offers the ability to share custom domains across REST APIs and HTTP API. This flexibility allows developers to mix and match between REST APIs and HTTP APIs while building applications.

Figure 7: Custom domain cross compatibility

Previously, when building applications needing a consistent domain name, developers could only use a single type of API. Because applications often require features only available on REST, REST APIs are a common choice. From today, you can distribute these routes between HTTP APIs and REST APIs based on feature requirements.

Request throttling

HTTP APIs now offers the ability to do granular throttling at the stage and route level. API throttling is an often-overlooked API feature that is critical to the health of APIs and their infrastructure. By default, API Gateway limits the steady-state request rate to 10,000 requests per second (rps) with a 5,000 request burst limit. These are soft limits and can be raised via AWS Service Quotas.

HTTP APIs has the concept of stages that can be used for different purposes. Applications can have dev, beta, and prod stages on the same API. Additionally, for backwards compatibility, you can configure multiple production stages of the same API. Each stage has optional burst and rate limit settings that override the account level setting of 10,000 rps.
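Stage-level limits can also be set from the CLI; a sketch, with an example stage name and limit values:

aws apigatewayv2 update-stage \
  --api-id <your API ID> \
  --stage-name dev \
  --default-route-settings '{"ThrottlingBurstLimit":500, "ThrottlingRateLimit":1000}'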

Figure 8: Stage level throttling

Throttling can also be set at the route level. A route is a combination of the path and method. For example, the GET method on a root path (/) combine to make a route. At the time of this writing, route level throttling must be created with the AWS Command Line Interface (CLI) or an AWS SDK. To set the throttling on a route of / [ANY] on the $default stage, use the following CLI command:

aws apigatewayv2 update-stage --api-id <your API ID> --stage-name $default --route-settings '{"ANY /": {"ThrottlingBurstLimit":1000, "ThrottlingRateLimit":2000}}'

Stage variables

HTTP APIs now supports the use of stage variables to pass dynamic data to the backend integration or even define the integration itself. When a stage is defined on HTTP API, it creates a new path to the backend integration. The following table shows a domain with several stages:

Stage       Path
$default    www.mydomain.com
dev         www.mydomain.com/dev
beta        www.mydomain.com/beta

When you access the link for the dev stage, the dev stage variables are passed to the backend integration in the event object. The backend uses this information when processing the request. While not a best practice for passing secrets, it is useful for designating non-secret data, like environment-specific endpoints or feature switches.
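Stage variables can be defined in the console or from the CLI; a sketch, where the variable names are examples rather than required keys:

aws apigatewayv2 update-stage \
  --api-id <your API ID> \
  --stage-name dev \
  --stage-variables '{"lambdaAlias":"dev", "environment":"development"}'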

Stage variables are also used to dynamically define the backend integration. For example, if the integration uses one AWS Lambda function for production and another for testing, you can use stage variables to dynamically route to the appropriate Lambda function as shown below:

Figure 9: Dynamically choosing an integration point

When building dynamic integrations, it is also important to update permissions accordingly. HTTP APIs automatically adds invocation rights when an integration points to a single Lambda function. However, when using multiple functions, you must create and manage the role manually. Do this by turning off the Grant API Gateway permission to invoke your Lambda function option and entering a custom role with the appropriate permissions.

Figure 10: Integration custom role

Lambda payload version 2.0

HTTP APIs now supports an updated event payload and response format for the Lambda function integration. Version 2.0 payload simplifies the format of the event object sent to the Lambda function. Here is a comparison of the event object that is sent to a Lambda function in version 1.0 and 2.0:

Version 1.0

{
    "version": "1.0",
    "resource": "/Echo",
    "path": "/Echo",
    "httpMethod": "GET",
    "headers": {
        "Content-Length": "0",
        "Host": "0000000000.execute-api.us-east-1.amazonaws.com",
        "User-Agent": "TestClient",
        "X-Amzn-Trace-Id": "Root=1-5e6ab926-933e1530e55773a0709dfaa6",
        "X-Forwarded-For": "1.1.1.1",
        "X-Forwarded-Port": "443",
        "X-Forwarded-Proto": "https",
        "accept": "*/*",
        "accept-encoding": "gzip, deflate, br",
        "cache-control": "no-cache",
        "clientInformation": "private",
        "cookie": "Cookie_2=value; Cookie_3=value; Cookie_4=value"
    },
    "multiValueHeaders": {
        "Content-Length": [
            "0"
        ],
        "Host": [
            "0000000000.execute-api.us-east-1.amazonaws.com"
        ],
        "X-Amzn-Trace-Id": [
            "Root=1-5e6ab926-933e1530e55773a0709dfaa6"
        ],
        "X-Forwarded-For": [
            "1.1.1.1"
        ],
        "X-Forwarded-Port": [
            "443"
        ],
        "X-Forwarded-Proto": [
            "https"
        ],
        "accept": [
            "*/*"
        ],
        "accept-encoding": [
            "gzip, deflate, br"
        ],
        "cache-control": [
            "no-cache"
        ],
        "clientInformation": [
            "public",
            "private"
        ],
        "cookie": [
            "Cookie_2=value; Cookie_3=value; Cookie_4=value"
        ]
    },
    "queryStringParameters": {
        "getValueFor": "newClient"
    },
    "multiValueQueryStringParameters": {
        "getValueFor": [
            "newClient"
        ]
    },
    "requestContext": {
        "accountId": "0000000000",
        "apiId": "0000000000",
        "domainName": "0000000000.execute-api.us-east-1.amazonaws.com",
        "domainPrefix": "0000000000",
        "extendedRequestId": "JTHd9j2EoAMEPEA=",
        "httpMethod": "GET",
        "identity": {
            "accessKey": null,
            "accountId": null,
            "caller": null,
            "cognitoAuthenticationProvider": null,
            "cognitoAuthenticationType": null,
            "cognitoIdentityId": null,
            "cognitoIdentityPoolId": null,
            "principalOrgId": null,
            "sourceIp": "1.1.1.1",
            "user": null,
            "userAgent": "TestClient",
            "userArn": null
        },
        "path": "/Echo",
        "protocol": "HTTP/1.1",
        "requestId": "JTHd9j2EoAMEPEA=",
        "requestTime": "12/Mar/2020:22:35:18 +0000",
        "requestTimeEpoch": 1584052518094,
        "resourceId": null,
        "resourcePath": "/Echo",
        "stage": "$default"
    },
    "pathParameters": null,
    "stageVariables": null,
    "body": null,
    "isBase64Encoded": true
}

Version 2.0

{
    "version": "2.0",
    "routeKey": "ANY /Echo",
    "rawPath": "/Echo",
    "rawQueryString": "getValueFor=newClient",
    "cookies": [
        "Cookie_2=value",
        "Cookie_3=value",
        "Cookie_4=value"
    ],
    "headers": {
        "accept": "*/*",
        "accept-encoding": "gzip, deflate, br",
        "cache-control": "no-cache",
        "clientinformation": "public,private",
        "content-length": "0",
        "host": "0000000000.execute-api.us-east-1.amazonaws.com",
        "user-agent": "TestClient",
        "x-amzn-trace-id": "Root=1-5e6ab967-cfe253ce6f8b90986a678c40",
        "x-forwarded-for": "1.1.1.1",
        "x-forwarded-port": "443",
        "x-forwarded-proto": "https"
    },
    "queryStringParameters": {
        "getValueFor": "newClient"
    },
    "requestContext": {
        "accountId": "0000000000",
        "apiId": "0000000000",
        "domainName": "0000000000.execute-api.us-east-1.amazonaws.com",
        "domainPrefix": "0000000000",
        "http": {
            "method": "GET",
            "path": "/Echo",
            "protocol": "HTTP/1.1",
            "sourceIp": "1.1.1.1",
            "userAgent": "TestClient"
        },
        "requestId": "JTHoQgr2oAMEPMg=",
        "routeId": "47matwk",
        "routeKey": "ANY /Echo",
        "stage": "$default",
        "time": "12/Mar/2020:22:36:23 +0000",
        "timeEpoch": 1584052583903
    },
    "isBase64Encoded": true
}

Additionally, version 2.0 allows more flexibility in the format of the response object from the Lambda function. Previously, this was the required format of the response:

{
  "statusCode": 200,
  "body":
  {
    "Name": "Eric Johnson",
    "TwitterHandle": "@edjgeek"
  },
  "headers": {
    "Access-Control-Allow-Origin": "https://amazon.com"
  }
}

When using version 2.0, the response is simpler:

{
  "Name": "Eric Johnson",
  "TwitterHandle": "@edjgeek"
}

When HTTP APIs receives this response, it uses information such as the configured CORS settings and the default integration response code to populate the missing fields.

By default, new Lambda function integrations use version 2.0. You can change this under the Advanced settings toggle for the Lambda function integration. The version applies to both the event and response payloads. If you choose version 1.0, the old event format is sent to the Lambda function and the full response object must be returned.

Lambda integration advanced settings

Figure 11: Lambda integration advanced settings
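
As a sketch of how a single Python Lambda function could support either payload version, the handler below branches on the version field shown in the event examples above. The fields it reads and returns are illustrative only.

# Illustrative sketch: read the HTTP method and path from either a version 1.0
# or version 2.0 event, based on the example payloads above.
import json

def handler(event, context):
    if event.get("version") == "2.0":
        method = event["requestContext"]["http"]["method"]
        path = event["rawPath"]
        # With version 2.0, returning a bare object is enough; HTTP APIs
        # fills in the status code and headers.
        return {"method": method, "path": path}

    # Version 1.0 requires the full response object.
    method = event["httpMethod"]
    path = event["path"]
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"method": method, "path": path})
    }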

OpenAPI/Swagger support

HTTP APIs now supports importing Swagger or OpenAPI configuration files. This makes it simple to migrate from other API Gateway services to HTTP API. When importing a configuration file, HTTP APIs implements all supported features and reports any features that are not currently supported.

AWS Serverless Application Model (SAM) support

At the time of this writing, the AWS Serverless Application Model (AWS SAM) supports most features released in beta at re:Invent 2019. AWS SAM support for many GA features is scheduled for release by March 20, 2020.

Conclusion

For almost five years, Amazon API Gateway has enabled developers to build highly scalable and durable application programming interfaces. It has allowed the abstraction of tasks like authorization, throttling, and data validation from within the application code to a managed service. With the introduction of HTTP APIs for Amazon API Gateway, developers can now use this powerful service in a way that is faster, lower cost, and simpler to configure.

Govern how your clients interact with Apache Kafka using API Gateway

Post Syndicated from Prasad Alle original https://aws.amazon.com/blogs/big-data/govern-how-your-clients-interact-with-apache-kafka-using-api-gateway/

At some point, you may ask yourself:

  • How can I implement IAM authentication or authorization to Amazon Managed Streaming for Apache Kafka (MSK)?
  • How can I protect my Apache Kafka cluster from traffic spikes based on specific scenarios without setting quotas on the cluster?
  • How can I validate that requests adhere to a JSON Schema?
  • How can I make sure parameters are included in the URI, query string, and headers?
  • How can Amazon MSK ingest messages from lightweight clients without using an agent or the native Apache Kafka protocol?

These tasks are achievable using custom proxy servers or gateways, but these options can be difficult to implement and manage. On the other hand, API Gateway has these features and is a fully managed AWS service.

In this blog post, we show how Amazon API Gateway can answer these questions as a component between your Amazon MSK cluster and your clients.

Amazon MSK is a fully managed service for Apache Kafka that makes it easy to provision Kafka clusters with just a few clicks without the need to provision servers, manage storage, or configure Apache Zookeeper manually. Apache Kafka is an open-source platform for building real-time streaming data pipelines and applications.

Some use cases include ingesting messages from lightweight IoT devices that don’t have support for native Kafka protocol and orchestrating your streaming services with other backend services including third-party APIs.

This pattern also comes with the following trade-offs:

  • Cost and complexity due to another service to run and maintain.
  • Performance overhead, because it adds extra processing to construct and make HTTP requests. Additionally, the REST Proxy needs to parse requests and transform data between formats for both produce and consume requests.

When you implement this architecture in a production environment, you should consider these points with your business use case and SLA needs.

Solution overview

To implement the solution, complete the following steps:

  1. Create an MSK cluster, Kafka client, and Kafka REST Proxy
  2. Create a Kafka topic and configure the REST Proxy on a Kafka client machine
  3. Create an API with REST Proxy integration via API Gateway
  4. Test the end-to-end processes by producing and consuming messages to Amazon MSK

The following diagram illustrates the solution architecture.

 

Within this architecture, you create an MSK cluster and set up an Amazon EC2 instance with the REST Proxy and Kafka client. You then expose the REST Proxy through Amazon API Gateway and also test the solution by producing messages to Amazon MSK using Postman.

For the production implementation, make sure to set up the REST Proxy behind a load balancer with an Auto Scaling group.

Prerequisites

Before you get started, you must have the following prerequisites:

  • An AWS account that provides access to AWS services
  • An IAM user with an access key and secret access key to configure the AWS CLI
  • An Amazon EC2 keypair

Creating an MSK cluster, Kafka client, and REST Proxy

AWS CloudFormation provisions all the required resources, including VPC, subnets, security groups, Amazon MSK cluster, Kafka client, and Kafka REST Proxy. To create these resources, complete the following steps:

  1. Launch the AWS CloudFormation stack in the us-east-1 or us-west-2 Region. It takes approximately 15 to 20 minutes to complete.
  2. From the AWS CloudFormation console, choose AmzonMSKAPIBlog.
  3. Under Outputs, get the MSKClusterARN, KafkaClientEC2InstancePublicDNS, and MSKSecurityGroupID details.
  4. Get the ZooKeeperConnectionString and other information about your cluster by entering the following code (provide your Region, cluster ARN, and AWS named profile):
    $ aws kafka describe-cluster --region <Replace_With_us-east-1_or_us-west-2> --cluster-arn <Replace_With_Your_cluster-arn> --profile <Replace_With_Your_Profile>

    The following code example shows one of the lines in the output of this command:

    {
    ….
    ….
    "ZookeeperConnectString": "z-2.XXXXXX.us-east-1.amazonaws.com:2181,z-3.XXXXXX.us-east-1.amazonaws.com:2181,z-1.XXXXXX.us-east-1.amazonaws.com:2181"
    }

  5. Get the BootstrapBrokerString by entering the following code (provide your Region, cluster ARN, and AWS named profile):

    $ aws kafka get-bootstrap-brokers --region <Replace_With_us-east-1_or_us-west-2> --cluster-arn "<Replace_With_Your_cluster-arn>" --profile <Replace_With_Your_Profile>

    The following code example shows the output of this command:

    {
    "BootstrapBrokerString": "b-2.XXXXXXXXXXXX.us-east-1.amazonaws.com:9092,b-1.XXXXXXXXXXXX.amazonaws.com:9092,b-3.XXXXXXXXXXXX.us-east-1.amazonaws.com:9092"
    }

Creating a Kafka topic and configuring a Kafka REST Proxy

To create a Kafka topic and configure a Kafka REST Proxy on a Kafka client machine, complete the following steps:

  1. SSH into your Kafka client Amazon EC2 instance. See the following code:
    ssh -i <Replace_With_Your_pemfile> ec2-user@<Replace_With_Your_KafkaClientDNS>

  2. Go to the bin folder (kafka/kafka_2.12-2.2.1/bin/) of the Apache Kafka installation on the client machine.
  3. Create a topic by entering the following code (provide the value you obtained for ZookeeperConnectString in the previous step):
    ./kafka-topics.sh --create --zookeeper <Replace_With_Your_ZookeeperConnectString> --replication-factor 3 --partitions 1 --topic amazonmskapigwblog

    If the command is successful, you see the following message: Created topic amazonmskapigwblog.

  4. To connect the Kafka REST server to the Amazon MSK cluster, modify kafka-rest.properties in the directory (/home/ec2-user/confluent-5.3.1/etc/kafka-rest/) to point to your Amazon MSK’s ZookeeperConnectString and BootstrapserversConnectString information. See the following code:
    sudo vi /home/ec2-user/confluent-5.3.1/etc/kafka-rest/kafka-rest.properties
    
    zookeeper.connect=<Replace_With_Your_ZookeeperConnectString>
    bootstrap.servers=<Replace_With_Your_BootstrapserversConnectString> 

    As an additional, optional step, you can configure SSL to secure communication between REST clients and the REST Proxy (HTTPS). If SSL is not required, you can skip steps 5 and 6.

  5. Generate the server and client certificates. For more information, see Creating SSL Keys and Certificates on the Confluent website.
  6. Add the necessary property configurations to the kafka-rest.properties configuration file. See the following code example:
    listeners=http://0.0.0.0:8082,https://0.0.0.0:8085
    ssl.truststore.location=<Replace_With_Your_truststore.jks>
    ssl.truststore.password=<Replace_With_Your_truststorepassword>
    ssl.keystore.location=<Replace_With_Your_keystore.jks>
    ssl.keystore.password=<Replace_With_Your_keystorepassword>
    ssl.key.password=<Replace_With_Your_sslkeypassword>

    For more detailed instructions, see Encryption and Authentication with SSL on the Confluent website.

You have now created a Kafka topic and configured Kafka REST Proxy to connect to your Amazon MSK cluster.

Creating an API with Kafka REST Proxy integration

To create an API with Kafka REST Proxy integration via API Gateway, complete the following steps:

  1. On the API Gateway console, choose Create API.
  2. For API type, choose REST API.
  3. Choose Build.
  4. Choose New API.
  5. For API Name, enter a name (for example, amazonmsk-restapi).
  6. As an optional step, for Description, enter a brief description.
  7. Choose Create API. The next step is to create a child resource.
  8. Under Resources, choose a parent resource item.
  9. Under Actions, choose Create Resource. The New Child Resource pane opens.
  10. Select Configure as proxy resource.
  11. For Resource Name, enter proxy.
  12. For Resource Path, enter /{proxy+}.
  13. Select Enable API Gateway CORS.
  14. Choose Create Resource. After you create the resource, the Create Method window opens.
  15. For Integration type, select HTTP Proxy.
  16. For Endpoint URL, enter an HTTP backend resource URL (your Kafka client Amazon EC2 instance PublicDNS; for example, http://KafkaClientEC2InstancePublicDNS:8082/{proxy} or https://KafkaClientEC2InstancePublicDNS:8085/{proxy}).
  17. Use the default settings for the remaining fields.
  18. Choose Save.
  19. For SSL, for Endpoint URL, use the HTTPS endpoint. In the API you just created, the API’s proxy resource path of {proxy+} becomes the placeholder of any of the backend endpoints under http://YourKafkaClientPublicIP:8082/.
  20. Choose the API you just created.
  21. Under Actions, choose Deploy API.
  22. For Deployment stage, choose New Stage.
  23. For Stage name, enter the stage name (for example, dev, test, or prod).
  24. Choose Deploy.
  25. Record the Invoke URL after you have deployed the API.

Your external Kafka REST Proxy, which was exposed through API Gateway, now looks like https://YourAPIGWInvokeURL/dev/topics/amazonmskapigwblog. You use this URL in the next step.

Testing the end-to-end processes

To test the end-to-end process by producing and consuming messages to Amazon MSK, complete the following steps:

  1. SSH into the Kafka Client Amazon EC2 instance. See the following code:
    ssh -i "xxxxx.pem" [email protected]

  2. Go to the confluent-5.3.1/bin directory and start the kafka-rest service. See the following code:
    ./kafka-rest-start /home/ec2-user/confluent-5.3.1/etc/kafka-rest/kafka-rest.properties

    If the service already started, you can stop it with the following code:

    ./kafka-rest-stop /home/ec2-user/confluent-5.3.1/etc/kafka-rest/kafka-rest.properties

  3. Open another terminal window.
  4. In the kafka/kafka_2.12-2.2.1/bin directory, start the Kafka console consumer. See the following code:
    ./kafka-console-consumer.sh --bootstrap-server "BootstrapserversConnectString" --topic amazonmskapigwblog --from-beginning 

    You can now produce messages using Postman. Postman is an HTTP client for testing web services.

    Be sure to open the TCP ports on the Kafka client security group from the system where you are running Postman.

  5. Under Headers, add the key Content-Type with the value application/vnd.kafka.json.v2+json.
  6. Under Body, select raw.
  7. Choose JSON. This post uses the following code:
    {"records":[{"value":{"deviceid": "AppleWatch4","heartrate": "72","timestamp":"2019-10-07 12:46:13"}}]} 

    The following screenshot shows messages arriving at the Kafka consumer from the API Gateway Kafka REST endpoint. A scripted version of the same request is sketched below.
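
    This sketch uses the Python requests library to send the same record through the API Gateway endpoint; the invoke URL is a placeholder for your own deployment.

    # Sketch: produce a record to the amazonmskapigwblog topic through API Gateway.
    # Replace the invoke URL placeholder with your own deployment.
    import json
    import requests

    url = "https://YourAPIGWInvokeURL/dev/topics/amazonmskapigwblog"  # placeholder
    headers = {"Content-Type": "application/vnd.kafka.json.v2+json"}
    payload = {
        "records": [
            {"value": {"deviceid": "AppleWatch4", "heartrate": "72",
                       "timestamp": "2019-10-07 12:46:13"}}
        ]
    }

    response = requests.post(url, data=json.dumps(payload), headers=headers)
    print(response.status_code, response.text)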

Conclusion

This post demonstrated how easy it is to set up REST API endpoints for Amazon MSK with API Gateway. This solution can help you produce and consume messages to and from Amazon MSK from any IoT device or programming language without depending on the native Kafka protocol or clients.

If you have questions or suggestions, please leave your thoughts in the comments.

 


About the Author

Prasad Alle is a Senior Big Data Consultant with AWS Professional Services. He spends his time leading and building scalable, reliable big data, machine learning, artificial intelligence, and IoT solutions for AWS Enterprise and Strategic customers. His interests extend to various technologies such as advanced edge computing and machine learning at the edge. In his spare time, he enjoys spending time with his family.

 

 

Francisco Oliveira is a senior big data solutions architect with AWS. He focuses on building big data solutions with open source technology and AWS. In his free time, he likes to try new sports, travel and explore national parks.

Generating REST APIs from data classes in Python

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/generating-rest-apis-from-data-classes-in-python/

This post is courtesy of Robert Enyedi – Senior Research Engineer – AI Labs

Implementing and managing public APIs is greatly simplified by API Gateway. Among the various features of API Gateway, the ability to import API definitions in the Open API format is powerful.

In this post, I show how you can automatically generate REST APIs directly from Python data classes. This method includes a highly automated workflow for exposing Python services as public APIs using the API Gateway. Recent changes in the Python language open the door for full automation of API publishing directly from code.

Open API and API Gateway

The Open API specification is a popular mechanism to declare the structure of REST APIs. It’s language-independent and allows you to determine API operations and their data types. Previously called Swagger, it is a standardization effort with benefits for the service developer and service consumer. It reduces repetitive tasks, increases API quality, and removes the guesswork from calling a service.

Examples shown here use data classes, which are supported in Python 3.7 or higher. There are backports of data classes to Python 3.6 available but they are beyond the scope of this post.

Python standard type annotations

The type hints syntax, defined in PEP 484 for Python 3.5 and extended with variable annotations in PEP 526, allows the declaration of a type for identifiers. This includes local variables, function and method parameters, return types, and class fields. Type hints improve the readability of the code and provide useful information for tools. This allows your IDE to be more effective at auto-completion, semantic error detection, and refactoring.

Code checkers such as Mypy can better catch problems at build time. These are the typical advantages of statically typed languages. With Python, because type annotations are optional and a recent addition to the language, not all the project’s dependencies have types. That’s why tooling is less accurate in detecting all error conditions.

Python data classes

Data classes are an even more recent addition to the language. Described in PEP 557 and introduced in Python 3.7, they allow a simplified declaration of class data structures that are useful for storing state. Combined with type hints, you can use the @dataclass decorator:

from dataclasses import dataclass

@dataclass
class Person:
  name: str
  age: int

Then the Python implementation can generate:

  1. The constructor:
    Person("Joe", 12)
  2. Comparator methods to allow operations such as:
    Person(name="Joe", age=12) == Person(name="Joe", age=12)
  3. The __repr__() implementation to pretty print the object:
    Person(name='Joe', age=12)

Building an API using data classes

Data classes containing fields with type hints lend themselves to automation of API definitions. This solution uses data classes to generate Open API service definitions with AWS extensions and to create API Gateway configurations.

Similar solutions exist for strictly typed languages like Java, C# or Scala. In Python, this level of automation was not available until version 3.7. This code uses the Dataclasses JSON library to automate the serialization of data classes.

1. Start with the entity definition, in this case a person:

from dataclasses import dataclass
from dataclasses_json import dataclass_json

@dataclass
@dataclass_json
class Person:
  name: str
  age: int

2. Create one class for the request and another for the response to help payload serialization:

@dataclass
@dataclass_json
class CreatePersonRequest:
  person: Person

@dataclass
@dataclass_json
class CreatePersonResponse:
  person_id: int

3. Next, implement the route handler (this example uses the Flask Web framework):

import logging
from flask import Flask, request

app = Flask(__name__)
OPERATION_CREATE_PERSON: str = 'create-person'

@app.route(f'/{OPERATION_CREATE_PERSON}', methods=['POST'])
def create_person():
    payload = request.get_data(as_text=True)  # raw JSON string; from_json() expects a string
    logging.info(f"Incoming payload for {OPERATION_CREATE_PERSON}: {payload}")
    person = CreatePersonRequest.from_json(payload)

The payload is deserialized transparently using the schema derived from the data class definition of Person.
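
To see that round trip in isolation, here is a short sketch using the to_json() and from_json() methods that @dataclass_json adds; the values are illustrative only.

# Sketch: serialization round trip with the methods added by @dataclass_json.
request_obj = CreatePersonRequest(person=Person(name="Joe", age=12))

as_json = request_obj.to_json()
# '{"person": {"name": "Joe", "age": 12}}'

restored = CreatePersonRequest.from_json(as_json)
assert restored == request_obj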

4. To generate a corresponding API definition, enter:

spec = {}

generate_operation(path=OPERATION_CREATE_PERSON,
                   request_schema=CreatePersonRequest.schema(),
                   request_schema_name=CreatePersonRequest.__name__,
                   response_schema=CreatePersonResponse.schema(),
                   response_schema_name=CreatePersonResponse.__name__,
                   spec=spec)

spec_dict = spec.to_dict()

The implementation of generate_operation() makes use of the apispec library to programmatically construct the Open API definition.
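
The post does not list generate_operation() itself. The following is only a rough sketch of how such a helper could be written with apispec and its MarshmallowPlugin (dataclasses-json schemas are marshmallow schemas); the create_spec() helper, titles, and response shapes are assumptions, and the version in the sample project may differ. Using an APISpec object for spec also explains the spec.to_dict() call below, since APISpec provides to_dict().

# Rough sketch of a generate_operation() helper built on apispec.
# Assumes apispec with the MarshmallowPlugin; details may differ from the original.
from apispec import APISpec
from apispec.ext.marshmallow import MarshmallowPlugin

def create_spec() -> APISpec:
    return APISpec(
        title="Person service",            # assumed title
        version="1.0.0",
        openapi_version="3.0.2",
        plugins=[MarshmallowPlugin()],
    )

def generate_operation(path, request_schema, request_schema_name,
                       response_schema, response_schema_name, spec):
    # Register the request and response schemas as reusable components.
    spec.components.schema(request_schema_name, schema=request_schema)
    spec.components.schema(response_schema_name, schema=response_schema)

    # Declare a POST operation that references those components.
    spec.path(
        path=f"/{path}",
        operations={
            "post": {
                "requestBody": {
                    "content": {"application/json": {
                        "schema": {"$ref": f"#/components/schemas/{request_schema_name}"}}}
                },
                "responses": {
                    "200": {
                        "description": "Success",
                        "content": {"application/json": {
                            "schema": {"$ref": f"#/components/schemas/{response_schema_name}"}}}
                    }
                },
            }
        },
    )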

With spec_dict containing the Open API specification, it’s used to either create or update the API definition. You can also run any Open API tools on this definition, such as SDK generators, mock servers, or documentation generators. There’s a comprehensive catalog of tools maintained at https://openapi.tools/.

As a sensible default, the code generates API operations guarded by API keys supplied with the x-api-key header:

"securitySchemes": {
      "api_key": {
        "type": "apiKey",
        "name": "x-api-key",
        "in": "header"
      }
    }

The spec uses API Gateway extensions to include implementation-specific metadata. The most important is the one linking the API definition to the ECS backend:

"x-amazon-apigateway-integration": {
          "passthroughBehavior": "when_no_match",
          "type": "http_proxy",
          "httpMethod": "POST",
          "uri": "http://myecshost-1234567890.us-east-1.elb.amazonaws.com/create-person"
        }

You can use a similar pattern to connect the gateway to a different service, such as AWS Lambda:

"x-amazon-apigateway-integration": {
          "uri": "arn:aws:apigateway:...:lambda:path/.../functions/arn:aws:lambda:...:...:function:yourLambdaFunction/invocations",
          "responses": {
            "default": {
              "statusCode": "200"
            }
          },
          "passthroughBehavior": "when_no_match",
          "httpMethod": "POST",
          "contentHandling": "CONVERT_TO_TEXT",
          "type": "aws"
        }

For more information on the API Gateway extension to Open API, visit the AWS documentation.

Generating the API using API Gateway

This example uses the boto3 API Gateway API to expose a public API.

1. To create the API, enter the following:

api_gateway_client = boto3.client('apigateway')  # assumes import boto3 and import json earlier

api_definition = json.dumps(spec_dict, indent=2)
api_gateway_client.import_rest_api(body=api_definition)

2. To update the API, merge the changes into a manually modified API definition (mode='merge'), or completely overwrite the API (mode='overwrite'). It is often safer to merge the API, as follows:

api_gateway_client.put_rest_api(body=api_definition, mode='merge', restApiId=find_api_id(api_gateway_client, api_name))

The find_api_id() helper function looks up the API ID based on its name.
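
find_api_id() is also not shown in the post; a plausible implementation, sketched here with the boto3 get_rest_apis paginator, resolves the ID from the API name:

# Sketch: look up a REST API ID by name with the get_rest_apis paginator.
def find_api_id(api_gateway_client, api_name):
    paginator = api_gateway_client.get_paginator('get_rest_apis')
    for page in paginator.paginate():
        for api in page.get('items', []):
            if api.get('name') == api_name:
                return api['id']
    raise ValueError(f"No API named {api_name} found")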

3. Check the API Gateway dashboard in the AWS Management Console for the new API definition. It shows the API and its resources:

API Gateway dashboard

Now you are ready to issue a test call to the external API to validate its security and functionality. The Open API definition of a manually created or modified API can be exported by various means, including from the stage editor.

Validate the API

The correct way to call the API is shown in test_get_dubbing_job_status_API() from test/ondemand_test_call_service.py:

response = _send_request(secure=True,
                         host='<<yourapi>>.execute-api.us-east-1.amazonaws.com',
                         service_port=80,
                         path='sample-generated-api',
                         operation=OPERATION_CREATE_PERSON,
                         request=CreatePersonRequest(Person(name='Jane Doe', age=40)),
                         api_key='<<yourapikey>>')

response_obj = CreatePersonResponse.from_json(response)

assert response_obj.person_id is not None

If you call the API without the api_key parameter, it returns an HTTP 403 code and the error message:

{"message":"Forbidden"}

Conclusion

This post shows how to automatically expose Python services as public APIs directly from the code. With the introduction of Python data classes, it is easy to automate JSON serialization.

Now you can fully automate the API generation and deployment tasks for API Gateway. Introducing a new entity is trivial, and adding a new field to your API requires only writing its definition. You can develop a fully functional API based upon these building blocks.

Learn more from this sample repository, and adapt the code for your projects to achieve a high level of automation for your public APIs.

 

Building a serverless URL shortener app without AWS Lambda – part 3

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/building-a-serverless-url-shortener-app-without-lambda-part-3/

This is the final installment of a three-part series on building a serverless URL shortener without using AWS Lambda. This series highlights the power of Amazon API Gateway and its ability to directly integrate with services like Amazon DynamoDB. The result is a low latency, highly available application that is built with managed services and requires minimal code.

In part one of this series, I demonstrate building a serverless URL shortener application without using AWS Lambda. In part two, I walk through implementing application security using Amazon API Gateway settings and Amazon Cognito. In this part of this series, I cover application observability and performance.

Application observability

Before I can gauge the performance of the application, I must first be able to observe the performance of my application. There are two AWS services that I configure to help with observability, AWS X-Ray and Amazon CloudWatch.

X-Ray

X-Ray is a tracing service that enables developers to observe and debug distributed applications. With X-Ray enabled, every call to the API Gateway endpoint is tagged and monitored throughout the application services. Now that I have the application up and running, I want to test for errors and latency. I use Nordstrom’s open-source load testing library, serverless-artillery, to generate activity to the API endpoint. During the load test, serverless-artillery generates 8,000 requests per second (RPS) for a period of five minutes. The results are as follows:

X-Ray Tracing

This indicates that, from the point the request reaches API Gateway to when a response is generated, the average time for each request is 8 milliseconds (ms) with a 4 ms integration time to DynamoDB. It also indicates that there were no errors, faults, or throttling.

I change the parameters to increase the load and observe how the application performs. This time serverless-artillery generates 11,000 RPS for a period of 30 seconds. The results are as follows:

X-Ray tracing with throttling

X-Ray now indicates request throttling. This is due to the default throttling limits of API Gateway. Each account has a soft limit of 10,000 RPS with a burst limit of 5,000 requests. Since I am load testing the API with 11,000 RPS, API Gateway is throttling requests over 10,000 per second. When throttling occurs, API Gateway responds to the client with a status code of 429. Using X-Ray, I can drill down into the response data to get a closer look at requests by status code.

X-Ray analytics

CloudWatch

The next tool I use for application observability is Amazon CloudWatch. CloudWatch captures data for individual services and supports metric based alarms. I create the following alarms to have insight into my application:

  • APIGateway4xxAlarm: One percent of the API calls result in a 4xx error over a one-minute period.
  • APIGateway5xxAlarm: One percent of the API calls result in a 5xx error over a one-minute period.
  • APIGatewayLatencyAlarm: The p99 latency is over 75 ms over a five-minute period.
  • DDB4xxAlarm: One percent of the DynamoDB requests result in a 4xx error over a one-minute period.
  • DDB5xxAlarm: One percent of the DynamoDB requests result in a 5xx error over a one-minute period.
  • CloudFrontTotalErrorRateAlarm: Five requests to CloudFront result in a 4xx or 5xx error over a one-minute period.
  • CloudFrontTotalCacheHitRateAlarm: 80% or less of the requests to CloudFront result in a cache hit over a five-minute period. While this is not an error or a problem, it indicates the need for a more aggressive caching strategy.

Each of these alarms is configured to publish to a notification topic using Amazon Simple Notification Service (SNS). In this example I have configured my email address as a subscriber to the SNS topic. I could also subscribe a Lambda function or a mobile number for SMS message notification. I can also get a quick view of the status of my alarms on the CloudWatch console.
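
As an illustration, an alarm similar to APIGateway5xxAlarm could be created programmatically with boto3, as sketched below; the API name and topic ARN are placeholders, and the threshold mirrors the list above.

# Sketch: a CloudWatch alarm on the API Gateway 5XXError metric, publishing to SNS.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="APIGateway5xxAlarm",
    Namespace="AWS/ApiGateway",
    MetricName="5XXError",
    Dimensions=[{"Name": "ApiName", "Value": "url-shortener-api"}],  # placeholder API name
    Statistic="Average",          # 5XXError averages to an error rate between 0 and 1
    Period=60,
    EvaluationPeriods=1,
    Threshold=0.01,               # one percent over a one-minute period
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:alarm-topic"],  # placeholder topic ARN
)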

CloudWatch and X-Ray provide additional alerts when there are problems. They also provide observability to help remediate discovered issues.

Performance

With observability tools in place, I am now able to evaluate the performance of the application. In part one, I discuss using API Gateway and DynamoDB as the primary services for this application and the performance advantage provided. However, these performance advantages are limited to the backend only. To improve performance between the client and the API I configure throttling and a content delivery network with Amazon CloudFront.

Throttling

Request throttling is handled with API Gateway and can be configured at the stage level or at the resource and method level. Because this application is a URL shortener, the most important action is the 301 redirect that happens at /{linkId} – GET. I want to ensure that these calls take priority, so I set a throttling limit on all other actions.

The best way to do this is to set a global throttling limit of 2,000 RPS with a burst of 1,000. I then configure an override on the /{linkId} – GET method to 10,000 RPS with a burst of 5,000. If the API is experiencing an extraordinarily high volume of calls, all other calls are rejected.

Content delivery network

The distance between a user and the API endpoint can severely affect the performance of an application. Simply put, the further the data has to travel, the slower the application. By configuring a CloudFront distribution to use the Amazon CloudFront Global Edge Network, I bring the data closer to the user and increase performance.

I configure the cache for /{linkId} – GET to “max-age=300” which tells CloudFront to store the response of that call for 5 minutes. The first call queries the API and database for a response, while all subsequent calls in the next five minutes receive the local cached response. I then set all other endpoint cache to “no-cache, no-store”, which tells CloudFront to never store the value from these calls. This ensures that as users are creating or editing their short-links, they get the latest data.

By bringing the data closer to the user, I now ensure that regardless of where the user is, they receive improved performance. To evaluate this, I return to serverless-artillery and test the CloudFront endpoint. The results are as follows:

  • Min: 8.12 ms
  • Max: 739 ms
  • Average: 21.7 ms
  • p10: 10.1 ms
  • p50: 12.1 ms
  • p90: 20 ms
  • p95: 34 ms
  • p99: 375 ms

To be clear, these are the 301 redirect response times. I configured serverless-artillery not to follow the redirects, as I have no control over the speed of the resulting site. The maximum response time of 739 ms would be the initial uncached call. The p50 metric shows that half of the traffic sees a response time of 12 ms or better, while the p95 indicates that 95% of the traffic experiences a response time of 34 ms or better.

Conclusion

In this series, I talk through building a serverless URL shortener without the use of any Lambda functions. The resulting architecture looks like this:

This application is built by integrating multiple managed services together and applying business logic via mapping templates on API Gateway. Because these are all serverless managed services, they provide inherent availability and scale to meet the client load as needed.

While the “Lambda-less” pattern is not a match for every application, it is a great answer for building highly performant applications with minimal logic. The advantage to this pattern is also in its extensibility. With the data saved to DynamoDB, I can use the DynamoDB streaming feature to connect additional processing as needed. I can also use CloudFront access logs to evaluate internal application metrics. Clone this repo to start serving your own shortened URLs and submit a pull request if you have an improvement.

Did you miss any of this series?

  1. Part 1: Building the application.
  2. Part 2: Securing the application.

Happy coding!

Building a serverless URL shortener app without AWS Lambda – part 2

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/building-a-serverless-url-shortener-app-without-lambda-part-2/

This post is the second installment of a three-part series on building a serverless URL shortener without using AWS Lambda. The purpose of the series is to highlight the power of Amazon API Gateway and its ability to integrate directly with backend services like Amazon DynamoDB. The result is a low latency, highly available application that is built with managed services and requires minimal code.

In part one, I cover using Apache Velocity Templating Language (VTL) and Amazon API Gateway to manage business logic usually processed by an AWS Lambda function. Now I discuss several methods for securing the API Gateway and any resources behind it. Whether building a functionless application as described in part one, or proxying a compute layer with API Gateway, this post offers some best practices for configuring API Gateway security.

To refer to the full application, visit https://github.com/aws-samples/amazon-api-gateway-url-shortener. The template.yaml file is the AWS SAM configuration for the application, and the api.yaml is the OpenAPI configuration for the API. I include instructions on how to deploy the full application, together with a simple web client, in the README.md file.

There are several steps to secure the API. First, I use AWS Identity and Access Management (IAM) to ensure I practice least privilege access to the application services. Additionally, I enable authentication and authorization, enforce request validation, and configure Cross-Origin Resource Sharing (CORS).

Secure functionless architecture

IAM least privileges

When configuring API Gateway to limit access to services, I create two specific roles for interaction between API Gateway and Amazon DynamoDB.

The first, DDBReadRole, limits actions to GetItem, Scan, and Query. This role is applied to all GET methods on the API. For POST, PUT, and DELETE methods, there is a separate role called DDBCrudRole that allows only the DeleteItem and UpdateItem actions. Additionally, the SAM template dynamically assigns these roles to a specific table. Thus, allowing these roles to only perform the actions on this specific table.

Authentication and authorization

For authentication, I configure user management with Amazon Cognito. I then configure an API Gateway Cognito authorizer to manage request authorization. Finally, I configure the client for secure requests.

Configuring Cognito for authentication

For authentication, Cognito provides user directories called user pools that allow user creation and authentication. For the user interface, developers have the option of building their own with AWS Amplify or having Cognito host the authentication pages. For simplicity, I opt for Cognito hosted pages. The workflow looks like this:

Cognito authentication flow

To set up the Cognito service, I follow these steps:

  1. Create a Cognito user pool. The user pool defines the user data and registration flows. This application is configured to use an email address as the primary user name. It also requires email validation at time of registration.
    Cognito user pool
  2. Create a Cognito user pool client. The client application is connected to the user pool and has permission to call unauthenticated APIs to register and login users. The client application configures the callback URLs as well as the identity providers, authentication flows, and OAuth scopes.
    Cognito app client
  3. Create a Cognito domain for the registration and login pages. I configure the domain to use the standard Cognito domains with a subdomain of shortener. I could also configure this to match a custom domain.
    Cognito domain

Configuring the Cognito authorizer

Next, I integrate the user pool with API Gateway by creating a Cognito authorizer. The authorizer allows API Gateway to verify an incoming request with the user pool to allow or deny access. To configure the authorizer, I follow these steps:

  1. Create the authorizer on the API Gateway. I create a new authorizer and connect it to the proper Cognito user pool. I also set the header name to Authorization.
    Cognito authorizer
  2. Next I attach the authorizer to each resource and method needing authorization by this particular authorizer.
    Connect Cognito authorizer to method

Configure the client for secure requests

The last step for authorized requests is to configure the client. As explained above, the client interacts with Amazon Cognito to authenticate and obtain temporary credentials. The truncated temporary credentials follow the format:

{
  "id_token": "eyJraWQiOiJnZ0pJZzBEV3F4SVUwZngreklE…",
  "access_token": "eyJraWQiOiJydVVHemFuYjJ0VlZicnV1…",
  "refresh_token": "eyJjdHkiOiJKV1QiLCJlbmMiOiJBMjU…",
  "expires_in": 3600,
  "token_type": "Bearer"
}

For the client to access any API Gateway resources that require authentication, it must include the Authorization header with the value set to the id_token. API Gateway treats it as a standard JSON Web Token (JWT) and decodes it for authorization.
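
For example, a Python client that already holds the tokens shown above might call the protected POST /app method like this; the invoke URL and payload values are placeholders.

# Sketch: call an authorized API Gateway resource with the Cognito id_token.
import requests

tokens = {"id_token": "eyJraWQiOiJnZ0pJZzBEV3F4SVUwZngreklE..."}  # from the Cognito login above

response = requests.post(
    "https://<api-id>.execute-api.us-east-1.amazonaws.com/Prod/app",  # placeholder invoke URL
    headers={"Authorization": tokens["id_token"], "Content-Type": "application/json"},
    json={"id": "aws", "url": "http://aws.amazon.com"},
)
print(response.status_code, response.json())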

Request validation

The next step in securing the application is to validate the request payload to ensure it contains the expected data. When creating a new short link, the POST method request body must match the following:

{
  "id": "short link",
  "url": "target url"
}

To configure request validation, I first create a schema defining the expected POST method body payload. The schema looks like this:

{
  "required" : [ "id", "url" ],
  "type" : "object",
  "properties" : {
    "id" : { "type" : "string"},
    "url" : {
      "pattern" : "^https?://[[email protected]:%._\\+~#=]{2,256}\\.[a-z]{2,6}\\b([[email protected]:%_\\+.~#?&//=]*)",
      "type" : "string”
    }
  }
}

The schema requires both id and url, and requires that they are both strings. It also uses a regex pattern to ensure that the url is a valid format.
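
API Gateway enforces this validation on its own, but the same schema can also be exercised locally, for example with the Python jsonschema package, to sanity-check payloads before deploying. Here is a minimal sketch; the URL check is simplified to a format hint rather than the full regex pattern.

# Sketch: validate a payload locally against a simplified version of the schema.
from jsonschema import validate, ValidationError

schema = {
    "required": ["id", "url"],
    "type": "object",
    "properties": {
        "id": {"type": "string"},
        "url": {"type": "string", "format": "uri"},  # simplified; the API uses a regex pattern
    },
}

try:
    validate(instance={"id": "aws", "url": "http://aws.amazon.com"}, schema=schema)
    print("valid")
except ValidationError as err:
    print("invalid:", err.message)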

Next, I create request validator definitions. Using the OpenAPI extensibility markup, I create three validation options: all, params-only, and body-only. Here is the markup:

 

x-amazon-apigateway-request-validators:
  all:
    validateRequestBody: true
    validateRequestParameters: true
  body:
    validateRequestBody: true
    validateRequestParameters: false
  params:
    validateRequestBody: false
    validateRequestParameters: true

These definitions appear in the OpenAPI template and are mapped to the choices on the console.

Attaching validation to methods

With the validation definitions in place, and the schema defined, I then attach the schema to the POST method and require validation of the request body against the schema. If the conditions of the schema are not met, API Gateway rejects the request with a status code of 400 and an error message stating, “Invalid request body”.

CORS

Cross-Origin Resource Sharing is a mechanism for allowing applications from different domains to communicate. The limitations are based on exchanged headers between the client and the server. For example, the server passes the Access-Control-Allow-Origin header, which indicates which client domain is allowed to interact with the server. The client passes the Origin header that indicates what domain the request is coming from. If the two headers do not match exactly, then the request is rejected.

It is also possible to use a wildcard value for many of the allowed values. For Origin, this means that any client domain can connect to the backend domain. While wildcards are possible, using them misses an opportunity to add another layer of security to the application. In light of this, I configure CORS to restrict API access to the client application. To help understand the different CORS settings required, here is a layout of the API endpoints:

API resource and methods structure

When an endpoint requires authorization, or a method other than GET is used, browsers perform a pre-flight OPTIONS check. This means they make a request to the server to find out what the server allows.

To accommodate this, I configure an OPTIONS response using an API Gateway mock endpoint. This is the header configuration for the /app OPTIONS call:

Access-Control-Allow-Methods: 'POST, GET, OPTIONS'
Access-Control-Allow-Headers: 'authorization, content-type'
Access-Control-Allow-Origin: '<client-domain>'

The configuration for the /app/{linkId} OPTIONS call is similar:

Access-Control-Allow-Methods: 'PUT, DELETE, OPTIONS'
Access-Control-Allow-Headers: 'authorization, content-type'
Access-Control-Allow-Origin: '<client-domain>'

In addition to the OPTIONS call, I also add the browser-required Access-Control-Allow-Origin header to the responses of the PUT, POST, and DELETE methods.

Adding a header to the response is a two-step process. Because the response to the client is modeled at the Method Response, I first set the expected header here:

Response headers

The Integration Response is responsible for mapping the data from the integrated backend service to the proper values, so I map the value of the header here:

Response header values

With the proper IAM roles in place, authentication and authorization configured, and data validation enabled, I now have a secure backend to my serverless URL Shortener. Additionally, by making proper use of CORS I have given my test client access to the API to provide a full-stack application.

Conclusion

In this post, I demonstrate configuring built-in features of API Gateway to secure applications fronted with API Gateway. While this is not an exhaustive list of API Gateway features, it is a good starting point for API security and what can be done at the API Gateway level. In part three, I discuss how to observe and improve the performance of the application, as well as reporting on internal application metrics.

Continue to part three.

Happy coding!

Building a serverless URL shortener app without AWS Lambda – part 1

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/building-a-serverless-url-shortener-app-without-lambda-part-1/

When building applications, developers often use a standard multi-tier architecture pattern that generally includes a presentation, processing, and data tier. When building such an application using serverless technologies on AWS, it might look like the following:

Serverless architecture

In this three-part series, I am going to challenge you to approach this a different way by building a functionless or “backend-less” URL shortener application, that looks like this:

Functionless architecture

In part one, I discuss configuring a service integration between Amazon API Gateway and Amazon DynamoDB, removing the need for AWS Lambda entirely. I also demonstrate using Apache’s Velocity Templating Language (VTL) to apply business logic and modify the API request and response as needed. In part two, I show how to use API Gateway to increase security. In part three, I demonstrate how to improve response time and configure observability to get insights into application performance and client usage.

At AWS re:Invent 2019, the new HTTP API for Amazon API Gateway was announced. At the time of this writing, this new service does not support VTL or some of the other features discussed, so instead I use a REST API. When HTTP API gains feature parity, we will publish an additional follow up to this post.

Throughout this blog series, there are deep links to AWS SAM and OpenAPI configurations to show how to build this application using infrastructure as code (IaC). To refer to the full application, visit https://github.com/aws-samples/amazon-api-gateway-url-shortener. The template.yaml file is the AWS SAM configuration for the application, and the api.yaml is the OpenAPI configuration for the API. I have included instructions on how to deploy the full application, including a simple web client, in the README.md file.

Why would I do this?

AWS Lambda is the standard compute resource for serverless applications. With a Lambda function, I can process complex business logic in any of the AWS supported runtimes or even in my own custom runtime. However, do I really need to use a Lambda function when the business logic is minimal, and the main purpose becomes the transportation of data? Instead, I can turn to API Gateway to transport the data and process minimal amounts of business logic, as needed, with VTL. This allows me to minimize my application resources and cost.

API Gateway service integration

While each request to an API Gateway REST endpoint follows the same path, to understand how service integrations work, I show the integration for /app – POST. This represents the lifecycle of a request made to http://myexampleapi.com/api using a POST method. The purpose of this endpoint is to post new short links to the database.

API Gateway request lifecycle

The Method Request and Method Response mainly handle authorization, modeling, and validation, and are covered in detail in part two of this blog. For now, I focus on the Integration Request and Integration Response. The Integration Request is responsible for service integrations, and looks like this:

POST integration request

The Integration type is AWS Service and the AWS Region is my closest Region, us-west-2. For AWS Service, I choose DynamoDB from the long list of available services. For the HTTP Method, when interacting with the DynamoDB API, the POST method is required to take action on the underlying table.

For the Action, I choose UpdateItem. The action is the same here as you would use in the CLI or SDK to interact with DynamoDB. Generally, when adding new items to the DynamoDB table, I use the PutItem command. However, in this instance I must use UpdateItem to get a specific set of return data from DynamoDB.

When creating a new record in DynamoDB, the PutItem action does not return the completed record in the single request. If I want to obtain the new record, I need to make a secondary call to DynamoDB to fetch the record. However, the API Gateway request lifecycle does not have the ability to call the database a second time. I need to make sure I get everything I need the first time around. The nature of the UpdateItem is to update an existing item or create a new one if it doesn’t exist. Additionally, it returns the newly created object which I can then return to the client.

Finally, I configure the execution role. On this method, API Gateway needs permission to read and write from DynamoDB. Here is the policy section of the DDBCrudRole:

Policies:
  - PolicyName: DDBCrudPolicy
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        Action:
          - dynamodb:DeleteItem
          - dynamodb:UpdateItem
        Effect: Allow
        Resource: !GetAtt LinkTable.Arn

This simple policy is used for all create, read, update, and delete (CRUD) operations, and UpdateItem is used for both create and update. This policy is part of the SAM template, and dynamically references the DynamoDB table name for the resource. This follows the principles of least privilege, only allowing access to the required table.

Modifying the request

Now that I have configured the integration from API Gateway to DynamoDB, I modify the incoming request to a format that DynamoDB understands. Further down the page on the Integration Request, you see the Mapping Template option:

Mapping templates

The mapping template evaluates incoming request body and looks for existing templates to apply. I have created a template for application/json to match the incoming body. Here is a summarized version of the template:

{
  "TableName": "URLShortener-LinkTable-QTK7WFAJ11YS",
  "ConditionExpression":"attribute_not_exists(id)",
  "Key": {
    "id": { "S": $input.json('$.id') }
  },
  "ExpressionAttributeNames": {
    "#u": "url",
    "#o": "owner",
    "#ts": "timestamp"
  },
  "ExpressionAttributeValues":{
    ":u": {"S": $input.json('$.url')},
    ":o": {"S": "$context.authorizer.claims.email"},
    ":ts": {"S": "$context.requestTime"}
  },
  "UpdateExpression": "SET #u = :u, #o = :o, #ts = :ts",
  "ReturnValues": "ALL_NEW"
}

If you have worked with the DynamoDB SDK, this might look familiar. The TableName indicates which table to use in the call. The ConditionExpression value ensures that the id passed does not already exist. The value for id is extracted from the request body using $input.json('$.id').

To avoid colliding with reserved words, DynamoDB has the concept of ExpressionAttributeNames and ExpressionAttributeValues. In the ExpressionAttributeValues I have set ':o' to $context.authorizer.claims.email. This extracts the authenticated user’s email from the request context and maps it to owner. This allows me to uniquely group a single user’s links into a global secondary index (GSI). Querying the GSI is much more efficient than scanning the entire table.

I also retrieve the requestTime from the context object, allowing me to place a timestamp in the record. I set the ReturnValues to return all new values for the record. Finally, the UpdateExpression maps the values to the proper names and inserts the item into DynamoDB.
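
For comparison, the equivalent call made directly from code would look roughly like the following boto3 sketch; the table name and attribute values are placeholders that mirror the template above.

# Sketch: the boto3 equivalent of the UpdateItem call that API Gateway performs.
import boto3

dynamodb = boto3.client("dynamodb")

response = dynamodb.update_item(
    TableName="URLShortener-LinkTable-QTK7WFAJ11YS",        # placeholder table name
    Key={"id": {"S": "aws"}},
    ConditionExpression="attribute_not_exists(id)",         # fail if the short link already exists
    UpdateExpression="SET #u = :u, #o = :o, #ts = :ts",
    ExpressionAttributeNames={"#u": "url", "#o": "owner", "#ts": "timestamp"},
    ExpressionAttributeValues={
        ":u": {"S": "http://aws.amazon.com"},
        ":o": {"S": "user@example.com"},                    # placeholder owner email
        ":ts": {"S": "27/Dec/2019:21:21:17 +0000"},
    },
    ReturnValues="ALL_NEW",                                 # return the full new record
)
print(response["Attributes"])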

Modifying the response

Before I discuss the Integration Response, let’s examine the Method Response:

Method response

The Method Response is responsible for modeling the response to the client. In most cases, DynamoDB returns a status code of either 200 or 400. Therefore, I configure a 200 response and a 400 response.

When DynamoDB returns a 200 response, the data looks like the following:

{
  "id": {"S": "aws"},
  "owner": {"S": "[email protected]"},
  "timestamp": {"S": "27/Dec/2019:21:21:17 +0000"},
  "url": {"S": "http://aws.amazon.com"}
}

In the Integration Response, I have a template that converts this to a structure that the client is expecting. The template looks like this:

#set($inputRoot = $input.path('$'))
{
  "id":"$inputRoot.Attributes.id.S",
  "url":"$inputRoot.Attributes.url.S",
  "timestamp":"$inputRoot.Attributes.timestamp.S",
  "owner":"$inputRoot.Attributes.owner.S"
}

This template has a variable called $inputRoot to contain the root data. I then build out the return object, formatted for the client:

{
  "id": "aws",
  "url": http://aws.amazon.com,
  "timestamp": "27/Dec/2019:21:21:17 +0000",
  "owner": "[email protected]"
}

For a 400 status, I must evaluate the issue and respond accordingly. The mapping template looks like this:

#set($inputRoot = $input.path('$')) 
#if($inputRoot.toString().contains("ConditionalCheckFailedException")) 
  #set($context.responseOverride.status = 200)
  {"error": true,"message": "URL link already exists"} 
#end

This template checks for the string, “ConditionalCheckFailedException”. If it exists, then I know that the conditional check “attribute_not_exists(id)”, from the UpdateItem template in the Integration Request, failed. To return a 200 response, I use the “#set($context.responseOverride.status = 200)” override and set the response with the error details.

With my integration and mapping templates in place for the /app – POST method, I now have the ability to create new short links for my URL shortener. Taking this same approach for reading, updating, and deleting short links, I now have a fully functioning backend for the URL shortener that only uses API Gateway and DynamoDB.

What we have built so far

Conclusion

In this post, I walked through using VTL to manage simple business logic at the processing tier with API Gateway. I covered configuring the service integration with DynamoDB and modifying the request and response payloads as needed. In part 2, I discuss different options for configuring Amazon API Gateway security.

To deploy the URL shortener, visit https://github.com/aws-samples/amazon-api-gateway-url-shortener. The README.md file contains instructions for launching the application.

Continue to part two.

Happy coding!

Analyzing API Gateway custom access logs for custom domain names

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/analyzing-api-gateway-custom-access-logs-for-custom-domain-names/

This post is courtesy of Taka Matsumoto, Cloud Support Engineer, AWS

If you are using custom domain names in Amazon API Gateway, it can be useful to gain insights into requests sent to each custom domain name. Although API Gateway provides CloudWatch metrics and options to deliver request logs to Amazon CloudWatch Logs, there is no pre-defined metric or log specific to custom domain names. If there is more than one custom domain name mapped to a single API, understanding the quantity and type of requests by domain name may help understand request patterns.

Using the custom access logging option in API Gateway enables delivery of custom logs to CloudWatch Logs, which can be analyzed using CloudWatch Logs Insights. This blog post walks through the steps to create a CloudWatch log group for API custom access logging, and uses CloudWatch Logs Insights for analysis.

Overview

In the tutorial, you create a CloudWatch log group for custom access logging. You then enable custom access logging for an API stage associated with a custom domain name. The IAM role used in this tutorial must be able to create and update the relevant resources in CloudWatch, IAM, and API Gateway. For this tutorial, use the US East (N. Virginia) Region. In the next steps, the tutorial covers:

  1. Creating a CloudWatch Log group.
  2. Creating an IAM role for access logging.
  3. Enabling custom access logging.
  4. Testing an API using a custom domain name.
  5. Analyzing Logs in CloudWatch Logs Insights.

1. Create a CloudWatch Log group

Before enabling custom access logging for your API’s stage, create a CloudWatch log group to deliver custom logs. Create a log group called APIGateway_CustomDomainLogs by following these steps:

  1. Go to the CloudWatch Logs console.
  2. Under Actions, click on Create log group and name the log group APIGateway_CustomDomainLogs. Learn more about creating a log group in Working with Log Groups and Log Streams.

Create log group

2. Create an IAM role for access logging

You must use an IAM role to deliver logs from API Gateway to CloudWatch Logs. If there is no IAM role already available for logging in API Gateway, create a new IAM role:

  1. Navigate to the IAM console.
  2. Under Roles, choose Create role.
  3. Select API Gateway for the service and choose Next: Permissions.

    IAM selection

  4. Leave the attached IAM policy (AmazonAPIGatewayPushToCloudWatchLogs), and choose Next: Tags.
  5. No tags are required for this tutorial. Leave these blank and choose Next: Review.
  6. Name the role APIGatewayCloudWatchLogsRole and choose Create role.
    Create role

3. Enable custom access logging

Now you enable custom access logging. Select one of the API stages that you invoke through a custom domain name:

  1. If there is no CloudWatch log role set for API Gateway, go to the API Gateway Settings page to add the CloudWatch log role ARN. The IAM role ARN follows this format: arn:aws:iam::123456789012:role/APIGatewayCloudWatchLogsRole.
  2. For your API with a custom domain name, go to Stages page and select the Logs/Tracing tab.
  3. Enter the following fields:
    – For CloudWatch Group, add the ARN (for example, arn:aws:logs:us-east-1:123456789012:log-group:APIGateway_CustomDomainLogs).
    – For Log Format, enter:

    {
        "RequestId": "$context.requestId",
        "DomainName": "$context.domainName",
        "APIId": "$context.apiId",
        "RequestPath": "$context.path",
        "RequestTime": "$context.requestTime",
        "SourceIp": "$context.identity.sourceIp",
        "ResourcePath": "$context.resourcePath",
        "Stage": "$context.stage"
    }

    Custom access logging

  4.  Choose Save changes.

Learn more about custom access logging setup in Set up API Logging Using the API Gateway Console. In the Log Format configuration, $context variables retrieve a domain name as well as other API request information. Learn more in $context Variables for Data Models, Authorizers, Mapping Templates, and CloudWatch Access Logging.

4. Test invoke an API using a custom domain name

Once you enable custom access logging, invoke the API using the custom domain name. The logs appear in the specified CloudWatch log group shortly after. A sample response in the CloudWatch log stream looks like the following:

{
    "RequestId": "1b1ebe20-817f-11e9-a796-f5e0ffdcdac7",
    "DomainName": "test.example.com”,
    "APIId": "12345abcde",
    "RequestPath": "/dev",
    "RequestTime": "28/May/2019:19:30:52 +0000",
    "SourceIp": "1.2.3.4",
    "ResourcePath": "/",
    "Stage": "dev"
}

5. Analyze logs in CloudWatch Logs Insights

After setting up the custom access logs, you can query against them to find more insights using the custom domain name.

  1. Go to CloudWatch Logs Insights console.
  2. In the log group text field, select the CloudWatch log group, APIGateway_CustomDomainLogs.
  3. Enter the following query.
    fields @timestamp, @message
    | filter DomainName like /(?i)(test.example.com)/

    This query returns a list of log entries for the custom domain called test.example.com. To run in your account, replace this value with your custom domain name.
    Custom filter

If your network security does not allow the use of web sockets, you cannot access the CloudWatch Logs Insights console. Instead, use the CloudWatch Logs Insights query capabilities through the API. Here are some example queries using the AWS CLI:

1. A sample command for aws logs start-query:

aws logs start-query --log-group-name APIGateway_CustomDomainLogs --start-time 1557085225000 --end-time 1559763625000 --query-string 'fields @timestamp, @message | filter DomainName like /(?i)(test.example.com)/'

The response looks like:

{
    "queryId": "a1234567-bfde-47c7-9d44-41ebed011c66"
}

Learn more about start-query command in aws logs start-query.

2. Run aws logs get-query-results to retrieve the result of the query. A sample command for aws logs get-query-results:

aws logs get-query-results --query-id a1234567-bfde-47c7-9d44-41ebed011c66

The response looks like:

{
    "results": [
        [
            {
                "field": "@timestamp",
                "value": "2019-05-28 19:30:52.494"
            },
            {
                "field": "@message",
                "value": "{\"RequestId\": \"12345678-7cb3-11e9-8896-c30af5588427\",\"DomainName\":\"test.exmaple.com\"}"
            },
            {
                "field": "@ptr",
                "value": "CmEKKAokOTYxNTQyNjM4MjQzOkFQSUdhdGV3YXlfQ3VzdG9tRG9tYWluEAISNRoYAgXM/KQtAAAAAA0L2foABc5YFyAAAAHSIAEoxviFhK4tMM+HhoSuLTgCQOoBSN4OUN4KEAAYAQ=="
            }
        ]
    ],
    "statistics": {
        "recordsMatched": 1.0,
        "recordsScanned": 1.0,
        "bytesScanned": 153.0
    },
    "status": "Complete"
}

You can use other queries to filter the results based on other attributes in the logs. Learn more about get-query-results command in aws logs get-query-results.
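
The same Insights query can also be scripted. The following boto3 sketch starts the query and polls for results, using the log group and query string from above; the time values are placeholder epoch seconds.

# Sketch: run the CloudWatch Logs Insights query with boto3 and poll for results.
import time
import boto3

logs = boto3.client("logs")

query = logs.start_query(
    logGroupName="APIGateway_CustomDomainLogs",
    startTime=1557085225,   # placeholder epoch seconds
    endTime=1559763625,
    queryString="fields @timestamp, @message | filter DomainName like /(?i)(test.example.com)/",
)

while True:
    result = logs.get_query_results(queryId=query["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})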

Conclusion

In this blog post, I show how to deliver custom access logs from API Gateway to CloudWatch Logs. I also show how to use CloudWatch Logs Insights to run a query against the logs for custom domain name metrics, which help provide insights into custom domain name usage.

To learn more about the query syntax, visit CloudWatch Logs Insights Query Syntax.