All posts by Chris Munns

Updated timeframe for the upcoming AWS Lambda and AWS Lambda@Edge execution environment update

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/updated-timeframe-for-the-upcoming-aws-lambda-and-aws-lambdaedge-execution-environment-update/

On May 14th we announced an upcoming update to the AWS Lambda and AWS Lambda@Edge execution environments. In that announcement, we shared that we are updating the execution environment to a more recent version of Amazon Linux. This newer execution environment brings improvements in capabilities, performance, and security, along with updated packages that your application code might interface with. The previous post explained approaches to proactively testing against the new update, and methods to update your code to be compatible in the rare case you were impacted.

So far, we’ve heard from many customers that their functions have not been impacted when running on the new execution environment, enabled via the opt-in mechanism. Those that have been impacted have been able to follow the guidance on rebuilding any dependencies against the new execution environment and retesting their functions with success.

We also received feedback that customers wanted a longer time frame for validation, as well as more control over it. Based on this feedback, we’ve decided to modify the timeframe in two ways.

The first phase, Begin Testing, is extended by three weeks, retroactive to May 21, and now ends June 10. This gives you more time to test your functions with the Opt-In layer before any further changes to the platform kick in.

We are then taking the second phase, originally called Update/Create, and breaking it into two independent periods of time. The first, now referred to as the New Function Create phase, will be two weeks long; during this time, all newly created functions will use the new execution environment unless a delayed-update layer is configured. The second new phase, Existing Function Update, will be three weeks long; during this time, both newly created functions and existing functions that you update will use the new execution environment unless a delayed-update layer is configured.

The end result is that you now have 5 more weeks in total to test and potentially update your functions for this change before the General Update begins. As a reminder, starting at that time, all functions without a delayed-update layer configured will begin migrating to the new execution environment.

New update timeline

The following is the new timeline for the update, which is now broken up over five phases:

  • May 14, 2019 – Begin Testing: You can begin testing your functions for the new execution environment locally with the AWS SAM CLI or using an Amazon EC2 instance running Amazon Linux 2018.03. You can also proactively enable the new environment in AWS Lambda using the opt-in mechanism described in the original announcement post.
  • June 11, 2019 – New Function Create: All newly created functions run on the new execution environment, unless they have a delayed-update layer configured.
  • June 25, 2019 – Existing Function Update: All newly created functions, as well as existing functions that you update, run on the new execution environment, unless they have a delayed-update layer configured.
  • July 16, 2019 – General Update: Existing functions begin using the new execution environment on invoke, unless they have a delayed-update layer configured.
  • July 23, 2019 – Delayed Update End: All functions with a delayed-update layer configured start being migrated automatically.
  • July 29, 2019 – Migration End: All functions have been migrated over to the new execution environment.

Note that we have updated the original announcement post with this new timeline as well.

FAQ

We also wanted to take the chance to provide additional information on follow-up questions customers have had about the update.

Q. How does this relate to the recent Node.js v10 runtime launch?
A. The Node.js v10 launch is unrelated and is not impacted by this change. The Node.js v10 runtime is based on Amazon Linux 2 as its execution environment. Please see the AWS Lambda Runtimes section in the documentation for more information.

Q. Does this update change the execution environment for other runtimes to run on Amazon Linux 2?
A. No, this update brings the execution environment to the latest Amazon Linux 1 distribution release. In the future, new runtimes will launch on Amazon Linux 2, but all previous existing runtimes will continue to run on Amazon Linux 1.

Q. Was this update related to the recent Intel Quarterly Security Release (QSR) 2019.1?
A. No, the update to the execution environment for Lambda and Lambda@Edge is unrelated to the Intel QSR. There is no action for Lambda or Lambda@Edge customers to take in relation to the QSR.

Next Steps

Your feedback greatly matters to us and we will continue to listen and learn from you. Please continue to contact us through AWS Support, the AWS forums, or AWS account teams.

Upcoming updates to the AWS Lambda and AWS Lambda@Edge execution environment

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/upcoming-updates-to-the-aws-lambda-execution-environment/

AWS Lambda was first announced at AWS re:Invent 2014. Amazon CTO Werner Vogels highlighted that with Lambda you run no servers, no instances, nothing; you just write your code. In 2016, we announced the launch of Lambda@Edge, which lets you run Lambda functions to customize content that CloudFront delivers, executing the functions in AWS locations closer to the viewer.

At AWS, we talk often about “shared responsibility” models. Essentially, those are the places where there is a handoff between what we as a technology provider offer you and what you as the customer are responsible for. In the case of Lambda and Lambda@Edge, one of the key things that we manage is the “execution environment.” The execution environment is what your code runs inside of. It is composed of the underlying operating system, system packages, the runtime for your language (if a managed one), and common capabilities like environment variables. From the customer standpoint, your primary responsibility is for your application code and configuration.

In this post, we outline an upcoming change to the execution environment for Lambda and Lambda@Edge functions for all runtimes with the exception of Node.js v10. As with any update, some functionality could be affected. We encourage you to read through this post to understand the changes and any actions that you might need to take.

Update overview

AWS Lambda and AWS Lambda@Edge run on top of the Amazon Linux operating system distribution and maintain updates to both the core OS and managed language runtimes. We are updating the Lambda execution environment AMI to version 2018.03 of Amazon Linux. This newer AMI brings updates that offer improvements in capabilities, performance, security, and updated packages that your application code might interface with.

This does not apply to the recently announced Node.js v10 runtime, which today runs on Amazon Linux 2.

The majority of functions will benefit seamlessly from the enhancements in this update without any action from you. However, in rare cases, package updates may introduce compatibility issues. Potentially impacted functions are those that contain libraries or application code compiled against specific underlying OS packages or other system libraries. If you are primarily using an AWS SDK or the AWS X-Ray SDK with no other dependencies, you will see no impact.

You have the following options in terms of next steps:

  • Take no action before the automatic update of the execution environment starting May 21 for all newly created/updated functions and June 11 for all existing functions.
  • Proactively test your functions against the new environment starting today.
  • Configure your functions to delay the execution environment update until June 18 to allow for a longer testing window.

In addition to the overall timeline for this change, this post also provides instructions on the following:

  • How to test your functions for this new execution environment locally and on Lambda/Lambda@Edge.
  • How to proactively update your functions.
  • How to extend the testing window by one week.

Update timeline

The following is the timeline for the update, which is broken up over four phases over the next several weeks:

  • May 14, 2019 – Begin Testing: You can begin testing your functions for the new execution environment locally with the AWS SAM CLI or using an Amazon EC2 instance running Amazon Linux 2018.03. You can also proactively enable the new environment in AWS Lambda using the opt-in mechanism described later in this post.
  • May 21, 2019 – Update/Create: All new function creates or function updates result in your functions running on the new execution environment.
  • June 11, 2019 – General Update: Existing functions begin using the new execution environment on invoke, unless they have a delayed-update layer configured.
  • June 18, 2019 – Delayed Update End: All functions with a delayed-update layer configured start being migrated automatically.
  • June 24, 2019 – Migration End: All functions have been migrated over to the new execution environment.

Recommended Approach

(Figure: decision tree for the recommended approach)

You only have to act if your application uses dependencies that are compiled to work on the previous execution environment. Otherwise, you can continue to deploy new and updated Lambda functions without needing to perform any other testing steps. For those who aren’t sure if their functions use such dependencies, we encourage you to do a new deployment of your functions and to test their functionality.

There are two options for when you can start testing your functions on the new execution environment:

  • You can begin testing today using the opt-in mechanism described later.
  • Starting May 21, a new deploy or update of your functions uses the new execution environment.

If you confirm that your functions would be affected by the new execution environment, you can begin re-compiling or building your dependencies using the new reference AMI for the execution environment today and then repeat the testing. The final step is to redeploy your applications any time after May 21 to use the new execution environment.

Building your dependencies and application for the new execution environment

Because we are basing the environment on an existing Amazon Linux AMI, you can start by building and testing your code against that AMI on EC2. With an EC2 instance running this AMI, you can compile and build your packages using your normal processes. For the list of AMI IDs in all public Regions, check the release notes. To start an EC2 instance running this AMI, follow the steps in the Launching an Instance Using the Launch Instance Wizard topic in the Amazon EC2 User Guide.

Opt-in/Delayed-Update with Lambda layers

Some of you may want to begin testing as soon as you’ve read this announcement, while others may know that they should postpone until later in the timeline.

To give you some control over testing, we’re releasing two special Lambda layers. Lambda layers can be used to provide shared resources, code, or data between Lambda functions and can simplify the deployment and update process. These layers don’t actually contain any data or code. Instead, they act as a special flag to Lambda to run your function executions either specifically on the new or old execution environment.

The Opt-In layer allows you to start testing today. You can use the Delayed-Update layer when you know that you must make updates to your function or its configuration after May 21, but aren’t ready to deploy to the new execution environment. The Delayed-Update layer extends the initial period available for you to deploy your functions by one week, through June 17, without changing the execution environment.

Neither layer brings any performance or runtime changes beyond this. After June 24, the layers will have no functionality. In a future deployment, you should remove them from any function configurations.

The ARNs for the two scenarios:

  • To OPT-IN to the update to the new execution environment, add the following layer:

arn:aws:lambda:::awslayer:AmazonLinux1803

  • To DELAY THE UPDATE to the new execution environment until June 18, add the following layer:

arn:aws:lambda:::awslayer:AmazonLinux1703

The action for adding a layer to your existing functions requires an update to the Lambda function’s configuration. You can do this with the AWS CLI, AWS CloudFormation or AWS SAM, popular third-party frameworks, the AWS Management Console, or an AWS SDK.
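
For example, a minimal boto3 sketch of adding the Opt-In layer might look like the following. The function name is a placeholder, and reading the existing configuration first preserves any layers already attached:

import boto3

lambda_client = boto3.client('lambda')

FUNCTION_NAME = 'my-function'  # placeholder function name
OPT_IN_LAYER = 'arn:aws:lambda:::awslayer:AmazonLinux1803'

# Read the current configuration so existing layers are preserved
config = lambda_client.get_function_configuration(FunctionName=FUNCTION_NAME)
current_layers = [layer['Arn'] for layer in config.get('Layers', [])]

# Append the special layer and update the function configuration
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    Layers=current_layers + [OPT_IN_LAYER]
)

The same pattern applies to the Delayed-Update layer; swap in its ARN instead.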

Validating your functions

There are several ways for you to test your function code and assure that it will work after the execution environment has been updated.

Local testing

We’re providing an update to the AWS SAM CLI to enable you to test your functions locally against this new execution environment. The AWS SAM CLI uses a Docker image that mirrors the live Lambda environment locally, wherever you do development. To test against this new update, make sure that you have the most recent AWS SAM CLI, version 0.16.0 or later. You also should have an AWS SAM template configured for your function.

  1. Install or update the AWS SAM CLI:
    $ pip install --upgrade aws-sam-cli

    -Or-

    $ pip install aws-sam-cli
  2. Confirm that you have a valid AWS SAM template:
    $ sam validate -t <template file name>

    If you don’t have a valid AWS SAM template, you can begin with a basic template to test your functions. The following example represents the basic needs for running your function with any of a variety of runtimes. The Runtime value must be listed in the AWS Lambda Runtimes topic.

    AWSTemplateFormatVersion: 2010-09-09
    Transform: 'AWS::Serverless-2016-10-31'
    
    Resources:
      myFunction:
        Type: 'AWS::Serverless::Function'
        Properties:
          CodeUri: ./ 
          Handler: YOUR_HANDLER
          Runtime: YOUR_RUNTIME
  3. With a valid template, you can begin testing your function with mock event payloads. To generate a mock event payload, you can use the AWS SAM CLI local generate-event command. Here is an example of that command being run to generate an Amazon S3 notification type of event:
    sam local generate-event s3 put --bucket munns-test --key somephoto.jpeg
    {
      "Records": [
        {
          "eventVersion": "2.0", 
          "eventTime": "1970-01-01T00:00:00.000Z", 
          "requestParameters": {
            "sourceIPAddress": "127.0.0.1"
          }, 
          "s3": {
            "configurationId": "testConfigRule", 
            "object": {
              "eTag": "0123456789abcdef0123456789abcdef", 
              "sequencer": "0A1B2C3D4E5F678901", 
              "key": "somephoto.jpeg", 
              "size": 1024
            }, 
            "bucket": {
              "arn": "arn:aws:s3:::munns-test", 
              "name": "munns-test", 
              "ownerIdentity": {
                "principalId": "EXAMPLE"
              }
            }, 
            "s3SchemaVersion": "1.0"
          }, 
          "responseElements": {
            "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH", 
            "x-amz-request-id": "EXAMPLE123456789"
          }, 
          "awsRegion": "us-east-1", 
          "eventName": "ObjectCreated:Put", 
          "userIdentity": {
            "principalId": "EXAMPLE"
          }, 
          "eventSource": "aws:s3"
        }
      ]
    }

    You can then use the AWS SAM CLI local invoke command and pipe in the output from the previous command. Or, you can save the output from the previous command to a file and then pass in a reference to the file’s name and location with the -e flag. Here is an example of the pipe event method:

    sam local generate-event s3 put --bucket munns-test --key somephoto.jpeg | sam local invoke myFunction
    2019-02-19 18:45:53 Reading invoke payload from stdin (you can also pass it from file with --event)
    2019-02-19 18:45:53 Found credentials in shared credentials file: ~/.aws/credentials
    2019-02-19 18:45:53 Invoking index.handler (python2.7)
    
    Fetching lambci/lambda:python2.7 Docker container image......
    2019-02-19 18:45:53 Mounting /home/ec2-user/environment/forblog as /var/task:ro inside runtime container
    START RequestId: 7c14eea1-96e9-4b7d-ab54-ed1f50bd1a34 Version: $LATEST
    {"Records": [{"eventVersion": "2.0", "eventTime": "1970-01-01T00:00:00.000Z", "requestParameters": {"sourceIPAddress": "127.0.0.1"}, "s3": {"configurationId": "testConfigRule", "object": {"eTag": "0123456789abcdef0123456789abcdef", "key": "somephoto.jpeg", "sequencer": "0A1B2C3D4E5F678901", "size": 1024}, "bucket": {"ownerIdentity": {"principalId": "EXAMPLE"}, "name": "munns-test", "arn": "arn:aws:s3:::munns-test"}, "s3SchemaVersion": "1.0"}, "responseElements": {"x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH", "x-amz-request-id": "EXAMPLE123456789"}, "awsRegion": "us-east-1", "eventName": "ObjectCreated:Put", "userIdentity": {"principalId": "EXAMPLE"}, "eventSource": "aws:s3"}]}
    END RequestId: 7c14eea1-96e9-4b7d-ab54-ed1f50bd1a34
    REPORT RequestId: 7c14eea1-96e9-4b7d-ab54-ed1f50bd1a34 Duration: 1 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 14 MB
    
    "Success! Parsed Events"

    You can see the full output of your function in the logs that follow the invoke command. In this example, the Python function prints out the event payload and then exits.

With the AWS SAM CLI, you can pass in valid test payloads that interface with data in other AWS services. You can also have your Lambda function talk to other AWS resources that exist in your account, for example Amazon DynamoDB tables, Amazon S3 buckets, and so on. You could also test an API interface using the local start-api command, provided that you have configured your AWS SAM template with events of the API type. Follow the full instructions for setting up and configuring the AWS SAM CLI in Installing the AWS SAM CLI. Find the full syntax guide for AWS SAM templates in the AWS Serverless Application Model documentation.

Testing in the Lambda console

After you have deployed your functions, either following the start of the Update/Create phase or with the Opt-In layer added, you can test them in the Lambda console.

  1. In the Lambda console, select the function to test.
  2. Select a test event and choose Test.
  3. If no test event exists, choose Configure test events.
    1. Choose Event template and select the relevant invocation service from which to test.
    2. Name the test event.
    3. Modify the event payload for your specific function.
    4. Choose Create and then return to step 2.

The results from the test are displayed.
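
If you would rather script this check than click through the console, a small boto3 sketch (function name and payload are placeholders) can invoke the function with a test event and print the response:

import boto3
import json

lambda_client = boto3.client('lambda')

# Invoke a placeholder function with a minimal mock event
response = lambda_client.invoke(
    FunctionName='my-function',
    Payload=json.dumps({'test': 'event'})
)
print(response['StatusCode'])
print(response['Payload'].read().decode('utf-8'))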

Conclusion

With Lambda and Lambda@Edge, AWS has allowed developers to focus on just application code, without the need to think about the work involved in managing the actual servers that run the code. We believe that the mechanisms provided and the processes described in this post allow you to easily test and update your functions for this new execution environment.

Some of you may have questions about this process, and we are ready to help you. Please contact us through AWS Support, the AWS forums, or AWS account teams.

Creating an AWS Batch environment for mixed CPU and GPU genomics workflows

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/creating-an-aws-batch-environment-for-mixed-cpu-and-gpu-genomics-workflows/

This post is courtesy of Lee Pang – AWS Technical Business Development 

I recently worked with a customer who needed to process a bunch of raw sequence files (FastQs) into Hi-C format (*.hic), which is used for the structural analysis of DNA/chromatin loops and sequence accessibility. The tooling they were interested in using was the Juicer suite and they needed a minimal workflow:

  • Align the sample to the reference using the juicer CLI utility.
  • Annotate loops using the HiCCUPS algorithm from the juicer-tools library.

Because they had many files to process, they wanted to do this as scalably as possible. The juicer step of the workflow was CPU and memory-intensive, while the HiCCUPS step needed GPU acceleration. So, they were interested in using AWS Step Functions and AWS Batch.

Since its launch, AWS Batch has enabled you to create scalable compute environments for processing a mixture of CPU- and memory-intensive jobs. This covers the needs of the majority of genomics workflows. So, how do you create a genomics workflow environment using AWS Batch that also includes GPUs? Thankfully, AWS Batch recently announced support for GPU resources!

In this post, I show you how to use these new features to execute mixed CPU and GPU genomics workflows. By the end of this post, you will be able to build the architecture shown below.

Configuring AWS Batch for running CPU and GPU jobs

To handle a mixture of CPU and GPU jobs, the recommended strategy is to create multiple compute environments and job queues:

  • GPU-only resources
    • Compute environments (using the ECS GPU Optimized AMI)
    • Spot Instances of the P2 and P3 instance family
    • On-Demand Instances of the P2 and P3 instance family
    • Job queue for GPU compute environments
  • CPU-only resources
    • Compute environments (using the default ECS Optimized AMI)
    • Spot Instances of the “optimal” instance family
    • On-Demand Instances of the “optimal” instance family
    • Job queue for CPU compute environments

With the above in place, you then point CPU jobs to the “CPU” queue and GPU jobs to the “GPU” queue.

Notice that CPU and GPU resources are kept separate with queues and compute environments. I don’t recommend creating a compute environment or job queue that mixes CPU and GPU optimized instance types. In a mixed compute environment or queue, there is a chance that CPU jobs could be placed on GPU instances when no GPU jobs are scheduled. This could result in few (or no) GPU instances available when GPU jobs must be run.

Create a GPU compute environment

Amazon EC2 has a wide variety of instance types. This includes the P3 family of instances that enable GPU-accelerated computing. Previously, the best option for using GPUs in AWS Batch was to create a compute environment based on the publicly available Deep Learning AMI, which is used by AI/ML services such as Amazon SageMaker and has support for running containers with NVIDIA/CUDA drivers pre-installed.

Earlier this year, the Amazon ECS team announced the availability of the ECS GPU-optimized AMI. It’s essentially the same as the existing Amazon ECS-optimized Amazon Linux 2 AMI but with pre-installed capabilities to provide Docker containers with access to GPU acceleration. For more information about what’s included, see Amazon ECS-optimized AMIs.

The key point is that this is a much more lightweight solution, which makes creating GPU-specific AWS Batch compute environments much easier.

Creating an AWS Batch compute environment specifically for GPU jobs is similar to creating one for CPU jobs. The key difference is that you select only GPU instance families for the instance types.

For compute environments that use the P2, P3, and P3dn instance families, AWS Batch automatically associates the ECS GPU-optimized AMI. You don’t have to create a custom AMI for GPU jobs to run on these instances.

The G2 and G3 families use a different type of GPU and have different drivers. Compute environments that use G2 and G3 families need a custom AMI to take advantage of acceleration. Otherwise, they default to the ECS-optimized AMI.

To create a GPU-enabled compute environment with the AWS CLI, create a file called gpu-ce.json with the following contents:

{
    "computeEnvironmentName": "gpu",
    "type": "MANAGED",
    "state": "ENABLED",
    "serviceRole": "arn:aws:iam::<account-id>:role/service-role/AWSBatchServiceRole",
    "computeResources": {
        "type": "EC2",
        "subnets": [
            "<subnet-id-1>",
            "<subnet-id-2>",
            "<subnet-id-3>"
        ],
        "tags": {
            "Name": "batch-gpu-worker"
        },
        "desiredvCpus": 0,
        "minvCpus": 0,
        "instanceTypes": [
            "p3",
            "p2"
        ],
        "instanceRole": "arn:aws:iam::<account-id>:instance-profile/ecsInstanceRole",
        "maxvCpus": 256,
        "securityGroupIds": [
            "<security-group-1>",
            "<security-group-2>"
        ],
        "ec2KeyPair": "<keypair-name>"
    }
}

From the command line, run the following:

aws batch create-compute-environment --cli-input-json file://gpu-ce.json

Create a GPU job queue

When you have a GPU compute environment, you can associate it with a dedicated GPU job queue.

From the command line, run the following:

aws batch create-job-queue \
    --job-queue-name gpu \
    --state ENABLED \
    --priority 100 \
    --compute-environment-order order=1,computeEnvironment=gpu

Specifying GPU resources in AWS Batch jobs

Creating job definitions in AWS Batch has not changed much. However, now there’s an additional field under Resource requirements that lets you specify how many GPUs the job should use.

The JSON for registering a job definition with GPU requirements looks like the following:

{
    "jobDefinitionName": "hiccups", 
    "type": "container", 
    "parameters": {
        "OutputS3Prefix": "s3://<bucket-name>/juicer/HIC003", 
        "InputHICS3Path": "s3://<bucket-name>/juicer/HIC003/aligned/inter.hic"
    }, 
    "containerProperties": {
        "mountPoints": [], 
        "image": "<docker-image-repository>/juicer-tools:latest", 
        "environment": [], 
        "vcpus": 8, 
        "command": [
            "hiccups", 
            "Ref::InputHICS3Path", 
            "Ref::OutputS3Prefix"
        ], 

        /* BEGIN NEW STUFF (delete comment before use) */
        "resourceRequirements" : [
            {
                "type" : "GPU",
                "value" : "1"
            }
        ],
        /* END NEW STUFF (delete comment before use) */
        
        "volumes": [], 
        "memory": 60000, 
        "ulimits": []
    }
}
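
With the definition registered, submitting a job against the GPU queue is straightforward. The following boto3 sketch is illustrative; the job name is arbitrary and the parameter values mirror the placeholders in the definition above:

import boto3

batch = boto3.client('batch')

# Submit a HiCCUPS job to the dedicated GPU job queue
response = batch.submit_job(
    jobName='hiccups-HIC003',
    jobQueue='gpu',
    jobDefinition='hiccups',
    parameters={
        'OutputS3Prefix': 's3://<bucket-name>/juicer/HIC003',
        'InputHICS3Path': 's3://<bucket-name>/juicer/HIC003/aligned/inter.hic'
    }
)
print(response['jobId'])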

Containerization considerations

To ensure that your containerized task can use GPU acceleration, use nvidia/cuda base images when building the container image for your job. For example, for a CentOS-based image with CUDA 9.2, your Dockerfile should have the following:

FROM nvidia/cuda:9.2-devel-centos7

This could be at the top if you are building the container entirely from scratch, or later if you are using a multi-stage build.

In this case, I already had a CentOS-based image for the juicer utility that I could recycle for juicer-tools. So my Dockerfile looked like the following:

FROM juicer AS base
FROM nvidia/cuda:9.2-devel-centos7

COPY --from=base /opt/juicer /opt/juicer

RUN yum install -y awscli
RUN yum install -y java-1.8.0-openjdk

WORKDIR /opt/juicer/scripts
COPY juicer-tools.aws.sh .

WORKDIR /opt/juicer/work
ENTRYPOINT ["/opt/juicer/scripts/juicer-tools.aws.sh"]

Running a mixed CPU / GPU workflow

To run a workflow that contains a mixture of CPU- and GPU-based jobs, you can use AWS Step Functions. This blog channel previously covered how Step Functions and AWS Batch can be combined to run genomics workflows on AWS. Also, Step Functions and AWS Batch are now more tightly integrated, making scalable workflow solutions much easier to build.

With all the AWS Batch resources in place, it is a matter of pointing each task at the job queue with the right compute resources.

The solution I put together for the customer with the Juicer suite resulted in the following state machine:

{
    "Comment": "State machine for FASTQ to HIC with annotation using juicer and hiccups",
    "StartAt": "JuicerTask",
    "States": {
        "JuicerTask": {
            "Type": "Task",
            "InputPath": "$",
            "ResultPath": "$.juicer.status",
            "Resource": "arn:aws:states:::batch:submitJob.sync",
            "Parameters": {
                "JobDefinition": "arn:aws:batch:<region>:<account_number>:job-definition/juicer:2",
                "JobName": "juicer",
                "JobQueue": "arn:aws:batch:<region>:<account_number>:job-queue/cpu",
                "Parameters.$": "$.juicer.parameters"
            },
            "Next": "HiccupsTask"
        },
        "HiccupsTask": {
            "Type": "Task",
            "InputPath": "$",
            "ResultPath": "$.hiccups.status",
            "Resource": "arn:aws:states:::batch:submitJob.sync",
            "Parameters": {
                "JobDefinition": "arn:aws:batch:<region>:<account_number>:job-definition/juicer-tools:2",
                "JobName": "hiccups",
                "JobQueue": "arn:aws:batch:<region>:<account_number>:job-queue/gpu",
                "Parameters.$": "$.hiccups.parameters"
            },
            "End": true
        }
    }
}

JuicerTask is submitted to the cpu job queue, while HiccupsTask is submitted to the gpu job queue.
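
To run the workflow, you start an execution of the state machine with input that carries the parameters for each task. Here is a hedged boto3 sketch; the state machine name and the contents of each parameters block are assumptions for illustration:

import boto3
import json

sfn = boto3.client('stepfunctions')

# The input shape matches the Parameters.$ paths in the definition above
workflow_input = {
    'juicer': {'parameters': {}},    # placeholder juicer job parameters
    'hiccups': {'parameters': {}}    # placeholder hiccups job parameters
}

response = sfn.start_execution(
    stateMachineArn='arn:aws:states:<region>:<account_number>:stateMachine:fastq-to-hic',
    input=json.dumps(workflow_input)
)
print(response['executionArn'])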

Conclusion

With the ECS GPU-optimized AMI and AWS Batch support for GPU resources, you can easily build a scalable solution for running genomics workflows with both CPU and GPU resources.

Build on!

From Poll to Push: Transform APIs using Amazon API Gateway REST APIs and WebSockets

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/from-poll-to-push-transform-apis-using-amazon-api-gateway-rest-apis-and-websockets/

This post is courtesy of Adam Westrich – AWS Principal Solutions Architect and Ronan Prenty – Cloud Support Engineer

Want to deploy a web application and give a large number of users controlled access to data analytics? Or maybe you have a retail site that is fulfilling purchase orders, or an app that enables users to connect data from potentially long-running third-party data sources. Similar use cases exist across every imaginable industry and entail sending long-running requests to perform a subsequent action. For example, a healthcare company might build a custom web portal for physicians to connect and retrieve key metrics from patient visits.

This post is aimed at optimizing the delivery of information without needing to poll an endpoint. First, we outline the current challenges with the consumer polling pattern and alternative approaches to solve these information delivery challenges. We then show you how to build and deploy a solution in your own AWS environment.

Here is a glimpse of the sample app that you can deploy in your own AWS environment:

What’s the problem with polling?

Many customers need to implement the delivery of long-running activities (such as a query to a data warehouse or data lake, or retail order fulfillment). They may have developed a polling solution that looks similar to the following:

  1. POST sends a request.
  2. GET returns an empty response.
  3. Another… GET returns an empty response.
  4. Yet another… GET returns an empty response.
  5. Finally, GET returns the data for which you were looking.

The challenges of traditional polling methods

  • Unnecessary chattiness and cost due to polling for result sets—Every time your frontend polls an API, it’s adding costs by leveraging infrastructure to compute the result. Empty polls are just wasteful!
  • Hampered mobile battery life—One of the top contributors of apps that eat away at battery life is excessive polling. Make sure that your app isn’t on your users’ Top App Battery Usage list-of-shame that could result in deletion.
  • Delayed data arrival due to polling schedule—Some approaches to polling include an incremental backoff to limit the number of empty polls. This sometimes results in a delay between data readiness and data arrival.

But what about long polling?

  • User request deadlocks can hamper application performance—Long synchronous user responses can lead to unexpected user wait times or UI deadlocks, which can affect mobile devices especially.
  • Memory leaks and consumption could bring your app down—Keeping long-running task queries open may overburden your backend and create failure scenarios, which may bring down your app.
  • HTTP default timeouts across browsers may result in inconsistent client experience—These timeouts vary across browsers, and can lead to an inconsistent experience across your end users. Depending on the size and complexity of the requests, processing can last longer than many of these timeouts and take minutes to return results.

Instead, create an event-driven architecture and move your APIs from poll to push.

Asynchronous push model

To create optimal user experiences, frontend developers often strive to build progressive and reactive interfaces. Users can interact with frontend objects (for example, push buttons) with little lag when sending requests and receiving data. But frontend developers also want users to receive timely data, without sacrificing additional user actions or performing unnecessary processing.

The birth of microservices and the cloud over the past several years has enabled frontend developers and backend service designers to think about these problems in an asynchronous manner. This enables the virtually unlimited resources of the cloud to choreograph data processing. It also enables clients to benefit from progressive and reactive user experiences.

This is a fresh alternative to the synchronous design pattern, which often relies on client consumers to act as the conductor for all user requests. The following diagram compares the flow of communication between patterns.

(Figure: comparison of communication patterns)

Asynchronous orchestration can be easily choreographed using the workflow definition, monitoring, and tracking console with AWS Step Functions state machines. Break up services into functions with AWS Lambda and track executions within a state diagram, like the following:

With this approach, consumers send a request, which initiates downstream distributed processing. The subsequent steps are invoked according to the state machine workflow and each execution can be monitored.

Okay, but how does the consumer get the result of what they’re looking for?

Multiple approaches to this problem

There are different approaches for consumers to retrieve their resulting data. In order to create the optimal solution, there are several questions that a service owner may need to ask.

Is there known trust with the client (such as another microservice)?

If the answer is Yes, one approach is to use Amazon SNS. That way, consumers can subscribe to topics and have data delivered using email, SMS, or even HTTP/HTTPS as an event subscriber (that is, webhooks).

With webhooks, the consumer creates an endpoint where the service provider can issue a callback using a small amount of consumer-side resources. The consumer is waiting for an incoming request to facilitate downstream processing.

  1. Send a request.
  2. Subscribe the consumer to the topic.
  3. Open the endpoint.
  4. SNS sends the POST request to the endpoint.
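
As a sketch of step 2, subscribing a consumer-owned HTTPS endpoint to a topic takes one boto3 call; the topic ARN and endpoint URL here are placeholders:

import boto3

sns = boto3.client('sns')

# Subscribe the consumer's webhook endpoint to the results topic
sns.subscribe(
    TopicArn='arn:aws:sns:<region>:<account_number>:job-results',
    Protocol='https',
    Endpoint='https://consumer.example.com/webhook'
)

SNS then sends a confirmation message that the endpoint must acknowledge before notifications are delivered.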

If the trust answer is No, then clients may not be able to open a lightweight HTTP webhook after sending the initial request. In that case, consider an alternative framework.

Are you restricted to only using a REST protocol?

Some applications can only run HTTP requests. These scenarios include the age of the technology or browser for end users, and potential security requirements that may block other protocols.

In these scenarios, customers may use a GET method as a best practice, but we still advise avoiding polling. In these scenarios, some design questions may be: Does data readiness happen at a predefined time or duration interval? Or can the user experience tolerate time between data readiness and data arrival?

If the answer to both of these is Yes, then consider trying to send GET calls to your RESTful API one time. For example, if a job averages 10 minutes, make your GET call at 10 minutes after the submission. Sounds simple, right? Much simpler than polling.

Should I use GraphQL, a WebSocket API, or another framework?

Each framework has tradeoffs.

If you want a more flexible query schema, you may gravitate to GraphQL, which follows a “data-driven UI” approach. If data drives the UI, then GraphQL may be the best solution for your use case.

AWS AppSync is a serverless GraphQL engine that supports the heavy lifting of these constructs. It offers functionality such as AWS service integration, offline data synchronization, and conflict resolution. With GraphQL, there’s a construct called Subscriptions, where clients can subscribe to event-based notifications upon a data change.

Amazon API Gateway makes it easy for developers to deploy secure APIs at scale. With the recent introduction of API Gateway WebSocket APIs, web clients, mobile clients, and backend services can communicate over an established WebSocket connection. This also allows clients to be more reactive to data updates and only do work one time after an update has been received over the WebSocket connection.

The typical frontend design approach is to create a UI component that is updated when the results of the given procedure are complete. This avoids a complete webpage refresh and improves the customer’s user experience.

Because many companies have elected to use the REST framework for creating API-driven, tightly bound service contracts, a RESTful interface can be used to validate and receive the request. It can provide a status endpoint used to deliver status. It also provides additional flexibility in delivering the result to a variety of clients, along with the WebSocket API.

Poll-to-push solution with API Gateway

Imagine a scenario where you want to be updated as soon as the data is created. Instead of the traditional polling methods described earlier, use an API Gateway WebSocket API. That pushes new data to the client as it’s created, so that it can be rendered on the client UI.

Alternatively, a WebSocket server can be deployed on Amazon EC2. With this approach, your server is always running to accept and maintain new connections. In addition, you manage the scaling of the instance manually at times of high demand.

By using an API Gateway WebSocket API in front of Lambda, you don’t need a machine to stay always on, eating away at your project budget. API Gateway handles connections and invokes Lambda whenever there’s a new event. Scaling is handled on the service side. To update our connected clients from the backend, we can use the API Gateway callback URL. The AWS SDKs make communication from the backend easy. As an example, see the Boto3 sample of post_to_connection:

import boto3 
#Use a layer or deployment package to 
#include the latest boto3 version.
...
apiManagement = boto3.client('apigatewaymanagementapi', region_name={{api_region}},
                      endpoint_url={{api_url}})
...
response = apiManagement.post_to_connection(Data={{message}},ConnectionId={{connectionId}})
...

Solution example

To create this more optimized solution, create a simple web application that enables a user to make a request against a large dataset. It returns the results in a flat file over WebSocket. In this case, we’re querying, via Amazon Athena, a data lake on S3 populated with AWS Twitter sentiment (Twitter: @awscloud or #awsreinvent). However, this solution could apply to any data store, data mart, or data warehouse environment, or the long-running return of data for a response.

For the frontend architecture, create this web application using a JavaScript framework (such as jQuery). Use API Gateway to accept a REST API for the data and then open a WebSocket connection on the client to facilitate the return of results:

(Figure: poll-to-push solution architecture)

  1. The client sends a REST request to API Gateway. This invokes a Lambda function that starts the Step Functions state machine execution. The function also returns a task token for the open connection activity worker. The function then returns the execution ARN and task token to the client.
  2. Using the data returned by the REST request, the client connects to the WebSocket API and sends the task token to the WebSocket connection.
  3. The WebSocket notifies the Step Functions state machine that the client is connected. The Lambda function completes the OpenConnection task through validating the task token and sending a success message.
  4. After RunAthenaQuery and OpenConnection are successful, the state machine updates the connected client over the WebSocket API that their long-running job is complete. It uses the REST API call post_to_connection.
  5. The client receives the update over their WebSocket connection using the IssueCallback Lambda function, with the callback URL from the API Gateway WebSocket API.

In this example application, the data response is an S3 presigned URL composed of results from Athena. You can then configure your frontend to use that S3 link to download to the client.
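
Generating that presigned URL from the backend takes a single boto3 call; the bucket and key names in this sketch are placeholders:

import boto3

s3 = boto3.client('s3')

# Create a short-lived link to the Athena results file
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'athena-results-bucket', 'Key': 'results/query-id.csv'},
    ExpiresIn=3600  # the link expires after one hour
)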

Why not just open the WebSocket API for the request?

While this approach can work, we advise against it for this use case. Break the required interfaces for this use case into three processes:

  1. Submit a request.
  2. Get the status of the request (to detect failure).
  3. Deliver the results.

For the submit request interface, RESTful APIs have strong controls to ensure that user requests for data are validated and given guardrails for the intended query purpose. This helps prevent rogue requests and unforeseen impacts to the analytics environment, especially when exposed to a large number of users.

With this example solution, you’re restricting the data requests to specific countries in the frontend JavaScript. Using the RESTful POST method for API requests enables you to validate data as query string parameters, such as the following:

https://<apidomain>.amazonaws.com/Demo/CreateStateMachineAndToken?Country=France

API Gateway models and mapping templates can also be used to validate or transform request payloads at the API layer before they hit the backend. Use models to ensure that the structure or contents of the client payload are the same as you expect. Use mapping templates to transform the payload sent by clients to another format, to be processed by the backend.
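
As a sketch of the model approach, you could register a minimal JSON schema with boto3; the API ID and schema contents here are illustrative only:

import boto3
import json

apigateway = boto3.client('apigateway')

# A minimal schema requiring a 'Country' string in the request payload
schema = {
    '$schema': 'http://json-schema.org/draft-04/schema#',
    'type': 'object',
    'properties': {'Country': {'type': 'string'}},
    'required': ['Country']
}

apigateway.create_model(
    restApiId='<api-id>',
    name='CountryRequest',
    contentType='application/json',
    schema=json.dumps(schema)
)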

This REST validation framework can also be used to detect header information on WebSocket browser compatibility. While using WebSocket has many advantages, not all browsers support it, especially older browsers. Therefore, a REST API request layer can pass this browser metadata and determine whether a WebSocket API can be opened.

Because a REST interface is already created for submitting the request, you can easily add another GET method if the client must query the status of the Step Functions state machine. That might be the case if a health check in the request is taking longer than expected. You can also add another GET method as an alternative access method for REST-only compatible clients.

If low-latency request and retrieval are an important characteristic of your API and there aren’t any browser-compatibility risks, use a WebSocket API with JSON model selection expressions to protect your backend with a schema.

In the spirit of picking the best tool for the job, use a REST API for the request layer and a WebSocket API to listen for the result.

This solution, although secure, is an example for how a non-polling solution can work on AWS. At scale, it may require refactoring due to the cross-talk at high concurrency that may result in client resubmissions.

To discover, deploy, and extend this solution into your own AWS environment, follow the PollToPush instructions in the AWS Serverless Application Repository.

Conclusion

When application consumers poll for long-running tasks, it can be a wasteful, detrimental, and costly use of resources. This post outlined multiple ways to refactor the polling method. Use API Gateway to host a RESTful interface, Step Functions to orchestrate your workflow, Lambda to perform backend processing, and an API Gateway WebSocket API to push results to your clients.

Deploying a personalized API Gateway serverless developer portal

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/deploying-a-personalized-api-gateway-serverless-developer-portal/

This post is courtesy of Drew Dresser, Application Architect – AWS Professional Services

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. Customers of these APIs often want a website to learn and discover APIs that are available to them. These customers might include front-end developers, third-party customers, or internal system engineers. To produce such a website, we have created the API Gateway serverless developer portal.

The API Gateway serverless developer portal (developer portal or portal, for short) is an application that you use to make your API Gateway APIs available to your customers by enabling self-service discovery of those APIs. Your customers can use the developer portal to browse API documentation, register for, and immediately receive their own API key that they can use to build applications, test published APIs, and monitor their own API usage.

Over the past few months, the team has been hard at work contributing to the open source project, available on GitHub. The developer portal was relaunched on October 29, 2018, and the team will continue to push features and take customer feedback from the open source community. Today, we’re happy to highlight some key functionality of the new portal.

Benefits for API publishers

API publishers use the developer portal to expose the APIs that they manage. As an API publisher, you need to set up, maintain, and enable the developer portal. The new portal has the following benefits:

Benefits for API consumers

API consumers use the developer portal as traditional application users. An API consumer needs to understand the APIs being published. API consumers might be front-end developers, distributed system engineers, or third-party customers. The new developer portal comes with the following benefits for API consumers:

  • Explore – API consumers can quickly page through lists of APIs. When they find one they’re interested in, they can immediately see documentation on that API.
  • Learn – API consumers might need to drill down deeper into an API to learn its details. They want to learn how to form requests and what they can expect as a response.
  • Test – Through the developer portal, API consumers can get an API key and invoke the APIs directly. This enables developers to develop faster and with more confidence.

Architecture

The developer portal is a completely serverless application. It leverages Amazon API Gateway, Amazon Cognito User Pools, AWS Lambda, Amazon DynamoDB, and Amazon S3. Serverless architectures enable you to build and run applications without needing to provision, scale, and manage any servers. The developer portal is broken down in to multiple microservices, each with a distinct responsibility, as shown in the following image.

Identity management for the developer portal is performed by Amazon Cognito and a Lambda function in the Login & Registration microservice. An Amazon Cognito User Pool is configured out of the box to enable users to register and login. Additionally, you can deploy the developer portal to use a UI hosted by Amazon Cognito, which you can customize to match your style and branding.

Requests are routed to static content served from Amazon S3 and built using React. The React app communicates with the Lambda backend via API Gateway. The Lambda function is built using the aws-serverless-express library and contains the business logic behind the APIs. The business logic of the web application queries and adds data to the API Key Creation and Catalog Update microservices.

To maintain the API catalog, the Catalog Update microservice uses an S3 bucket and a Lambda function. When an API’s Swagger file is added or removed from the bucket, the Lambda function triggers and maintains the API catalog by updating the catalog.json file in the root of the S3 bucket.
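
A simplified sketch of that pattern, not the actual portal code, might look like the following Lambda handler, which lists the catalog/ prefix and rewrites catalog.json; the bucket name and catalog format are assumptions:

import boto3
import json

s3 = boto3.client('s3')
BUCKET = '<artifacts-bucket>'  # placeholder bucket name

def handler(event, context):
    # List every Swagger file under the catalog/ prefix
    objects = s3.list_objects_v2(Bucket=BUCKET, Prefix='catalog/')
    apis = [obj['Key'] for obj in objects.get('Contents', [])
            if obj['Key'].endswith('.json')]

    # Rewrite the catalog file at the root of the bucket
    s3.put_object(
        Bucket=BUCKET,
        Key='catalog.json',
        Body=json.dumps({'apis': apis})
    )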

To manage the mapping between API keys and customers, the application uses the API Key Creation microservice. The service updates API Gateway with API key creations or deletions and then stores the results in a DynamoDB table that maps customers to API keys.

Deploying the developer portal

You can deploy the developer portal using AWS SAM, the AWS SAM CLI, or the AWS Serverless Application Repository. To deploy with AWS SAM, you can simply clone the repository and then deploy the application using two commands from your CLI. For detailed instructions for getting started with the portal, see Use the Serverless Developer Portal to Catalog Your API Gateway APIs in the Amazon API Gateway Developer Guide.

Alternatively, you can deploy using the AWS Serverless Application Repository as follows:

  1. Navigate to the api-gateway-dev-portal application and choose Deploy in the top right.
  2. On the Review page, for ArtifactsS3BucketName and DevPortalSiteS3BucketName, enter globally unique names. Both buckets are created for you.
  3. To deploy the application with these settings, choose Deploy.
  4. After the stack is complete, get the developer portal URL by choosing View CloudFormation Stack. Under Outputs, choose the URL in Value.

The URL opens in your browser.

You now have your own serverless developer portal application that is deployed and ready to use.

Publishing a new API

With the developer portal application deployed, you can publish your own API to the portal.

To get started:

  1. Create the PetStore API, which is available as a sample API in Amazon API Gateway. The API must be created and deployed and include a stage.
  2. Create a Usage Plan, which is required so that API consumers can create API keys in the developer portal. The API key is used to test actual API calls.
  3. On the API Gateway console, navigate to the Stages section of your API.
  4. Choose Export.
  5. For Export as Swagger + API Gateway Extensions, choose JSON. Save the file with the following format: apiId_stageName.json.
  6. Upload the file to the S3 bucket dedicated for artifacts in the catalog path. In this post, the bucket is named apigw-dev-portal-artifacts. To perform the upload, run the following command.
    aws s3 cp apiId_stageName.json s3://yourBucketName/catalog/apiId_stageName.json

Uploading the file to the artifacts bucket with a catalog/ key prefix automatically makes it appear in the developer portal.

This might be familiar. It’s your PetStore API documentation displayed in the OpenAPI format.

With an API deployed, you’re ready to customize the portal’s look and feel.

Customizing the developer portal

Adding a customer’s own look and feel to the developer portal is easy, and it creates a great user experience. You can customize the domain name, text, logo, and styling. For a more thorough walkthrough of customizable components, see Customization in the GitHub project.

Let’s walk through a few customizations to make your developer portal more familiar to your API consumers.

Customizing the logo and images

To customize logos, images, or content, you need to modify the contents of the your-prefix-portal-static-assets S3 bucket. You can edit files using the CLI or the AWS Management Console.

Start customizing the portal by using the console to upload a new logo in the navigation bar.

  1. Upload the new logo to your bucket with a key named custom-content/nav-logo.png.
    aws s3 cp {myLogo}.png s3://yourPrefix-portal-static-assets/custom-content/nav-logo.png
  2. Modify object permissions so that the file is readable by everyone because it’s a publicly available image. The new navigation bar looks something like this:

Another neat customization that you can make is to a particular API and stage image. Maybe you want your PetStore API to have a dog picture to represent the friendliness of the API. To add an image:

  1. Use the command line to copy the image directly to the S3 bucket location.
    aws s3 cp my-image.png s3://yourPrefix-portal-static-assets/custom-content/api-logos/apiId-stageName.png
  2. Modify object permissions so that the file is readable by everyone.

Customizing the text

Next, make sure that the text of the developer portal welcomes your pet-friendly customer base. The YAML files in the static assets bucket under /custom-content/content-fragments/ determine the portal’s text content.

To edit the text:

  1. On the AWS Management Console, navigate to the website content S3 bucket and then navigate to /custom-content/content-fragments/.
  2. Home.md is the content displayed on the home page, APIs.md controls the tab text on the navigation bar, and GettingStarted.md contains the content of the Getting Started tab. All three files are written in markdown. Download one of them to your local machine so that you can edit the contents. The following image shows Home.md edited to contain custom text:
  3. After editing and saving the file, upload it back to S3, which results in a customized home page. The following image reflects the configuration changes in Home.md from the previous step:

Customizing the domain name

Finally, many customers want to give the portal a domain name that they own and control.

To customize the domain name:

  1. Use AWS Certificate Manager to request and verify a managed certificate for your custom domain name. For more information, see Request a Public Certificate in the AWS Certificate Manager User Guide.
  2. Copy the Amazon Resource Name (ARN) so that you can pass it to the developer portal deployment process. That process now includes the certificate ARN and a property named UseRoute53Nameservers. If the property is set to true, the template creates a hosted zone and record set in Amazon Route 53 for you. If the property is set to false, the template expects you to use your own name server hosting.
  3. If you deployed using the AWS Serverless Application Repository, navigate to the Application page and deploy the application along with the certificate ARN.

After the developer portal is deployed and your CNAME record has been added, the website is accessible from the custom domain name as well as the new Amazon CloudFront URL.

Customizing the logo, text content, and domain name are great tools to make the developer portal feel like an internally developed application. In this walkthrough, you completely changed the portal’s appearance to enable developers and API consumers to discover and browse APIs.

Conclusion

The developer portal is available to use right away. Support and feature enhancements are tracked in the public GitHub. You can contribute to the project by following the Code of Conduct and Contributing guides. The project is open-sourced under the Amazon Open Source Code of Conduct. We plan to continue to add functionality and listen to customer feedback. We can’t wait to see what customers build with API Gateway and the API Gateway serverless developer portal.

ICYMI: Serverless Q4 2018

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/icymi-serverless-q4-2018/

This post is courtesy of Eric Johnson, Senior Developer Advocate – AWS Serverless

Welcome to the fourth edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

This edition of ICYMI includes all announcements from AWS re:Invent 2018!

If you didn’t see them, check our Q1 ICYMI, Q2 ICYMI, and Q3 ICYMI posts for what happened then.

So, what might you have missed this past quarter? Here’s the recap.

New features

AWS Lambda introduced the Lambda runtime API and Lambda layers, which enable developers to bring their own runtime and share common code across Lambda functions. With the release of the runtime API, we can now support runtimes from AWS partners such as the PHP runtime from Stackery and the Erlang and Elixir runtimes from Alert Logic. Using layers, partners such as Datadog and Twistlock have also simplified the process of using their Lambda code libraries.

To meet the demand of larger Lambda payloads, Lambda doubled the payload size of asynchronous calls to 256 KB.

In early October, Lambda also raised the maximum execution duration, enabling Lambda functions to run for up to 15 minutes.

Lambda also rolled out native support for Ruby 2.5 and Python 3.7.

You can now process Amazon Kinesis streams up to 68% faster with AWS Lambda support for Kinesis Data Streams enhanced fan-out and HTTP/2 for faster streaming.

Lambda also released a new Application view in the console. It’s a high-level view of all of the resources in your application. It also gives you a quick view of deployment status with the ability to view service metrics and custom dashboards.

Application Load Balancers added support for targeting Lambda functions. ALBs can provide a simple HTTP/S front end to Lambda functions. ALB features such as host- and path-based routing are supported to allow flexibility in triggering Lambda functions.

Amazon API Gateway added support for AWS WAF. You can use AWS WAF for your Amazon API Gateway APIs to protect from attacks such as SQL injection and cross-site scripting (XSS).

API Gateway has also improved parameter support by adding support for multi-value parameters. You can now pass multiple values for the same key in the header and query string when calling the API. Returning multiple headers with the same name in the API response is also supported. For example, you can send multiple Set-Cookie headers.
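
For instance, with a Lambda proxy integration, a function can return multiple headers with the same name through the multiValueHeaders field of its response. A minimal sketch of such a response payload:

{
  "statusCode": 200,
  "multiValueHeaders": {
    "Set-Cookie": ["session=abc123; Path=/", "theme=dark; Path=/"]
  },
  "body": "{\"message\": \"ok\"}"
}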

In October, API Gateway relaunched the Serverless Developer Portal. It provides a catalog of published APIs and associated documentation that enable self-service discovery and onboarding. You can customize it for branding through either custom domain names or logo/styling updates. In November, we made it easier to launch the developer portal from the Serverless Application Repository.

In the continuous effort to decrease customer costs, API Gateway introduced tiered pricing. The tiered pricing model lowers the cost of API Gateway at scale, with an API Requests price as low as $1.51 per million requests at the highest tier.

Last but definitely not least, API Gateway released support for WebSocket APIs in mid-December as a final holiday gift. With this new feature, developers can build bidirectional communication applications without having to provision and manage any servers. This has been a long-awaited and highly anticipated announcement for the serverless community.

AWS Step Functions added eight new service integrations. With this release, the steps of your workflow can exist on Amazon ECS, AWS Fargate, Amazon DynamoDB, Amazon SNS, Amazon SQS, AWS Batch, AWS Glue, and Amazon SageMaker. This is in addition to the services that Step Functions already supports: AWS Lambda and Amazon EC2.

Step Functions expanded by announcing availability in the EU (Paris) and South America (São Paulo) Regions.

The AWS Serverless Application Repository increased its functionality by supporting more resources in the repository. The Serverless Application Repository now supports Application Auto Scaling, Amazon Athena, AWS AppSync, AWS Certificate Manager, Amazon CloudFront, AWS CodeBuild, AWS CodePipeline, AWS Glue, AWS Identity and Access Management, Amazon SNS, Amazon SQS, AWS Systems Manager, and AWS Step Functions.

The Serverless Application Repository also released support for nested applications. Nested applications enable you to build highly sophisticated serverless architectures by reusing services that are independently authored and maintained but easily composed using AWS SAM and the Serverless Application Repository.

AWS SAM made authorization simpler by introducing SAM support for authorizers. Enabling authorization for your APIs is as simple as defining an Amazon Cognito user pool or an API Gateway Lambda authorizer as a property of your API in your SAM template.
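
A minimal sketch of that property for a Cognito user pool authorizer follows; it assumes a user pool resource named MyUserPool is defined elsewhere in the same template.

MyApi:
  Type: AWS::Serverless::Api
  Properties:
    StageName: Prod
    Auth:
      DefaultAuthorizer: MyCognitoAuthorizer
      Authorizers:
        MyCognitoAuthorizer:
          UserPoolArn: !GetAtt MyUserPool.Arn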

AWS SAM CLI introduced two new commands. First, you can now build locally with the sam build command. This functionality allows you to compile deployment artifacts for Lambda functions written in Python. Second, the sam publish command allows you to publish your SAM application to the Serverless Application Repository.

Our SAM tooling team also released the AWS Toolkit for PyCharm, which provides an integrated experience for developing serverless applications in Python.

The AWS Toolkits for Visual Studio Code (Developer Preview) and IntelliJ (Developer Preview) are still in active development and will include similar features when they become generally available.

AWS SAM and the AWS SAM CLI implemented support for Lambda layers. Using a SAM template, you can manage your layers, and using the AWS SAM CLI, you can develop and debug Lambda functions that are dependent on layers.

Amazon DynamoDB added support for transactions, allowing developers to enforce all-or-nothing operations. In addition to transaction support, Amazon DynamoDB Accelerator also added support for DynamoDB transactions.
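
As a sketch of what this enables, the following Node.js fragment (table and key names are made up for illustration) writes two items atomically with the AWS SDK's DocumentClient; if the condition on the second item fails, neither write is applied.

const AWS = require('aws-sdk');
const ddb = new AWS.DynamoDB.DocumentClient();

ddb.transactWrite({
  TransactItems: [
    // Record the order and decrement stock as a single all-or-nothing unit
    { Put: { TableName: 'Orders', Item: { orderId: '123', status: 'PLACED' } } },
    { Update: {
        TableName: 'Inventory',
        Key: { sku: 'ABC' },
        UpdateExpression: 'SET stock = stock - :one',
        ConditionExpression: 'stock > :zero',
        ExpressionAttributeValues: { ':one': 1, ':zero': 0 } } }
  ]
}, function (err) {
  if (err) console.error('Transaction cancelled: ' + err.message);
});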

Amazon DynamoDB also announced Amazon DynamoDB on-demand, a flexible new billing option for DynamoDB capable of serving thousands of requests per second without capacity planning. DynamoDB on-demand offers simple pay-per-request pricing for read and write requests so that you only pay for what you use, making it easy to balance costs and performance.

AWS Amplify released the Amplify Console, which is a continuous deployment and hosting service for modern web applications with serverless backends. Modern web applications include single-page app frameworks such as React, Angular, and Vue and static-site generators such as Jekyll, Hugo, and Gatsby.

Amazon SQS announced support for Amazon VPC Endpoints using PrivateLink.

Serverless blogs

October

November

December

Tech talks

We hold several Serverless tech talks throughout the year, so look out for them in the Serverless section of the AWS Online Tech Talks page. Here are the three tech talks that we delivered in Q4:

Twitch

We’ve been so busy livestreaming on Twitch that you’re most certainly missing out if you aren’t following along!

For information about upcoming broadcasts and recent livestreams, keep an eye on AWS on Twitch for more Serverless videos and on the Join us on Twitch AWS page.

New Home for SAM Docs

This quarter, we moved all SAM docs to https://docs.aws.amazon.com/serverless-application-model. Everything you need to know about SAM is there. If you don’t find what you’re looking for, let us know!

In other news

The schedule is out for 2019 AWS Global Summits in cities around the world. AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. They attract technologists from all industries and skill levels who want to discover how AWS can help them innovate quickly and deliver flexible, reliable solutions at scale. Get notified when registration opens and learn more at the AWS Global Summits website.

Still looking for more?

The Serverless landing page has lots of information. The resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials. Check it out!

Announcing WebSocket APIs in Amazon API Gateway

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/announcing-websocket-apis-in-amazon-api-gateway/

This post is courtesy of Diego Magalhaes, AWS Senior Solutions Architect – World Wide Public Sector-Canada & JT Thompson, AWS Principal Software Development Engineer – Amazon API Gateway

Starting today, you can build bidirectional communication applications using WebSocket APIs in Amazon API Gateway without having to provision and manage any servers.

HTTP-based APIs use a request/response model, with a client sending a request to a service and the service responding synchronously back to the client. WebSocket-based APIs are bidirectional in nature: a client can send messages to a service, and services can independently send messages to their clients.

This bidirectional behavior allows for richer types of client/service interactions because services can push data to clients without a client needing to make an explicit request. WebSocket APIs are often used in real-time applications such as chat applications, collaboration platforms, multiplayer games, and financial trading platforms.

This post explains how to build a serverless, real-time chat application using WebSocket API and API Gateway.

Overview

Historically, building WebSocket APIs required setting up fleets of hosts that were responsible for managing the persistent connections that underlie the WebSocket protocol. Now, with API Gateway, this is no longer necessary. API Gateway handles the connections between the client and service. It lets you build your business logic using HTTP-based backends such as AWS Lambda, Amazon Kinesis, or any other HTTP endpoint.

Before you begin, here are a couple of the concepts of a WebSocket API in API Gateway. The first is a new resource type called a route. A route describes how API Gateway should handle a particular type of client request, and includes a routeKey parameter, a value that you provide to identify the route.

A WebSocket API is composed of one or more routes. To determine which route a particular inbound request should use, you provide a route selection expression. The expression is evaluated against an inbound request to produce a value that corresponds to one of your route’s routeKey values.

There are three special routeKey values that API Gateway allows you to use for a route:

  • $default—Used when the route selection expression produces a value that does not match any of the other route keys in your API routes. This can be used, for example, to implement a generic error handling mechanism.
  • $connect—The associated route is used when a client first connects to your WebSocket API.
  • $disconnect—The associated route is used when a client disconnects from your API. This call is made on a best-effort basis.

You are not required to provide routes for any of these special routeKey values.

Build a serverless, real-time chat application

To understand how you can use the new WebSocket API feature of API Gateway, I show you how to build a real-time chat app. To simplify things, assume that there is only a single chat room in this app. Here are the capabilities that the app includes:

  • Clients join the chat room as they connect to the WebSocket API.
  • The backend can send messages to specific users via a callback URL that is provided after the user is connected to the WebSocket API.
  • Users can send messages to the room.
  • Disconnected clients are removed from the chat room.

Here’s an overview of the real-time chat application:

A serverless real-time chat application using WebSocket API on Amazon API Gateway

The application is composed of the WebSocket API in API Gateway that handles the connectivity between the client and servers (1). Two AWS Lambda functions react when clients connect (2) or disconnect (5) from the API. The sendMessage function (3) is invoked when the clients send messages to the server. The server sends the message to all connected clients (4) using the new API Gateway Management API. To track each of the connected clients, use a DynamoDB table to persist the connection identifiers.

To help you see this in action more quickly, I’ve provided an AWS SAM application that includes the Lambda functions, DynamoDB table definition, and IAM roles. I placed it into the AWS Serverless Application Repository. For more information, see the simple-websockets-chat-app repo on GitHub.

Create a new WebSocket API

  1. In the API Gateway console, choose Create API, New API.
  2. Under Choose the protocol, choose WebSocket.
  3. For API name, enter My Chat API.
  4. For Route Selection Expression, enter $request.body.action.
  5. Enter a description if you’d like and choose Create API.

The attribute described by the Route Selection Expression value should be present in every message that a client sends to the API. Here’s one example of a request message:

{
    "action": "sendmessage",
    "data": "Hello, I am using WebSocket APIs in API Gateway."
}

To route messages based on the message content, the WebSocket API expects JSON-formatted messages with a property serving as a routing key.

Manage routes

Now, configure your new API to respond to incoming requests. For this app, create three routes as described earlier. First, configure the sendmessage route with a Lambda integration.

  1. In the API Gateway console, under My Chat API, choose Routes.
  2. For New Route Key, enter sendmessage and confirm it.

Each route includes route-specific information such as its model schemas, as well as a reference to its target integration. With the sendmessage route created, use a Lambda function as an integration that is responsible for sending messages.

At this point, you’ve either deployed the AWS Serverless Application Repository backend or deployed it yourself using AWS SAM. In the Lambda console, look at the sendMessage function. This function receives the data sent by one of the clients, looks up all the currently connected clients, and sends the provided data to each of them. Here’s a snippet from the Lambda function:

DDB.scan(scanParams, function (err, data) {
  // some code omitted for brevity

  var apigwManagementApi = new AWS.ApiGatewayManagementApi({
    apiVersion: "2018-11-29",
    endpoint: event.requestContext.domainName + "/" + event.requestContext.stage
  });
  var postParams = {
    Data: JSON.parse(event.body).data
  };

  data.Items.forEach(function (element) {
    postParams.ConnectionId = element.connectionId.S;
    apigwManagementApi.postToConnection(postParams, function (err, data) {
      // some code omitted for brevity
    });
  });
});

When one of the clients sends a message with an action of "sendmessage", this function scans the DynamoDB table to find all of the currently connected clients. For each of them, it makes a postToConnection call with the provided data.

To send messages properly, you must know the API to which your clients are connected. This means explicitly setting the SDK endpoint to point to your API.

What’s left is to properly track the clients that are connected to the chat room. For this, take advantage of two of the special routeKey values and implement routes for $connect and $disconnect.

The onConnect function inserts the connectionId value from requestContext into the DynamoDB table.

exports.handler = function(event, context, callback) {
  var putParams = {
    TableName: process.env.TABLE_NAME,
    Item: {
      connectionId: { S: event.requestContext.connectionId }
    }
  };

  DDB.putItem(putParams, function(err, data) {
    callback(null, {
      statusCode: err ? 500 : 200,
      body: err ? "Failed to connect: " + JSON.stringify(err) : "Connected"
    });
  });
};

The onDisconnect function removes the record corresponding to the specified connectionId value.

exports.handler = function(event, context, callback) {
  var deleteParams = {
    TableName: process.env.TABLE_NAME,
    Key: {
      connectionId: { S: event.requestContext.connectionId }
    }
  };

  DDB.deleteItem(deleteParams, function(err) {
    callback(null, {
      statusCode: err ? 500 : 200,
      body: err ? "Failed to disconnect: " + JSON.stringify(err) : "Disconnected."
    });
  });
};

As I mentioned earlier, the onDisconnect function may not be called. To keep from retrying connections that are never going to be available again, add some additional logic in case the postToConnection call does not succeed:

if (err.statusCode === 410) {
  console.log("Found stale connection, deleting " + postParams.ConnectionId);
  DDB.deleteItem({ TableName: process.env.TABLE_NAME,
                   Key: { connectionId: { S: postParams.ConnectionId } } });
} else {
  console.log("Failed to post. Error: " + JSON.stringify(err));
}

API Gateway returns a status of 410 GONE when the connection is no longer available. If this happens, delete the identifier from the DynamoDB table.

To make calls to your connected clients, your application needs a new permission: "execute-api:ManageConnections". This is handled for you in the AWS SAM template.
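
For reference, an IAM policy statement granting this permission looks like the following sketch; in practice, scope the resource to your own API ID, Region, and stage.

{
  "Effect": "Allow",
  "Action": ["execute-api:ManageConnections"],
  "Resource": "arn:aws:execute-api:us-east-1:123456789012:abc123/dev/POST/@connections/*"
}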

Deploy the WebSocket API

The next step is to deploy the WebSocket API. Because it’s the first time that you’re deploying the API, create a stage, such as “dev,” and give a sample description.

The Stage editor screen displays all the information for managing your WebSocket API, such as Settings, Logs/Tracing, Stage Variables, and Deployment History. Also, make sure that you check your limits and read more about API Gateway throttling.

Make sure that you monitor your tests using the dashboard available for your newly created WebSocket API.

Test the chat API

To test the WebSocket API, you can use wscat, an open-source, command line tool.

  1. Install Node.js, which includes npm.
  2. Install wscat:
    $ npm install -g wscat
  3. On the console, connect to your published API endpoint by executing the following command:
    $ wscat -c wss://{YOUR-API-ID}.execute-api.{YOUR-REGION}.amazonaws.com/{STAGE}
  4. To test the sendMessage function, send a JSON message like the following example. The Lambda function sends it back using the callback URL:
    $ wscat -c wss://{YOUR-API-ID}.execute-api.{YOUR-REGION}.amazonaws.com/dev
    connected (press CTRL+C to quit)
    > {"action":"sendmessage", "data":"hello world"}
    < hello world

If you run wscat in multiple consoles, each will receive the "hello world" message.

You’re now ready to send messages and get responses back and forth from your WebSocket API!

Conclusion

In this post, I showed you how to use WebSocket APIs in API Gateway, including a message exchange between client and server using routes and the route selection expression. You used Lambda and DynamoDB to create a serverless real-time chat application. While this post only scratched the surface of what is possible, you did all of this without needing to manage any servers at all.

You can get started today by creating your own WebSocket APIs in API Gateway. To learn more, see the Amazon API Gateway Developer Guide. You can also watch the Building Real-Time Applications using WebSocket APIs Supported by Amazon API Gateway webinar hosted by George Mao.

I’m excited to see what new applications come up with your use of the new WebSocket API. I welcome any feedback here, in the AWS forums, or on Twitter.

Happy coding!

Announcing nested applications for AWS SAM and the AWS Serverless Application Repository

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/announcing-nested-applications-for-aws-sam-and-the-aws-serverless-application-repository/

Serverless application architectures enable you to break large projects into smaller, more manageable services that are highly reusable and independently scalable, secured, and evolved over time. As serverless architectures grow, we have seen common patterns that get reimplemented across companies, teams, and projects, hurting development velocity and leading to wasted effort. We have made it easier to develop new serverless architectures that meet organizational best practices by supporting nested applications in AWS SAM and the AWS Serverless Application Repository.

How it works

Nested applications build on a concept in AWS CloudFormation called nested stacks. With nested applications, serverless applications are deployed as stacks, or collections of resources, that contain one or more other serverless application stacks. You can reference resources created in these nested templates from the parent stack or from other nested stacks to manage these collections of resources more easily.

This enables you to build sophisticated serverless architectures by reusing services that are authored and maintained independently but easily composed via AWS SAM and the AWS Serverless Application Repository. These applications can be either publicly or privately available in the AWS Serverless Application Repository. It’s just as easy to create your own serverless applications that you then consume again later in other nested applications. You can access the application code, configure parameters exposed via the nested application’s template, and later manage its configuration completely.

Building a nested application

Suppose that I want to build an API powered by a serverless application architecture made up of AWS Lambda and Amazon API Gateway. I can use AWS SAM to create Lambda functions, configure API Gateway, and deploy and manage them both. To start building, I can use the sam init command.

$ sam init -r python2.7
[+] Initializing project structure...
[SUCCESS] - Read sam-app/README.md for further instructions on how to proceed
[*] Project initialization is now complete

The sam-app directory has everything that I need to start building a serverless application.

$ tree sam-app/
sam-app/
├── hello_world
│   ├── app.py
│   ├── app.pyc
│   ├── __init__.py
│   ├── __init__.pyc
│   └── requirements.txt
├── README.md
├── template.yaml
└── tests
    └── unit
        ├── __init__.py
        ├── __init__.pyc
        ├── test_handler.py
        └── test_handler.pyc

3 directories, 11 files

The README in the sam-app directory points me to using the new sam build command to install any requirements of my application.

$ cd sam-app/
$ sam build
2018-11-21 20:41:23 Building resource 'HelloWorldFunction'
2018-11-21 20:41:23 Running PythonPipBuilder:ResolveDependencies
2018-11-21 20:41:24 Running PythonPipBuilder:CopySource

Build Succeeded

Built Artifacts  : .aws-sam/build
Built Template   : .aws-sam/build/template.yaml
...

At this point, I have a fully functioning serverless application based on Lambda that I can test and debug using the AWS SAM CLI locally. For example, I can invoke my Lambda function directly, as in the following code.

$ cd .aws-sam/build/
$ sam local invoke --no-event
2018-11-21 20:43:52 Invoking app.lambda_handler (python2.7)

....trimmed output....

{"body": "{\"message\": \"hello world\", \"location\": \"34.239.158.3\"}", "statusCode": 200}

Or I can use the actual API Gateway interface for it with the sam local start-api command. I can also now use the sam package and sam deploy commands to get this application running on Lambda and begin letting my customers consume it. What I really want to do, though, is expand on this API by adding an authorization mechanism to add some security. API Gateway supports several methods for doing this, but in this case, I want to leverage a basic form of HTTP Basic Auth. Although not the best way to secure an API, this example highlights the power of nested applications.

To start, I search for an existing serverless application that meets my needs. I can access the AWS Serverless Application Repository either directly or via the AWS Lambda console. Then I search for “http basic auth,” as shown in the following image.

There is already an application for HTTP Basic Auth that someone else made.

I can review the AWS SAM template, license, and permissions created in the AWS Serverless Application Repository. Reading through the linked GitHub repository and README, I find that this application meets my needs, and I can review its code. The application enables me to store a user’s username and password in Amazon DynamoDB and use them to provide authorization to an API.

I could deploy this application directly from the console and then plug the launched Lambda function into my API Gateway configuration manually. This would work, but it doesn’t give me a way to more directly relate the two applications together as one application, which to me it is. If I remove the authorizer application, my API will break. If I decide to shut down my API, I might forget about the authorizer. Logically thinking about them as a single application is difficult. This is where nested applications come in.

The launch of nested applications comes with a new AWS SAM resource, AWS::Serverless::Application, which you can use to refer to serverless applications that you want to nest inside another application. The specification for this resource is straightforward and at a minimum resembles the following template.

application-alias-name:
  Type: AWS::Serverless::Application
  Properties:
    Location:
    Parameters:

For the full resource specification, see AWS::Serverless::Application on the GitHub website.

Currently, the location can be one of two places. It can be in the AWS Serverless Application Repository.

Location:
  ApplicationId: arn:aws:serverlessrepo:region:account-id:applications/application-name
  SemanticVersion: 1.0.0

Or it can be in Amazon S3.

Location: https://s3.region.amazonaws.com/bucket-name/sam-template-object

The sam-template-object is the name of a packaged AWS SAM template.

The Parameters property contains the parameters that the nested application requires at launch, as defined by its template. You can discover them via the template that the nested application references or in the console under Application Settings. For this Basic HTTP Auth application, there are no parameters, and so there is nothing for me to do here.
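
For a nested application that does take parameters, the resource would look something like the following sketch; the application ID and parameter names here are made up for illustration.

my-nested-app:
  Type: AWS::Serverless::Application
  Properties:
    Location:
      ApplicationId: arn:aws:serverlessrepo:us-east-1:123456789012:applications/example-app
      SemanticVersion: 1.0.0
    Parameters:
      TableName: my-table
      LogLevel: INFO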

We have included a feature to help you to figure out what you need in your template. First, navigate to the Review, configure and deploy page by choosing Deploy from the Application details page or when you choose an application via the Lambda console’s view into the app repository. Then choose Copy as SAM Resource.

This button copies to your clipboard exactly what you need to start nesting this application. For this application, choosing Copy as SAM Resource resulted in the following template.

lambdaauthorizerbasicauth:
  Type: AWS::Serverless::Application
  Properties:
    Location:
      ApplicationId: arn:aws:serverlessrepo:us-east-1:560348900601:applications/lambda-authorizer-basic-auth
      SemanticVersion: 0.2.0

Note: The name of this resource defaults to lambdaauthorizerbasicauth, but best practice is to give it a name that is more descriptive for your use case.

If I stopped here and launched the application, this second application’s stack would launch as a nested application off my application. Before I do that, I want to use the function that this application creates as the authorizer for the API Gateway endpoint created in my existing AWS SAM template. To reference this Lambda function, I can check the template for this application to see what (if any) outputs are created.

I can get the Amazon Resource Name (ARN) of this function by referencing the output named LambdaAuthorizerBasicAuthFunction directly as an attribute of the nested application.

!GetAtt lambdaauthorizerbasicauth.Outputs.LambdaAuthorizerBasicAuthFunction

To configure the authorizer for my existing function, I create an AWS::Serverless::Api resource and then configure the authorizer with the FunctionArn attribute set to this value. The following template shows the entire syntax.

MyApi:
  Type: AWS::Serverless::Api
  Properties:
    StageName: Prod      
    Auth:
      DefaultAuthorizer: MyLambdaRequestAuthorizer
      Authorizers:
        MyLambdaRequestAuthorizer:
          FunctionPayloadType: REQUEST
          FunctionArn: !GetAtt lambdaauthorizerbasicauth.Outputs.LambdaAuthorizerBasicAuthFunction
          Identity:
            Headers:
              - Authorization

This creates a new API Gateway stage, configures the authorizer to point to my nested stack’s Lambda function, and specifies what information is required from clients of the API. In my actual function definition, I refer to the new API stage definition and the authorizer. The last three lines of the following template are new.

Events:
   HelloWorld:
      Type: Api
      Properties:
        Method: get
        Path: /hello
        RestApiId: !Ref MyApi
        Auth:
          Authorizer: MyLambdaRequestAuthorizer

Finally, because of the code created by the sam init command that I ran earlier, I need to update the HelloWorldApi output to use my new API resource.

Outputs:
  HelloWorldApi:
    Description: API Gateway endpoint URL for Prod stage for Hello World function
    Value:
      Fn::Sub: https://${MyApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/

At this point, my combined template file is 70 lines of YAML that define the following:

  • My Lambda function
  • The API Gateway endpoint configuration that I’ll use to interface with my business logic
  • The nested application that represents my authorizer
  • Other bits such as the outputs, globals, and parameters of my template

I can use the sam package and sam deploy commands to launch this whole stack of resources.

sam package --template-file template.yaml --output-template-file packaged-template.yaml --s3-bucket my-bucket

Successfully packaged artifacts and wrote output template to file packaged-template.yaml.
Execute the following command to deploy the packaged template
aws cloudformation deploy --template-file packaged-template.yaml --stack-name <YOUR STACK NAME>

To deploy applications that have nested applications that come from the app repository, I use a new capability in AWS CloudFormation called auto-expand. I pass it in by adding CAPABILITY_AUTO_EXPAND to the --capabilities flag of the deploy command.

sam deploy --template-file packaged-template.yaml --stack-name SimpleAuthExample --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - SimpleAuthExample

With the successful creation of my application stack, I can find what I created. The stack can be in one of two places: the AWS CloudFormation console or the Lambda console’s Application view. Because I launched this application via AWS SAM, I can use the Application view to manage serverless applications. That page displays two application stacks named SimpleAuthExample.

The second application stack has the appended name of the nested application that I launched. Because it was also a serverless application launched with AWS SAM, the Application view enables me to manage them independently if I want. Selecting the SimpleAuthExample application shows me its details page, where I can get more information, including the various launched resources of this application.

There are four top-level resources or resource groupings:

  • My Lambda function, which expands to show the related permissions and role for the function
  • The nested application stack for my authorizer
  • The API Gateway resource and related deployment and stage
  • The Lambda permission that AWS SAM created to allow API Gateway to use my nested application’s Lambda function as an authorizer

Selecting the logical ID of my nested application opens the AWS CloudFormation console, where I note its relation to the root stack.

Testing the authorizer

With my application and authorizer set up, I want to confirm its functionality. The Application view for the authorizer application shows the DynamoDB table that stores my user credentials.

Selecting the logical ID of the table opens the DynamoDB console. In the Item view, I can create a user and its password.

Back in the Application view for my main application, expanding the API Gateway resource displays the physical ID link to the API Gateway stage for this application. It represents the API endpoint that I would point my clients at.

I can copy that destination URL to my clipboard, and with a tool such as curl, I can test my application.

curl -u foo:bar https://dw0wr24jwg.execute-api.us-east-1.amazonaws.com/Prod/hello
{"message": "hello world", "location": "34.239.119.48"}

To confirm that my authorizer is working, I test with bad credentials.

curl -u bar:foo https://dw0wr24jwg.execute-api.us-east-1.amazonaws.com/Prod/hello
{"Message":"User is not authorized to access this resource with an explicit deny"}

Everything works!

To delete this entire setup, I delete the main stack of this application, which deletes all of its resources and the nested application as well.

Conclusion

With the growth of serverless applications, we’re finding that developers can be reusing and sharing common patterns, both publicly and privately. This led to the release of the AWS Serverless Application Repository in early 2018. Now nested applications bring the additional benefit of simplifying the consumption of these reusable application components alongside your own applications. This blog post demonstrates how to find and discover applications in the AWS Serverless Application Repository and nest them in your own applications. We have shown how you can modify an AWS SAM template to refer to these other components, launch your new nested applications, and get their improved management and organizational capabilities.

We’re excited to see what this enables you to do, and we welcome any feedback here, in the AWS forums, and on Twitter.

Happy coding!

Introducing the C++ Lambda Runtime

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/introducing-the-c-lambda-runtime/

This post is courtesy of Marco Magdy, AWS Software Development Engineer – AWS SDKs and Tools

Today, AWS Lambda announced the availability of the Runtime API. The Runtime API allows you to write your Lambda functions in any language, provided that you bundle it with your application artifact or as a Lambda layer that your application uses.

As an example of using this API, and based on customer demand, AWS is releasing a reference implementation of a C++ runtime for Lambda. This C++ runtime brings the simplicity and expressiveness of interpreted languages while retaining the performance and low memory footprint of C++. These benefits align well with the event-driven, function-based development model of Lambda applications.

Hello World

Start by writing a Hello World Lambda function in C++ using this runtime.

Prerequisites

You need a Linux-based environment (I recommend Amazon Linux), with the following packages installed:

  • A C++11 compiler, either GCC 5.x or later or Clang 3.3 or later. On Amazon Linux, run the following commands:
    $ yum install gcc64-c++ libcurl-devel
    $ export CC=gcc64
    $ export CXX=g++64
  • CMake v.3.5 or later. On Amazon Linux, run the following command:
    $ yum install cmake3
  • Git

Download and compile the runtime

The first step is to download and compile the runtime:

$ cd ~ 
$ git clone https://github.com/awslabs/aws-lambda-cpp.git
$ cd aws-lambda-cpp
$ mkdir build
$ cd build
$ cmake3 .. -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=OFF \
   -DCMAKE_INSTALL_PREFIX=~/out
$ make && make install

This builds and installs the runtime as a static library under the directory ~/out.

Create your C++ function

The next step is to build the Lambda C++ function.

  1. Create a new directory for this project:
    $ mkdir hello-cpp-world
    $ cd hello-cpp-world
  2. In that directory, create a file named main.cpp with the following content:
    // main.cpp
    #include <aws/lambda-runtime/runtime.h>
    
    using namespace aws::lambda_runtime;
    
    invocation_response my_handler(invocation_request const& request)
    {
       return invocation_response::success("Hello, World!", "application/json");
    }
    
    int main()
    {
       run_handler(my_handler);
       return 0;
    }
  3. Create a file named CMakeLists.txt in the same directory, with the following content:
    cmake_minimum_required(VERSION 3.5)
    set(CMAKE_CXX_STANDARD 11)
    project(hello LANGUAGES CXX)
    
    find_package(aws-lambda-runtime REQUIRED)
    add_executable(${PROJECT_NAME} "main.cpp")
    target_link_libraries(${PROJECT_NAME} PUBLIC AWS::aws-lambda-runtime)
    aws_lambda_package_target(${PROJECT_NAME})
  4. To build this executable, create a build directory and run CMake from there:
    $ mkdir build
    $ cd build
    $ cmake3 .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=~/out
    $ make

    This compiles and links the executable in release mode.

  5. To package this executable along with all its dependencies, run the following command:
    $ make aws-lambda-package-hello

    This creates a zip file in the same directory named after your project, in this case hello.zip.

Create the Lambda function

Using the AWS CLI, you create the Lambda function. First, create a role for the Lambda function to execute under.

  1. Create the following JSON file for the trust policy and name it trust-policy.json.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": ["lambda.amazonaws.com"]
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
  2. Using the AWS CLI, run the following command:
    $ aws iam create-role \
    --role-name lambda-cpp-demo \
    --assume-role-policy-document file://trust-policy.json

    This should output JSON that contains the newly created IAM role information. Make sure to note down the "Arn" value from that JSON. You need it later. The Arn looks like the following:

    "Arn": "arn:aws:iam::<account_id>:role/lambda-cpp-demo"

  3. Create the Lambda function:
    $ aws lambda create-function \
    --function-name hello-world \
    --role <specify the role arn from the previous step> \
    --runtime provided \
    --timeout 15 \
    --memory-size 128 \
    --handler hello \
    --zip-file fileb://hello.zip
  4. Invoke the function using the AWS CLI:
    $ aws lambda invoke --function-name hello-world --payload '{ }' output.txt

    You should see the following output:

    {
      "StatusCode": 200
    }

    A file named output.txt containing the words “Hello, World!” should be in the current directory.

Beyond Hello

OK, well that was exciting, but how about doing something slightly more interesting?

The following example shows you how to download a file from Amazon S3 and do some basic processing of its contents. To interact with AWS, you need the AWS SDK for C++.

Prerequisites

If you don’t have them already, install the following libraries:

  • zlib-devel
  • openssl-devel
  1. Build the AWS SDK for C++:
    $ cd ~
    $ git clone https://github.com/aws/aws-sdk-cpp.git
    $ cd aws-sdk-cpp
    $ mkdir build
    $ cd build
    $ cmake3 .. -DBUILD_ONLY=s3 \
     -DBUILD_SHARED_LIBS=OFF \
     -DENABLE_UNITY_BUILD=ON \
     -DCMAKE_BUILD_TYPE=Release \
     -DCMAKE_INSTALL_PREFIX=~/out
    
    $ make && make install

    This builds the S3 SDK as a static library and installs it in ~/out.

  2. Create a directory for the new application’s logic:
    $ cd ~
    $ mkdir cpp-encoder-example
    $ cd cpp-encoder-example
  3. Now, create the following main.cpp:
    // main.cpp
    #include <aws/core/Aws.h>
    #include <aws/core/utils/logging/LogLevel.h>
    #include <aws/core/utils/logging/ConsoleLogSystem.h>
    #include <aws/core/utils/logging/LogMacros.h>
    #include <aws/core/utils/json/JsonSerializer.h>
    #include <aws/core/utils/HashingUtils.h>
    #include <aws/core/platform/Environment.h>
    #include <aws/core/client/ClientConfiguration.h>
    #include <aws/core/auth/AWSCredentialsProvider.h>
    #include <aws/s3/S3Client.h>
    #include <aws/s3/model/GetObjectRequest.h>
    #include <aws/lambda-runtime/runtime.h>
    #include <iostream>
    #include <memory>
    
    using namespace aws::lambda_runtime;
    
    std::string download_and_encode_file(
        Aws::S3::S3Client const& client,
        Aws::String const& bucket,
        Aws::String const& key,
        Aws::String& encoded_output);
    
    std::string encode(Aws::IOStream& stream, Aws::String& output);
    char const TAG[] = "LAMBDA_ALLOC";
    
    static invocation_response my_handler(invocation_request const& req, Aws::S3::S3Client const& client)
    {
        using namespace Aws::Utils::Json;
        JsonValue json(req.payload);
        if (!json.WasParseSuccessful()) {
            return invocation_response::failure("Failed to parse input JSON", "InvalidJSON");
        }
    
        auto v = json.View();
    
        if (!v.ValueExists("s3bucket") || !v.ValueExists("s3key") || !v.GetObject("s3bucket").IsString() ||
            !v.GetObject("s3key").IsString()) {
            return invocation_response::failure("Missing input value s3bucket or s3key", "InvalidJSON");
        }
    
        auto bucket = v.GetString("s3bucket");
        auto key = v.GetString("s3key");
    
        AWS_LOGSTREAM_INFO(TAG, "Attempting to download file from s3://" << bucket << "/" << key);
    
        Aws::String base64_encoded_file;
        auto err = download_and_encode_file(client, bucket, key, base64_encoded_file);
        if (!err.empty()) {
            return invocation_response::failure(err, "DownloadFailure");
        }
    
        return invocation_response::success(base64_encoded_file, "application/base64");
    }
    
    std::function<std::shared_ptr<Aws::Utils::Logging::LogSystemInterface>()> GetConsoleLoggerFactory()
    {
        return [] {
            return Aws::MakeShared<Aws::Utils::Logging::ConsoleLogSystem>(
                "console_logger", Aws::Utils::Logging::LogLevel::Trace);
        };
    }
    
    int main()
    {
        using namespace Aws;
        SDKOptions options;
        options.loggingOptions.logLevel = Aws::Utils::Logging::LogLevel::Trace;
        options.loggingOptions.logger_create_fn = GetConsoleLoggerFactory();
        InitAPI(options);
        {
            Client::ClientConfiguration config;
            config.region = Aws::Environment::GetEnv("AWS_REGION");
            config.caFile = "/etc/pki/tls/certs/ca-bundle.crt";
    
            auto credentialsProvider = Aws::MakeShared<Aws::Auth::EnvironmentAWSCredentialsProvider>(TAG);
            S3::S3Client client(credentialsProvider, config);
            auto handler_fn = [&client](aws::lambda_runtime::invocation_request const& req) {
                return my_handler(req, client);
            };
            run_handler(handler_fn);
        }
        ShutdownAPI(options);
        return 0;
    }
    
    std::string encode(Aws::IOStream& stream, Aws::String& output)
    {
        Aws::Vector<unsigned char> bits;
        bits.reserve(stream.tellp());
        stream.seekg(0, stream.beg);
    
        char streamBuffer[1024 * 4];
        while (stream.good()) {
            stream.read(streamBuffer, sizeof(streamBuffer));
            auto bytesRead = stream.gcount();
    
            if (bytesRead > 0) {
                bits.insert(bits.end(), (unsigned char*)streamBuffer, (unsigned char*)streamBuffer + bytesRead);
            }
        }
        Aws::Utils::ByteBuffer bb(bits.data(), bits.size());
        output = Aws::Utils::HashingUtils::Base64Encode(bb);
        return {};
    }
    
    std::string download_and_encode_file(
        Aws::S3::S3Client const& client,
        Aws::String const& bucket,
        Aws::String const& key,
        Aws::String& encoded_output)
    {
        using namespace Aws;
    
        S3::Model::GetObjectRequest request;
        request.WithBucket(bucket).WithKey(key);
    
        auto outcome = client.GetObject(request);
        if (outcome.IsSuccess()) {
            AWS_LOGSTREAM_INFO(TAG, "Download completed!");
            auto& s = outcome.GetResult().GetBody();
            return encode(s, encoded_output);
        }
        else {
            AWS_LOGSTREAM_ERROR(TAG, "Failed with error: " << outcome.GetError());
            return outcome.GetError().GetMessage();
        }
    }

    This Lambda function expects an input payload to contain an S3 bucket and S3 key. It then downloads that resource from S3, encodes it as base64, and sends it back as the response of the Lambda function. This can be useful to display an image in a webpage, for example.

  4. Next, create the following CMakeLists.txt file in the same directory.
    cmake_minimum_required(VERSION 3.5)
    set(CMAKE_CXX_STANDARD 11)
    project(encoder LANGUAGES CXX)
    
    find_package(aws-lambda-runtime REQUIRED)
    find_package(AWSSDK COMPONENTS s3)
    
    add_executable(${PROJECT_NAME} "main.cpp")
    target_link_libraries(${PROJECT_NAME} PUBLIC
                          AWS::aws-lambda-runtime
                           ${AWSSDK_LINK_LIBRARIES})
    
    aws_lambda_package_target(${PROJECT_NAME})
  5. Follow the same build steps as before:
    $ mkdir build
    $ cd build
    $ cmake3 .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=~/out
    $ make
    $ make aws-lambda-package-encoder

    Notice how the target name for packaging has changed to aws-lambda-package-encoder. The CMake function aws_lambda_package_target() always creates a target based on its input name.

    You should now have a file named "encoder.zip" in your build directory.

  6. Before you create the Lambda function, modify the IAM role that you created earlier to allow it to access S3.
    $ aws iam attach-role-policy \
    --role-name lambda-cpp-demo \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
  7. Using the AWS CLI, create the new Lambda function:
    $ aws lambda create-function \
    --function-name encode-file \
    --role <specify the same role arn used in the prior Lambda> \
    --runtime provided \
    --timeout 15 \
    --memory-size 128 \
    --handler encoder \
    --zip-file fileb://encoder.zip
  8. Using the AWS CLI, run the function. Make sure to use an S3 bucket in the same Region as the Lambda function:
    $ aws lambda invoke --function-name encode-file --payload '{"s3bucket": "your_bucket_name", "s3key":"your_file_key" }' base64_image.txt

    You can use an online base64 image decoder and paste the contents of the output file to verify that everything is working. In a real-world scenario, you would inject the output of this Lambda function in an HTML img tag, for example.
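
    To inspect the result from the command line instead, you can decode the output file directly. This is a sketch; depending on your CLI version, the payload may be wrapped in quotes that you need to strip first.

    $ tr -d '"' < base64_image.txt | base64 --decode > decoded_image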

Conclusion

The new Lambda Runtime API opens a door of new possibilities. This C++ runtime enables you to do more with Lambda than you ever could have before.

More in-depth details, along with examples, can be found on the GitHub repository. With it, you can start writing Lambda functions with C++ today. AWS will continue evolving the contents of this repository with additional enhancements and samples. I’m so excited to see what you build using this runtime. I appreciate feedback sent via issues in GitHub.

Happy hacking!

Announcing Ruby Support for AWS Lambda

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/announcing-ruby-support-for-aws-lambda/

This post is courtesy of Xiang Shen, Senior AWS Solutions Architect, and Alex Wood, Software Development Engineer – AWS SDKs and Tools

Ruby remains a popular programming language for AWS customers. In the summer of 2011, AWS introduced the initial release of AWS SDK for Ruby, which has helped Ruby developers to better integrate and use AWS resources. The SDK is now in its third major version and it continues to improve and deliver AWS API updates.

Today, AWS is excited to announce Ruby as a supported language for AWS Lambda.

Now it’s possible to write Lambda functions as idiomatic Ruby code, and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default. That makes it easy to interact with the AWS resources directly from your functions. In this post, we walk you through how it works, using examples:

  • Creating a Hello World example
  • Including dependencies
  • Migrating a Sinatra application

Creating a Hello World example

If you are new to Lambda, it’s simple to create a function using the console.

  1. Open the Lambda console.
  2. Choose Create function.
  3. Select Author from scratch.
  4. Name your function something like hello_ruby.
  5. For Runtime, choose Ruby 2.5.
  6. For Role, choose Create a new role from one or more templates.
  7. Name your role something like hello_ruby_role.
  8. Choose Create function.

Your function is created and you are directed to your function’s console page.

You can modify all aspects of your function, such as editing the function’s code, assigning one or more triggering services, or configuring additional services that your function can interact with. From the Monitoring tab, you can view various metrics about your function’s usage as well as a link to CloudWatch Logs.

As you can see in the code editor, the Ruby code for this Hello World example is basic. It has a single handler function named lambda_handler and returns an HTTP status code of 200 and the text “Hello from Lambda!” in a JSON structure. You can learn more about the programming model for Lambda functions.
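
The generated handler resembles the following (shown here for reference; the console creates this code for you):

require 'json'

def lambda_handler(event:, context:)
  # Return an HTTP-style response with a 200 status code and a JSON body
  { statusCode: 200, body: JSON.generate('Hello from Lambda!') }
end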

Next, test this Lambda function and confirm that it is working.

  1. On your function console page, choose Test.
  2. Name the test HelloRubyTest and clear out the data in the brackets. This function takes no input.
  3. Choose Save.
  4. Choose Test.

You should now see the results of a successful invocation of your Ruby Lambda function.

Including dependencies

When developing Lambda functions with Ruby, you probably need to include other dependencies in your code. To achieve this, use the bundle command from Bundler to download the needed RubyGems into a local directory and create a deployable application package. All dependencies need to be included either in this package or in a Lambda layer.

Do this with a Lambda function that uses the aws-record gem to save data into an Amazon DynamoDB table.

  1. Create a directory for your new Ruby application in your development environment:
    mkdir hello_ruby
    cd hello_ruby
  2. Inside of this directory, create a file Gemfile and add aws-record to it:
    source 'https://rubygems.org'
    gem 'aws-record', '~> 2'
  3. Create a hello_ruby_record.rb file with the following code. In the code, put_item is the handler method, which expects an event object with a body attribute. After it’s invoked, it saves the value of the body attribute along with a UUID to the table.
    # hello_ruby_record.rb
    require 'aws-record'
    
    class DemoTable
      include Aws::Record
      set_table_name ENV['DDB_TABLE']
      string_attr :id, hash_key: true
      string_attr :body
    end
    
    def put_item(event:,context:)
      body = event["body"]
      item = DemoTable.new(id: SecureRandom.uuid, body: body)
      item.save! # raise an exception if save fails
      item.to_h
    end 
  4. Next, bring in the dependencies for this application. Bundler is a tool used to manage RubyGems. From your application directory, run the following two commands. They create the Gemfile.lock file and download the gems to the local directory instead of to the local system’s Ruby directory. This way, they ensure that all your dependencies are included in the function deployment package.
    bundle install
    bundle install --deployment
  5. AWS SAM is a templating tool that you can use to create and manage serverless applications. With it, you can define the structure of your Lambda application, define security policies and invocation sources, and manage or create almost any AWS resource. Use it now to help define the function and its policy, create your DynamoDB table and then deploy the application.
    Create a new file in your hello_ruby directory named template.yaml with the following contents:

    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31
    Description: 'sample ruby application'
    
    Resources:
      HelloRubyRecordFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: hello_ruby_record.put_item
          Runtime: ruby2.5
          Policies:
          - DynamoDBCrudPolicy:
              TableName: !Ref RubyExampleDDBTable 
          Environment:
            Variables:
              DDB_TABLE: !Ref RubyExampleDDBTable
    
      RubyExampleDDBTable:
        Type: AWS::Serverless::SimpleTable
        Properties:
          PrimaryKey:
            Name: id
            Type: String
    
    Outputs:
      HelloRubyRecordFunction:
        Description: Hello Ruby Record Lambda Function ARN
        Value:
          Fn::GetAtt:
          - HelloRubyRecordFunction
          - Arn

    In this template file, you define Serverless::Function and Serverless::SimpleTable as resources, which correspond to a Lambda function and DynamoDB table.

    The line Policies in the function and the following line DynamoDBCrudPolicy refer to an AWS SAM policy template, which greatly simplifies granting permissions to Lambda functions. The DynamoDBCrudPolicy allows you to create, read, update, and delete DynamoDB resources and items in tables.

    In this example, you limit permissions by specifying TableName and passing a reference to Serverless::SimpleTable that the template creates. Next in importance is the Environment section of this template, where you create a variable named DDB_TABLE and also pass it a reference to Serverless::SimpleTable. Lastly, the Outputs section of the template allows you to easily find the function that was created.

    The directory structure now should look like the following:

    $ tree -L 2 -a
    .
    ├── .bundle
    │   └── config
    ├── Gemfile
    ├── Gemfile.lock
    ├── hello_ruby_record.rb
    ├── template.yaml
    └── vendor
        └── bundle
  6. Now use the template file to package and deploy your application. An AWS SAM template can be deployed using the AWS CloudFormation console, AWS CLI, or AWS SAM CLI. The AWS SAM CLI is a tool that simplifies serverless development across the lifecycle of your application. That includes the initial creation of a serverless project, through local testing and debugging, to deployment up to AWS. Follow the steps for your platform to get the AWS SAM CLI installed.
  7. Create an Amazon S3 bucket to store your application code. Run the following AWS CLI command to create an S3 bucket with a custom name:
    aws s3 mb s3://<bucketname>
  8. Use the AWS SAM CLI to package your application:
    sam package --template-file template.yaml \
    --output-template-file packaged-template.yaml \
    --s3-bucket <bucketname>

    This creates a new template file named packaged-template.yaml.

  9. Use the AWS SAM CLI to deploy your application. Use any stack-name value:
    sam deploy --template-file packaged-template.yaml \
    --stack-name helloRubyRecord \
    --capabilities CAPABILITY_IAM
    
    Waiting for changeset to be created...
    Waiting for stack create/update to complete
    Successfully created/updated stack - helloRubyRecord

    This can take a few moments to create all of the resources from the template. After you see the output “Successfully created/updated stack,” it has completed.

  10. In the Lambda Serverless Applications console, you should see your application listed:

  11. To see the application dashboard, choose the name of your application stack. Because this application was deployed with either AWS SAM or AWS CloudFormation, the dashboard allows you to manage the resources as a single group with a number of features. You can view the stack resources, its template, recent deployments, and metrics, including any custom dashboards you might make.

Now test the Lambda function and confirm that it is working.

  1. Choose Overview. Under Resources, select the Lambda function created in this application:

  2. In the Lambda function console, configure a test as you did earlier. Use the following JSON:
    {"body": "hello lambda"}
  3. Execute the test and you should see a success message:

  4. In the Lambda Serverless Applications console for this stack, select the DynamoDB table created:

  5. Choose Items.

The id and body should match the output from the Lambda function test run.

You just created a Ruby-based Lambda application using AWS SAM!

Migrating a Sinatra application

Sinatra is a popular open source framework for Ruby that launched over a decade ago. It allows you to quickly create powerful web applications with minimal effort. Until today, you still would have needed servers to run those applications. Now, you can just deploy a Sinatra app to Lambda and move to a serverless world!

Thanks to Rack, a Ruby webserver interface, you only need to create a simple Lambda function to bridge the gap between the HTTP requests and the serverless Sinatra application. You don’t need to make additional changes to other Sinatra files at all. Paired with Amazon API Gateway and DynamoDB, your Sinatra application runs completely serverless!
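
The bridge function is short. The following is a simplified sketch of what such a lambda.rb can look like; the actual file in the sample repository handles more of the Rack environment (headers, base64-encoded bodies, and so on), so treat this as illustrative rather than a drop-in replacement.

# lambda.rb - simplified sketch of an API Gateway-to-Rack bridge
require 'json'
require 'rack'
require 'stringio'

# Load the Sinatra app once per container from its rackup file
$app ||= begin
  app, _options = Rack::Builder.parse_file('app/config.ru')
  app
end

def handler(event:, context:)
  # Translate the API Gateway proxy event into a minimal Rack env
  env = {
    'REQUEST_METHOD'  => event['httpMethod'],
    'PATH_INFO'       => event['path'] || '/',
    'QUERY_STRING'    => Rack::Utils.build_query(event['queryStringParameters'] || {}),
    'SERVER_NAME'     => 'localhost',
    'SERVER_PORT'     => '443',
    'rack.url_scheme' => 'https',
    'rack.input'      => StringIO.new(event['body'] || ''),
    'rack.errors'     => $stderr
  }

  status, headers, body = $app.call(env)

  # Flatten the Rack body into a single string for API Gateway
  response_body = +''
  body.each { |part| response_body << part }
  body.close if body.respond_to?(:close)

  { statusCode: status, headers: headers, body: response_body }
end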

For this post, take an existing Sinatra application and make it function in Lambda.

  1. Clone the serverless-sinatra-sample GitHub repository into your local environment.
    Under the app directory, find the Sinatra application files. The files enable you to specify routes to return either JSON or HTML that is generated from ERB templates in the server.rb file.

    ├── app
    │   ├── config.ru
    │   ├── server.rb
    │   └── views
    │       ├── feedback.erb
    │       ├── index.erb
    │       └── layout.erb

    In the root of the directory, you also find the template.yaml and lambda.rb files. The template.yaml includes four resources:

    • Serverless::Function
    • Serverless::API
    • Serverless::SimpleTable
    • Lambda::Permission

    In the lambda.rb file, you find the main handler for this function, which calls Rack to interface with the Sinatra application.

  2. This application has several dependencies. Use bundle to install them locally, and then vendor them into the project for packaging:
    bundle install
    bundle install --deployment
  3. Package this Lambda function and the related application components using the AWS SAM CLI:
    sam package --template-file template.yaml \
    --output-template-file packaged-template.yaml \
    --s3-bucket <bucketname>
  4. Next, deploy the application:
    sam deploy --template-file packaged-template.yaml \
    --stack-name LambdaSinatra \
    --capabilities CAPABILITY_IAM                                                                                                        
    
    Waiting for changeset to be created..
    Waiting for stack create/update to complete
    Successfully created/updated stack - LambdaSinatra
  5. In the Lambda Serverless Applications console, select your application:

  6. Choose Overview. Under Resources, find the ApiGateway RestApi entry and select the Logical ID. Below it is SinatraAPI:

  7. In the API Gateway console, in the left navigation pane, choose Dashboard. Copy the URL from Invoke this API and paste it in another browser tab:

  8. Append to the URL a route from the Sinatra application, as defined in server.rb.

For example, this is the hello-world GET route:

And this is the /feedback route:

Congratulations, you’ve just successfully deployed a Sinatra-based Ruby application inside of a Lambda function!

Conclusion

As you’ve seen in this post, getting started with Ruby on Lambda is made easy via either the AWS Management Console or the AWS SAM CLI.

You might even be able to easily port existing applications to Lambda without needing to change your code base. The new support for Ruby allows you to benefit from the greatly reduced operational overhead, scalability, availability, and pay-per-use pricing of Lambda.

If you are excited about this feature as well, there is even more information on writing Lambda functions in Ruby in the AWS Lambda Developer Guide.

Happy coding!

Python 3.7 runtime now available in AWS Lambda

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/python-3-7-runtime-now-available-in-aws-lambda/

This post is courtesy of Shivansh Singh, Partner Solutions Architect – AWS

We are excited to announce that you can now develop your AWS Lambda functions using the Python 3.7 runtime. Start using this new version today by specifying a runtime parameter value of “python3.7” when creating or updating functions. AWS continues to support creating new Lambda functions on Python 3.6 and Python 2.7.
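
If you manage your functions programmatically, switching runtimes is a one-line configuration change. The following is a minimal sketch using the AWS SDK for Python (Boto3); the function name my-existing-function is a placeholder:

import boto3

# Minimal sketch: point an existing function at the new runtime.
# "my-existing-function" is a placeholder; substitute your own function name.
client = boto3.client("lambda")

response = client.update_function_configuration(
    FunctionName="my-existing-function",
    Runtime="python3.7",
)
print(response["Runtime"])  # python3.7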

Here’s a quick primer on some of the major new features in Python 3.7:

  • Data classes
  • Customization of access to module attributes
  • Typing enhancements
  • Time functions with nanosecond resolution

Data classes

In object-oriented programming, if you have to create a class, it looks like the following in Python 3.6:

class Employee:
    def __init__(self, name: str, dept: int) -> None:
        self.name = name
        self.dept = dept
        
    def is_fulltime(self) -> bool:
        """Return True if employee is a Full-time employee, else False"""
        return self.dept > 25

The __init__ method receives multiple arguments to initialize a class instance. These arguments are set as instance attributes.

With Python 3.7, you have dataclasses, which make class declarations easier and more readable. You can use the @dataclass decorator on the class declaration and self-assignment is taken care of automatically. It generates __init__, __repr__, __eq__, __hash__, and other special methods. In Python 3.7, the Employee class defined earlier looks like the following:

from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    dept: int

    def is_fulltime(self) -> bool:
        """Return True if employee is a Full-time employee, else False"""
        return self.dept > 25
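
To illustrate the generated methods, here is a short usage sketch; the values are arbitrary:

emp = Employee(name="Jane", dept=30)

print(emp)                          # Employee(name='Jane', dept=30), via the generated __repr__
print(emp == Employee("Jane", 30))  # True, via the generated __eq__
print(emp.is_fulltime())            # True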

Customization of access to module attributes

Attributes are widely used in Python. Most commonly, they are used in classes. However, attributes can be put on functions and modules as well. Attributes are retrieved using the dot notation: something.attribute. You can also get attributes that are named at runtime using the getattr() function.

For classes, something.attr first looks for attr defined on something. If it’s not found, then the special method something.__getattr__(“attr”) is called. The __getattr__() method can be used to customize access to attributes on objects. Until Python 3.7, this customization was not easily available for module attributes. But PEP 562 provides __getattr__() on modules, along with a corresponding __dir__() function.
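
For example, a module can now intercept lookups of attributes that it doesn’t define, which is handy for deprecation shims or lazy loading. A minimal sketch, where the module and attribute names are hypothetical:

# mymodule.py (hypothetical module)
import warnings

def _new_api():
    return "result"

def __getattr__(name):
    # Called only when normal attribute lookup on the module fails (PEP 562)
    if name == "old_api":
        warnings.warn("old_api is deprecated; use _new_api instead", DeprecationWarning)
        return _new_api
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

Importing mymodule and calling mymodule.old_api() then emits the deprecation warning before returning the new implementation.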

Typing enhancements

Type annotations are commonly used for code hints. However, there were two common issues with using type hints extensively in the code:

  • Annotations could only use names that were already available in the current scope. In other words, they didn’t support forward references.
  • Annotating source code had adverse effects on the startup time of Python programs.

Both of these issues are fixed in Python 3.7, by postponing the evaluation of annotations. Instead of compiling code that executes expressions in annotations at their definition time, the compiler stores the annotation in a string form equivalent to the AST of the expression in question.

For example, the following code fails, as spouse cannot be defined as type Employee, given that Employee is not defined yet.

class Employee:
    def __init__(self, name: str, spouse: Employee) -> None:
        pass

In Python 3.7, the evaluation of annotations is postponed. The annotation gets stored as a string and is optionally evaluated as needed. You do need to import annotations from __future__, which breaks the backward compatibility of the code with Python versions that don’t support that import.

from __future__ import annotations

class Employee:
    def __init__(self, name: str, spouse: Employee) -> None:
        pass
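
You can confirm that the annotation is now stored as a string rather than evaluated by inspecting the class defined above:

print(Employee.__init__.__annotations__)
# {'name': 'str', 'spouse': 'Employee', 'return': 'None'}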

Time functions with nanosecond resolution

The time module gets some new functions in Python 3.7. The resolution of system clocks can exceed the limited precision of the floating point number returned by the time.time() function and its variants. The following new functions are being added:

  • clock_gettime_ns()
  • clock_settime_ns()
  • monotonic_ns()
  • perf_counter_ns()
  • process_time_ns()
  • time_ns()

These functions are similar to the existing functions without the _ns suffix. The difference is that they return the number of nanoseconds as an int instead of the number of seconds as a float.
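
A quick sketch of the difference; the printed values are illustrative:

import time

print(time.time())     # e.g. 1538658590.3164552 (float seconds)
print(time.time_ns())  # e.g. 1538658590316455168 (int nanoseconds)

# The _ns variants avoid the rounding that a 64-bit float introduces
# as the number of seconds since the epoch grows large.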

For more information, see the AWS Lambda Developer Guide.

Hope you enjoy… go build with Python 3.7!

Amazon API Gateway adds support for AWS WAF

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/amazon-api-gateway-adds-support-for-aws-waf/

This post courtesy of Heitor Lessa, AWS Specialist Solutions Architect – Serverless

Today, I’m excited to tell you about the Amazon API Gateway native integration with AWS WAF. Previously, if you wanted to secure your API in Amazon API Gateway with AWS WAF, you had to deploy a Regional API endpoint and use your own Amazon CloudFront distribution. This new feature enables you to provision any API Gateway endpoint and secure it with AWS WAF without having to configure your own CloudFront distribution to add that capability.

In Part 1 of this series, I described how to protect your API provided by API Gateway using AWS WAF.

In Part 2 of this series, I described how to use API keys as a shared secret between a CloudFront distribution and API Gateway to secure public access to your API in API Gateway. This new AWS WAF integration means that the method described in Part 2 is no longer necessary.

The following image describes methods to secure your API in API Gateway before and after this feature was made available.

Where:

  1. AWS WAF securing CloudFront endpoint only.
  2. AWS WAF securing both CloudFront and API Gateway endpoints natively.
  3. AWS WAF securing Amazon API Gateway endpoints natively where global presence is not needed.

Enabling AWS WAF for an API managed by Amazon API Gateway

For this walkthrough, you can use an existing Pet Store API or any API in API Gateway that you may already have deployed. You create a new AWS WAF web ACL that is later associated with your API Gateway stage.

Follow these steps to create a web ACL:

  1. Open the AWS WAF console.
  2. Choose Create web ACL.
  3. For Web ACL Name, enter ApiGateway-HTTP-Flood-Sample.
  4. For Region, choose US East (N. Virginia).
  5. Choose Next until you reach Step 3: Create rules.
  6. Choose Create rule and enter HTTP Flood Sample.
  7. For Rule type, choose Rate-based rule.
  8. For Rate limit, enter 2000 and choose Create.
  9. For Default action, choose Allow all requests that don’t match any rules.
  10. Choose Review and create.
  11. Confirm that your options look similar to the following image and choose Confirm and create.

You can now follow the steps to enable the AWS WAF web ACL for an existing API in API Gateway:

  1. Open the Amazon API Gateway console.
  2. Choose Stages, prod.
  3. Under Web Application Firewall (WAF), choose ApiGateway-HTTP-Flood-Sample (or the web ACL that you just created).
  4. Choose Save Changes.
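
If you prefer to script this association, it can also be done through the AWS SDK. The following is a minimal sketch using the AWS SDK for Python (Boto3); the web ACL ID, the API ID in the ARN, and the Region are placeholders:

import boto3

# Minimal sketch: associate an existing regional web ACL with an API Gateway stage.
# The web ACL ID and the API ID in the ARN below are placeholders.
waf = boto3.client("waf-regional", region_name="us-east-1")

waf.associate_web_acl(
    WebACLId="example-web-acl-id",
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/a1b2c3d4e5/stages/prod",
)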

Testing your API in API Gateway now secured by AWS WAF

AWS WAF provides HTTP flood protection through rate-based rules. A rate-based rule is automatically triggered when web requests from a client exceed a configurable threshold. The threshold is defined by the maximum number of incoming requests allowed from a single IP address within a five-minute period.

After this threshold is breached, additional requests from the IP address are blocked until the request rate falls below the threshold. For this example, you defined 2000 requests as a threshold for the HTTP flood rate–based rule.

Artillery, a modern open source load testing toolkit, is used to send a large number of requests directly to the API Gateway Invoke URL to test whether your AWS WAF native integration is working correctly.

Firstly, follow these steps to retrieve the correct Invoke URL of your Pet Store API:

  1. Open the API Gateway console.
  2. In the left navigation pane, open the PetStore API.
  3. Choose Stages, select prod, and copy the Invoke URL value.

Secondly, use cURL to query your API and see the output before the rate limit rule is triggered:

$ curl -s INVOKE_URL/pets

[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
] 

Then, use Artillery to send a large number of requests in a short period of time to trigger your rate limit rule:

$ artillery quick -n 2000 --count 10 INVOKE_URL/pets

With this command, Artillery sends 2000 requests to your PetStore API from 10 concurrent users. By doing so, you trigger the rate limit rule well within the 5-minute window. For brevity, I am not posting the Artillery output here.

After Artillery finishes its execution, try re-running the cURL command. You should no longer see a list of pets:

{"message":"Forbidden"}

As you can see from the output, the request was blocked by AWS WAF. Your IP address is removed from the blocked list after the request rate falls below the threshold.

Conclusion

As you can see, with the AWS WAF native integration with Amazon API Gateway, you no longer have to manage your own Amazon CloudFront distribution in order to secure your API with AWS WAF. The AWS WAF native integration makes this process seamless.

I hope that you found the information in this post helpful. Remember that you can use this integration today with all Amazon API Gateway endpoints (Edge, Regional, and Private). It is available in the following Regions:

  • US East (N. Virginia)
  • US East (Ohio)
  • US West (Oregon)
  • US West (N. California)
  • EU (Ireland)
  • EU (Frankfurt)
  • Asia Pacific (Sydney)
  • Asia Pacific (Tokyo)

Windows @ AWS re:Invent 2018

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/windows-aws-reinvent-2018/

This post is courtesy of Rodney Bozo, Senior Solutions Architect – Microsoft Technologies – AWS

Windows has been a first-class citizen at AWS for over a decade. More enterprises run Windows workloads on AWS today than on any other cloud. According to IDC, it’s over 57%, 2X more than the next provider. Over this period, we’ve worked with customers across the globe and taken their feedback to build the solutions that best support their Microsoft workloads.

Since 2008, the Microsoft ecosystem on AWS has grown to much more than just running virtual machines. We have solutions for SQL, Active Directory, .NET developers and more, as well as options to bring your licenses to extend the value of your existing investments.

Over the course of the week at AWS re:Invent, we are offering over 75 sessions covering Microsoft technologies in AWS, with a combination of breakout sessions, workshops, chalk talks, and builder sessions.

Find the entire list of Windows and .NET sessions on the session catalog. Here are some you should try not to miss:

Leadership and Management

Windows and Active Directory

SQL

.NET

Looking to get hands-on with Microsoft?

Still looking for more?

We have an extensive list of curated content on the AWS for Microsoft Workloads Self-Study Guide, including case studies, whitepapers, previous re:Invent presentations, reference architectures, and how-to instructional videos. Check it out!

Support for multi-value parameters in Amazon API Gateway

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/support-for-multi-value-parameters-in-amazon-api-gateway/

This post is courtesy of Akash Jain, Partner Solutions Architect – AWS

The new multi-value parameter support feature for Amazon API Gateway allows you to pass multiple values for the same key in the header and query string as part of your API request. It also allows you to pass multi-value headers in the API response to implement things like sending multiple Set-Cookie headers.

As part of this feature, AWS added two new keys:

  • multiValueQueryStringParameters—Used in the API Gateway request to support a multi-valued parameter in the query string.
  • multiValueHeaders—Used in both the request and response to support multi-valued headers.

In this post, I walk you through examples for using this feature by extending the PetStore API. You add pet search functionality to the PetStore API and then add personalization by setting the language and UI theme cookies.

The following AWS CloudFormation stack creates all resources for both examples. The stack:

  • Creates the extended PetStore API represented using the OpenAPI 3.0 standard.
  • Creates three AWS Lambda functions. One each for implementing the pet search, get user profile, and set user profile functionality.
  • Creates two IAM roles. The Lambda function assumes one and API Gateway assumes the other to call the Lambda function.
  • Deploys the PetStore API to Staging.

Add pet search functionality to the PetStore API

As part of adding the search feature, I demonstrate how you can use multiValueQueryStringParameters for sending and retrieving multi-valued parameters in a query string.

Use the PetStore example API available in the API Gateway console and create a /search resource type under the /pets resource type with GET (read) access. Then, configure the GET method to use AWS Lambda proxy integration.

The CloudFormation stack launched earlier gives you the PetStore API staging endpoint as an output that you can use for testing the search functionality. Assume that the user has an interface to enter the pet types for searching and wants to search for “dog” and “fish.” The pet search API request looks like the following where the petType parameter is multi-valued:

https://xxxx.execute-api.us-east-1.amazonaws.com/staging/pets/search?petType=dog&petType=fish

When you invoke the pet search API action, you get a successful response with both the dog and fish details:

 [
  {
    "id": 11212,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 31231
    "type": "fish",
    "price": 0.99
  }
]

Processing multi-valued query string parameters

Here’s how the multi-valued parameter petType with the value “petType=dog&petType=fish” gets processed by API Gateway. To demonstrate, here’s the input event that API Gateway sends to the Lambda function, as captured in the logs. Because the input is long, a few keys have been removed for brevity.

{ resource: '/pets/search',
path: '/pets/search',
httpMethod: 'GET',
headers: 
{ Accept: 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US,en;q=0.9',
Host: 'xyz.execute-api.us-east-1.amazonaws.com',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
Via: '1.1 382909590d138901660243559bc5e346.cloudfront.net (CloudFront)',
'X-Amz-Cf-Id': 'motXi0bgd4RyV--wvyJnKpJhLdgp9YEo7_9NeS4L6cbgHkWkbn0KuQ==',
'X-Amzn-Trace-Id': 'Root=1-5bab7b8b-f1333fbc610288d200cd6224',
'X-Forwarded-Proto': 'https' },
queryStringParameters: { petType: 'fish' },
multiValueQueryStringParameters: { petType: [ 'dog', 'fish' ] },
pathParameters: null,
stageVariables: null,
requestContext: 
{ resourceId: 'jy2rzf',
resourcePath: '/pets/search',
httpMethod: 'GET',
extendedRequestId: 'N1A9yGxUoAMFWMA=',
requestTime: '26/Sep/2018:12:28:59 +0000',
path: '/staging/pets/search',
protocol: 'HTTP/1.1',
stage: 'staging',
requestTimeEpoch: 1537964939459,
requestId: 'be70816e-c187-11e8-9d99-eb43dd4b0381',
apiId: 'xxxx' },
body: null,
isBase64Encoded: false }

There is a new key, multiValueQueryStringParameters, available in the input event. This key is added as part of the multi-value parameter feature to retain multiple values for the same parameter in the query string.

Before this change, API Gateway used to retain only the last value and drop everything else for a multi-valued parameter. You can see the original behavior in the queryStringParameters parameter in the above input, where only the “fish” value is retained.

Accessing the new multiValueQueryStringParameters key in a Lambda function

Use the new multiValueQueryStringParameters key available in the event context of the Lambda function to retrieve the multi-valued query string parameter petType that you passed in the query string of the search API request. You can use that value for searching pets. Retrieve the parameter values from the event context by parsing event.multiValueQueryStringParameters.petType.

exports.handler = (event, context, callback) => {

    // Log the input event
    console.log(event);

    // Extract the multi-valued parameter from the input
    var petTypes = event.multiValueQueryStringParameters.petType;

    // Call the pet search functionality (implemented elsewhere)
    var searchResults = searchPets(petTypes);

    const response = {
        statusCode: 200,
        // A proxy integration expects the body to be a string
        body: JSON.stringify(searchResults)
    };
    callback(null, response);
};

The multiValueQueryStringParameters key is present in the input request regardless of whether the request contains keys with multiple values. You don’t have to change your APIs to enable this feature, unless you are using a key of the same name as multiValueQueryStringParameters.

Add personalization

Here’s how the newly added multiValueHeaders key is useful for sending cookies with multiple Set-Cookie headers in an API response.

Personalize the Pet Store web application by setting a user-specific theme and language settings. Usually, web developers use cookies to store this kind of information. To accomplish this, send cookies that store theme and language information as part of the API response. The browser then stores them and the web application uses them to get the required information.

To set the cookies, you need the new API resources /users and /profiles, with read and write access in the PetStore API. Then, configure the GET and POST methods of the /profile resource to use AWS Lambda proxy integration. The CloudFormation stack that you launched earlier has already created the necessary resources.

Invoke the POST profile API call at the following URL, passing the user preferences for language and theme in the request body. The API returns the Set-Cookie headers:

https://XXXXXX.execute-api.us-east-1.amazonaws.com/staging/users/profile

The request body looks like the following:

{
    "userid" : 123456456,
    "preferences" : {"language":"en-US", "theme":"blue moon"}
}

You get a successful response with the “200 OK” status code and two Set-Cookie headers for setting the language and theme cookies.

Passing multiple Set-Cookie headers in the response

The following code example is the setUserProfile Lambda function code that processes the input request and sends the language and theme cookies:

exports.handler = (event, context, callback) => {

    // Get the request body
    var requestBody = JSON.parse(event.body);

    // Retrieve the language and theme values
    var language = requestBody.preferences.language;
    var theme = requestBody.preferences.theme;

    const response = {
        // The body is plain text, not base64-encoded
        isBase64Encoded: false,
        statusCode: 200,
        multiValueHeaders : {"Set-Cookie": [`language=${language}`, `theme=${theme}`]},
        body: JSON.stringify('User profile set successfully')
    };
    callback(null, response);
};

You can see that the newly added multiValueHeaders key passes multiple cookies as a list in the response. The multiValueHeaders key is translated to multiple Set-Cookie headers by API Gateway and appears to the API client as the following:

Set-Cookie: language=en-US
Set-Cookie: theme=blue moon

You can also pass the header key along with the multiValueHeaders key. In that case, API Gateway merges the multiValueHeaders and headers maps into a single Map<String, List<String>> value while processing the integration response. If the same key-value pair is sent in both, it isn’t duplicated.

Retrieving the headers from the request

If you use the Postman tool (or something similar) to invoke the GET profile API call, you can send cookies as part of the request. The theme and language cookie that you set above in the POST /profile API request can now be sent as part of the GET /profile request.

https://xxx.execute-api.us-east-1.amazonaws.com/staging/users/profile

You get the following response with the details of the user preferences:

{
    "userId": 12343,
    "preferences": {
        "language": "en-US",
        "theme": "beige"
    }
}

When you log the input event of the getUserProfile Lambda function, you can see both newly added keys, multiValueQueryStringParameters and multiValueHeaders. These keys are present in the event context of the Lambda function regardless of whether there is a value. You can retrieve the cookie values by parsing event.multiValueHeaders.Cookie.

{ resource: '/users/profile',
path: '/users/profile',
httpMethod: 'GET',
headers: 
{ Accept: '*/*',
'Accept-Encoding': 'gzip, deflate',
'cache-control': 'no-cache',
'CloudFront-Forwarded-Proto': 'https',
'CloudFront-Is-Desktop-Viewer': 'true',
'CloudFront-Is-Mobile-Viewer': 'false',
'CloudFront-Is-SmartTV-Viewer': 'false',
'CloudFront-Is-Tablet-Viewer': 'false',
'CloudFront-Viewer-Country': 'IN',
Cookie: 'language=en-US; theme=blue moon',
Host: 'xxxx.execute-api.us-east-1.amazonaws.com',
'Postman-Token': 'acd5b6b3-df97-44a3-8ea8-2d929efadd96',
'User-Agent': 'PostmanRuntime/7.3.0',
Via: '1.1 6bf9df28058e9b2a0034a51c5f555669.cloudfront.net (CloudFront)',
'X-Amz-Cf-Id': 'VRePX_ktEpyTYh4NGyk90D4lMUEL-LBYWNpwZEMoIOS-9KN6zljA7w==',
'X-Amzn-Trace-Id': 'Root=1-5bb6111e-e67c17ea657ed03d5dddf869',
'X-Forwarded-For': 'xx.xx.xx.xx, yy.yy.yy.yy',
'X-Forwarded-Port': '443',
'X-Forwarded-Proto': 'https' },
multiValueHeaders: 
{ Accept: [ '*/*' ],
'Accept-Encoding': [ 'gzip, deflate' ],
'cache-control': [ 'no-cache' ],
'CloudFront-Forwarded-Proto': [ 'https' ],
'CloudFront-Is-Desktop-Viewer': [ 'true' ],
'CloudFront-Is-Mobile-Viewer': [ 'false' ],
'CloudFront-Is-SmartTV-Viewer': [ 'false' ],
'CloudFront-Is-Tablet-Viewer': [ 'false' ],
'CloudFront-Viewer-Country': [ 'IN' ],
Cookie: [ 'language=en-US; theme=blue moon' ],
Host: [ 'xxxx.execute-api.us-east-1.amazonaws.com' ],
'Postman-Token': [ 'acd5b6b3-df97-44a3-8ea8-2d929efadd96' ],
'User-Agent': [ 'PostmanRuntime/7.3.0' ],
Via: [ '1.1 6bf9df28058e9b2a0034a51c5f555669.cloudfront.net (CloudFront)' ],
'X-Amz-Cf-Id': [ 'VRePX_ktEpyTYh4NGyk90D4lMUEL-LBYWNpwZEMoIOS-9KN6zljA7w==' ],
'X-Amzn-Trace-Id': [ 'Root=1-5bb6111e-e67c17ea657ed03d5dddf869' ],
'X-Forwarded-For': [ 'xx.xx.xx.xx, yy.yy.yy.yy' ],
'X-Forwarded-Port': [ '443' ],
'X-Forwarded-Proto': [ 'https' ] },
queryStringParameters: null,
multiValueQueryStringParameters: null,
pathParameters: null,
stageVariables: null,
requestContext: 
{ resourceId: '90cr24',
resourcePath: '/users/profile',
httpMethod: 'GET',
extendedRequestId: 'OPecvEtWoAMFpzg=',
requestTime: '04/Oct/2018:13:09:50 +0000',
path: '/staging/users/profile',
accountId: 'xxxxxx',
protocol: 'HTTP/1.1',
stage: 'staging',
requestTimeEpoch: 1538658590316,
requestId: 'c691746f-c7d6-11e8-83b8-176659b7d74d',
identity: 
{ cognitoIdentityPoolId: null,
accountId: null,
cognitoIdentityId: null,
caller: null,
sourceIp: 'xx.xx.xx.xx',
accessKey: null,
cognitoAuthenticationType: null,
cognitoAuthenticationProvider: null,
userArn: null,
userAgent: 'PostmanRuntime/7.3.0',
user: null },
apiId: 'xxxx' },
body: null,
isBase64Encoded: false }

HTTP proxy integration

For the API Gateway HTTP proxy integration, the headers and query string are proxied to the downstream HTTP call in the same way that they were sent by the client. The newly added keys aren’t present in the request or response. For example, the petType parameter from the search API goes as a list to the HTTP endpoint.

Similarly, for setting multiple Set-Cookie headers, you can set them the way that you would usually. For Node.js, it looks like the following:

res.setHeader("Set-Cookie", ["theme=beige", "language=en-US"]);

Mapping requests and responses for new keys

For mapping requests, you can access new keys by parsing the method request like method.request.multivaluequerystring.<KeyName> and method.request.multivalueheader.<HeaderName>. For mapping responses, you would parse the integration response like integration.response.multivalueheaders.<HeaderName>.

For example, the following is the search pet API request example:

curl 'https://{hostname}/pets/search?petType=dog&petType=fish' \
-H 'cookie: language=en-US' \
-H 'cookie: theme=beige'

The new request mapping looks like the following:

"requestParameters" : {
    "integration.request.querystring.petType" : "method.request.multivaluequerystring.petType",
    "integration.request.header.cookie" : "method.request.multivalueheader.cookie"
}

The new response mapping looks like the following:

"responseParameters" : { 
 "method.response.header.Set-Cookie" : "integration.response.multivalueheaders.Set-Cookie", 
 ... 
}

For more information, see Set up Lambda Proxy Integrations in API Gateway.

Cleanup

To avoid incurring ongoing charges for the resources that you’ve used, delete the CloudFormation stack that you created earlier.

Conclusion

Multi-value parameter support enables you to pass multi-valued parameters in the query string and multi-valued headers as part of your API response. Both the multiValueQueryStringParameters and multiValueHeaders keys are present in the input request to Lambda function regardless of whether there is a value. These keys are also accessible for mapping requests and responses.

Overriding request/response parameters and response status in Amazon API Gateway

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/overriding-request-response-parameters-and-response-status-in-amazon-api-gateway/

This post is courtesy of Akash Jain, Partner Solutions Architect – AWS

APIs are driving modern architectures. Many customers are moving their traditional architecture to service-oriented or microservices-based architectures.

As part of this transition, you may face a few complex situations. For example, a backend API returns an HTTP status code of 2XX that must be changed to 4XX because of a new API architecture standard in the organization. Or you may want to override the headers or query string of the API to differentiate the client request without doing any API code changes.

Developers are looking for easier ways to handle such situations. With the new override feature of Amazon API Gateway, you can set the required status and parameter mappings and handle these kinds of situations.

As part of the override feature, AWS introduced new context variables for overriding both request and response parameters.

Request body mapping template:

  • $context.requestOverride.header.<header_name>
  • $context.requestOverride.path.<path_name>
  • $context.requestOverride.querystring.<querystring_name>

Response body mapping template:

  • $context.responseOverride.header.<header_name>
  • $context.responseOverride.status

Using these context variables, you can:

  • Create a new header (or overwrite an existing header) as a concatenation of two parameters.
  • Override the response code to a success or failure code based on the contents of the body.
  • Conditionally remap a parameter based on its contents or the contents of some other parameter.
  • Iterate over the contents of a JSON body and remap key-value pairs to headers or query strings.

In this post, I demonstrate an example for overriding the status code without modifying the actual API code. Use the PetStore API, which is available as a sample API under Amazon API Gateway.

Override responses

Invoke the GET method on the /pets/{petId} resource by passing -1 as the petId value. You get the following response.

You can see that the status code is 200 and the error message is “The value is out of range”. You may challenge this response by asking why the status code is not “400 – Bad Request”, as the petId parameter value passed by the client is incorrect. Both approaches have merit. For this post, use “400 – Bad request” as 400 is the right code for a client error.

Your requirement is to implement this change without modifying the API code because of the risks involved or because there is no control over the legacy API. To implement this scenario, use the newly introduced override status feature with mapping templates.

To access mapping templates, under the GET request for the /pets/{petID} resource, choose Integration response.

Under Body Mapping templates, for Content-Type, enter application/json. Choose Add mapping template.

Paste the following code in the template area and choose Save.

#set($inputRoot = $input.path('$'))
$input.json("$")
#if($inputRoot.toString().contains("error"))
    #set($context.responseOverride.status = 400)
#end

When you run the test again by passing -1 for petId, you get the following response.

The status code changed from 200 to 400 without modifying the actual API.

In this PetStore API example, you use the $context.responseOverride.status variable to override the response status when you see an error key in the body content. The following code snippet searches for the error string in the response body and does the status overriding.

#if($inputRoot.toString().contains("error"))
    #set($context.responseOverride.status = 400)
#end

Override request headers, paths, and query strings

You can override the response status using the new response status override feature. Similarly, you can override request headers, paths, and query strings by using the $context.requestOverride variable.

Consider a situation where you have launched a new version of your API that is highly performant and much improved. However, your existing API clients are not willing to transition because of the changes that would be involved to their existing systems. You want to maintain a single API version for low maintenance, performance, and agility. What choices do you have?

The override request feature can come to the rescue. For example, you have an older version of an API endpoint that looks like the following, with “v1” in the path and a productId parameter in the query string with value=1234.

https://xxxx.execute-api.us-east-1.amazonaws.com/prod/v1/products?productId=1234

The new version of the endpoint looks like the following, with “v2” in the path and a productId parameter in the query string that requires the client to send value=4567. It also accepts a header to check if the request is from a client using the old API. To demonstrate this feature, create a request path parameter appversion that can accept any string as the version path.

https://xxxx.execute-api.us-east-1.amazonaws.com/prod/v2/pets?productId=4567

Using the new $context.requestOverride variable in the body mapping template, you can now override the requested path value from “v1” to “v2” and the query string parameter value from “1234” to “4567”. You can also add a new header to pass a flag for requests from an old API version. For example, see the following mapping template.

#set($version = "v2")
#set($newProductKey = "4567")
#set($customerFlag = "1")
## Perform normal body mapping. In this case, just pass the body mapping through
$input.json("$")
## Override the path parameter
#set($context.requestOverride.path.appversion = $version)
## Override the query string parameter
#set($context.requestOverride.querystring.productId = $newProductKey)
## Add a new header parameter
#set($context.requestOverride.header.flag = $customerFlag)

In the above mapping template, first set the values of the required path, query string, and header for your new backend API. Then, use $input.json("$") to do the normal body mapping. Finally, override the API path, query string, and header with the values set earlier.

When you run your API with the updated body template, the API Gateway logs show that overrides have been successfully applied.

Overrides are final. An override may only be applied to each parameter one time. Trying to override the same parameter multiple times results in 5XX responses from API Gateway.

If you must override the same parameter multiple times to build it throughout the template, I recommend creating a variable and applying the override at the end of the template. For more information, see Use a Mapping Template to Override an API’s Request and Response Parameters and Status Codes.

Conclusion

In this post, I showed you how to change the response status code of an API with the new override feature of API Gateway without modifying the API itself. You also saw how to transform requests coming to your new API from existing or legacy clients, without asking your customer to do any code changes.

The override feature enables you to have more control over the request and response parameters and status. You can now integrate legacy APIs to API Gateway, even when you want a different status from the API or to pass a different set of parameters based on the client call.

ICYMI: Serverless Q3 2018

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/icymi-serverless-q3-2018/

Welcome to the third edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

If you didn’t see them, catch our Q1 ICYMI and Q2 ICYMI posts for what happened then.

So, what might you have missed this past quarter? Here’s the recap.

AWS Amplify CLI

In August, AWS Amplify launched the AWS Amplify Command Line Interface (CLI) toolchain for developers.

The AWS Amplify CLI enables developers to build, test, and deploy full web and mobile applications based on AWS Amplify directly from their CLI. It has built-in helpers for configuring AWS services such as Amazon Cognito for auth, Amazon S3 and Amazon DynamoDB for storage, and Amazon API Gateway for APIs. With these helpers, developers can configure AWS services to interact with applications built in popular web frameworks such as React.

Get started with the AWS Amplify CLI toolchain.

New features

Rejoice Microsoft application developers: AWS Lambda now supports .NET Core 2.1 and PowerShell Core!

AWS SAM had a few major enhancements to help in both testing and debugging functions. The team launched support to locally emulate an endpoint for Lambda so that you can run automated tests against your functions. This differs from the existing functionality that emulated a proxy similar to API Gateway in front of your function. Combined with the improved support for sam local generate-event, which can generate over 50 different payloads, you can now test Lambda function code that would be invoked by almost all of the various services that interface with Lambda today. On the operational front, AWS SAM can now fetch, tail, and filter logs generated by your functions running live on AWS. Finally, with integration with Delve, a debugger for the Go programming language, you can more easily debug your applications locally.

If you’re part of an organization that uses AWS Service Catalog, you can now launch applications based on AWS SAM, too.

The AWS Serverless Application Repository launched new search improvements to make it even faster to find serverless applications that you can deploy.

In July, AWS AppSync added HTTP resolvers so that now you can query your REST APIs via GraphQL! API Inception! AWS AppSync also added new built-in scalar types to help with data validation at the GraphQL layer instead of having to do this in code that you write yourself. For building your GraphQL-based applications on AWS AppSync, an enhanced no-code GraphQL API builder enables you to model your data, and the service generates your GraphQL schema, Amazon DynamoDB tables, and resolvers for your backend. The team also published a Quick Start for using Amazon Aurora as a data source via a Lambda function. Finally, the service is now available in the Asia Pacific (Seoul) Region.

Amazon API Gateway announced support for AWS X-Ray!

With X-Ray integrated in API Gateway, you can trace and profile application workflows starting at the API layer and going through the backend. You can control the sample rates at a granular level.

API Gateway also announced improvements to usage plans that allow for method level throttling, request/response parameter and status overrides, and higher limits for the number of APIs per account for regional, private, and edge APIs. Finally, the team added support for the OpenAPI 3.0 API specification, the next generation of OpenAPI 2, formerly known as Swagger.

AWS Step Functions is now available in the Asia Pacific (Mumbai) Region. You can also build workflows visually with Step Functions and trigger them directly with AWS IoT Rules.

AWS Lambda@Edge now makes the HTTP request body for POST and PUT requests available.

AWS CloudFormation announced Macros, a feature that enables customers to extend the functionality of AWS CloudFormation templates by calling out to transformations that Lambda powers. Macros are the same technology that enables SAM to exist.

Serverless posts

July:

August:

September:

Tech Talks

We hold several Serverless tech talks throughout the year, so look out for them in the Serverless section of the AWS Online Tech Talks page. Here are the three tech talks that we delivered in Q3:

Twitch

We’ve been busy streaming deeply technical content to you the past few months! Check out awesome sessions like this one by AWS’s Heitor Lessa and Jason Barto diving deep into Continuous Learning for ML and the entire “Build on Serverless” playlist.

For information about upcoming broadcasts and recent live streams, keep an eye on AWS on Twitch for more Serverless videos and on the Join us on Twitch AWS page.

For AWS Partners

In September, we announced the AWS Serverless Navigate program for AWS APN Partners. Via this program, APN Partners can gain a deeper understanding of the AWS Serverless Platform, including many of the services mentioned in this post. The program’s phases help partners learn best practices such as the Well-Architected Framework, business and technical concepts, and growing their business’s ability to better support AWS customers in their serverless projects.

Check out more at AWS Serverless Navigate.

In other news

AWS re:Invent 2018 is coming in just a few weeks! From November 26–30 in Las Vegas, Nevada, join tens of thousands of AWS customers to learn, share ideas, and see exciting keynote announcements. The agenda for Serverless talks contains over 100 sessions where you can hear about serverless applications and technologies from fellow AWS customers, AWS product teams, solutions architects, evangelists, and more.

Register for AWS re:Invent now!

Want to get a sneak peek into what you can expect at re:Invent this year? Check out the awesome re:Invent Guides put out by AWS Community Heroes. AWS Community Hero Eric Hammond (@esh on Twitter) published one for advanced serverless attendees that you will want to read before the big event.

What did we do at AWS re:Invent 2017? Check out our recap: Serverless @ re:Invent 2017.

Still looking for more?

The Serverless landing page has lots of information. The resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials. Check it out!

Investigating spikes in AWS Lambda function concurrency

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/investigating-spikes-in-aws-lambda-function-concurrency/

This post is courtesy of Ian Carlson, Principal Solutions Architect – AWS

As mentioned in an earlier post, a key benefit of serverless applications is the ease with which they can scale to meet traffic demands or requests. AWS Lambda is at the core of this platform.

Although this flexibility is hugely beneficial for our customers, sometimes an errant bit of code or upstream scaling can lead to spikes in concurrency. Unwanted usage can increase costs, pressure downstream systems, and throttle other functions in the account. Administrators new to serverless technologies need to leverage different metrics to manage their environment. In this blog, I walk through a sample scenario where I’m getting API errors. I trace those errors up through Amazon CloudWatch logs and leverage CloudWatch Metrics to understand what is happening in my environment. Finally, I show you how to set up alerts to reduce throttling surprises.

In my environment, I have a few Lambda functions configured. The first function is from Chris Munns’ concurrency blog, called concurrencyblog. I set up that function to execute behind an API hosted on Amazon API Gateway. In the background, I’m simulating activity with another function. This exercise uses the services in the following image.

To start, I make an API Gateway call to invoke the concurrencyblog function.

curl -i -XGET https://XXXXXXXX.execute-api.us-east-2.amazonaws.com/prod/concurrencyblog

I get the following output.

HTTP/2 502
content-type: application/json
content-length: 36
date: Wed, 01 Aug 2018 14:46:03 GMT
x-amzn-requestid: 9d5eca92-9599-11e8-bb13-dddafe0dbaa3
x-amz-apigw-id: K8ocOG_iiYcFa_Q=
x-cache: Error from cloudfront
via: 1.1 cb9e55028a8e7365209ebc8f2737b69b.cloudfront.net (CloudFront)
x-amz-cf-id: fk-gvFwSan8hzBtrC1hC_V5idaSDAKL9EwKDq205iN2RgQjnmIURYg==
 
{"message": "Internal server error"}

Hmmm, a 502 error. That shouldn’t happen. I don’t know the cause, but I configured logging for my API, so I can search for the requestid in CloudWatch logs. I navigate to the logs, select Search Log Group, and enter the x-amzn-requestid, enclosed in double quotes.

My API is invoking a Lambda function, and it’s getting an error from Lambda called ConcurrentInvocationLimitExceeded. This means my Lambda function was throttled. If I navigate to the function in the Lambda console, I get a similar message at the top.

If I scroll down, I observe that I don’t have throttling configured, so this must be coming from a different function or functions.

Using CloudWatch forensics

Lambda functions report lots of metrics in CloudWatch to tell you how they’re doing. Three of the metrics that I investigate here are Invocations, Duration, and ConcurrentExecutions. Invocations is incremented any time a Lambda function executes and is recorded for all functions and by individual functions. Duration is recorded to tell you how long Lambda functions take to execute. ConcurrentExecutions reports how many Lambda functions are executing at the same time and is emitted for the entire account and for functions that have a concurrency reservation set. Lambda emits CloudWatch metrics whenever there is Lambda activity in the account.

Lambda reports concurrency metrics for my account under AWS/Lambda/ConcurrentExecutions. To begin, I navigate to the Metrics pane of the CloudWatch console and choose Lambda on the All metrics tab.

Next, I choose Across All Functions.

Then I choose ConcurrentExecutions.

I choose the Graphed metrics tab and change Statistic from Average to Maximum, which shows me the peak concurrent executions in my account. For Period, I recommend reviewing 1-minute period data over the previous 2 weeks. After 2 weeks, the precision is aggregated over 5 minutes, which is a long time for Lambda!
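
The same query can be run programmatically. Here is a minimal sketch of the underlying API call using Boto3; the 24-hour window is arbitrary:

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

# Peak account-level concurrency per minute over the last 24 hours
end = datetime.datetime.utcnow()
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="ConcurrentExecutions",
    StartTime=end - datetime.timedelta(hours=24),
    EndTime=end,
    Period=60,
    Statistics=["Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Maximum"])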

In my test account, I find a concurrency spike at 14:46 UTC with 1000 concurrent executions.

Next, I want to find the culprit for this spike. I go back to the All metrics tab, but this time I choose By Function Name and enter Invocations in the search field. Then I select all of the functions listed.

The following image shows that BadLambdaConcurrency is the culprit.

It seems odd that there are only 331 invocations during that sample in the graph, so let’s dig in. Using the same method as before, I add the Duration metric for BadLambdaConcurrency. On average, this function is taking 30 seconds to complete, as shown in the following image.

Because there are 669 invocations in the previous minute and the function takes, on average, 30 seconds to complete, the next minute’s invocations (331) drive the concurrency up to 1000. Lambda functions can execute very quickly, so exact precision can be challenging, even over a 1-minute time period. However, this gives you a reasonable indication of the troublesome function in the account.

Automating this process

Investigating via the Lambda and CloudWatch console works fine if you have a few functions, but when you have tens or hundreds it can be pretty time consuming. Fortunately, CloudWatch metrics are also available via API. To speed up this process, I’ve written a script in Python that goes back over the last 7 days of metrics, finds the minute with the highest concurrency, and outputs the Invocations and Average Duration for all functions for the six minutes prior to that spike. You can download the script here. To execute it, make sure you have rights to CloudWatch metrics, or run it from an EC2 instance that has those rights. Then you can execute:

sudo yum install python3
pip install boto3 --user
curl https://raw.githubusercontent.com/aws-samples/aws-lambda-concurrency-hunt/master/lambda-con-hunt.py -o lambda-con-hunt.py
python3 lambda-con-hunt.py

or, to output it to a file:

python3 lambda-con-hunt.py > output.csv

You should get output similar to the following image.

You can import this data into a spreadsheet program and sort it, or you can confirm visually that BadLambdaConcurrency is driving the concurrency.

Getting to the root cause

Now I want to understand what is driving that spike in Invocations for BadLambdaConcurrency, so I go to the Lambda console. It shows that API Gateway is triggering this Lambda function.

I choose API Gateway and scroll down to discover which API is triggering. Choosing the name (ConcurrencyTest) takes me to that API.

It’s the same API that I set up for concurrencyblog, but a different method. Because I already set up logging for this API, I can search the log group to check for interesting behavior. Perusing the logs, I check the method request headers for any insights as to who is calling this API. In real life I wouldn’t leave an API open without authentication, so I’ll have to do some guessing.

(a915ba7f-9591-11e8-8f19-a7737a1fb2d7) Method request headers: {CloudFront-Viewer-Country=US, CloudFront-Forwarded-Proto=https, CloudFront-Is-Tablet-Viewer=false, CloudFront-Is-Mobile-Viewer=false, User-Agent=hey/0.0.1, X-Forwarded-Proto=https, CloudFront-Is-SmartTV-Viewer=false, Host=xxxxxxxxx.execute-api.us-east-2.amazonaws.com, Accept-Encoding=gzip, X-Forwarded-Port=443, X-Amzn-Trace-Id=Root=1-5b61ba53-1958c6e2022ef9df9aac7bdb, Via=1.1 a0286f15cb377e35ea96015406919392.cloudfront.net (CloudFront), X-Amz-Cf-Id=O0GQ_V_eWRe5KydZNc46-aPSz7dfI19bmyhWCsbTBMoety73q0AtZA==, X-Forwarded-For=f.f.f.f, a.a.a.a, CloudFront-Is-Desktop-Viewer=true, Content-Type=text/html}

The method request headers have a user agent called hey. Hey, that’s a load testing utility! I bet that someone is load-testing this API, but it shouldn’t be allowed to consume all of my resources.

Applying rate and concurrency limiting

To keep this from happening, I place a throttle on the API method. In API Gateway console, in the APIs navigation pane, I choose Stages, choose prod, choose the Settings tab, and select the Enable throttling check box. Then I set a rate of 20 requests per second. It doesn’t sound like much, but with an average function duration of 30 seconds, 20 requests per second can use 600 concurrent Lambda executions.
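
The same throttle can also be applied from code with a stage patch operation. A minimal Boto3 sketch; the REST API ID and the burst limit of 40 are placeholder values:

import boto3

apigateway = boto3.client("apigateway")

# Minimal sketch: throttle all methods on the prod stage to 20 requests/second.
# "a1b2c3d4e5" and the burst limit of 40 are placeholder values.
apigateway.update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "20"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "40"},
    ],
)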

I can also set a concurrency reservation on the function itself, as Chris pointed out in his blog.

If this is a bad function running amok or an emergency, I can throttle it directly, sometimes referred to as flipping a kill switch. I can do that quickly by choosing Throttle on the Lambda console.

I recommend throttling to zero only in emergency situations.

Investigating the duration

The other and larger problem is this function is taking 30 seconds to execute. That is a long time for an API, and the API Gateway integration timeout is 29 seconds. I wonder what is making it time out, so I check the traces in AWS X-Ray.

It initializes quickly enough, and I don’t find any downstream processes called. This function is a simple one, and the code is available from the Lambda console window. There I find my timeout culprit, a 30-second sleep call.

Not sure how that got through testing!

Setting up ongoing monitoring and alerting

To ensure that I’m not surprised again, let’s create a CloudWatch alert. In the CloudWatch console’s navigation pane, I choose Alarms and then choose Create Alarm.

When prompted, I choose Lambda and the ConcurrentExecutions metric across all functions, as shown in the following image.

Under Alarm Threshold, I give the alarm a name and description and set it to trigger whenever ConcurrentExecutions is greater than or equal to 800, as shown in the following image. I treat missing data as good because Lambda won’t publish a metric if there is no activity. I make sure that my period is 1 minute and use Maximum as the statistic. I want to be alerted only if this happens for any 2 minutes out of a 5-minute period. Finally, I can set up an Amazon SNS notification to alert me via email or text if this threshold is reached. This enables me to troubleshoot or request a limit increase for my account. Individual functions should be able to handle a throttling event through client-side retry and exponential backoff, but it’s still something that I want to know about.
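
If you would rather define the alarm from code, here is a minimal Boto3 sketch of the same configuration; the SNS topic ARN is a placeholder:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Minimal sketch: alarm when account-level concurrency reaches 800 or more
# for any 2 of 5 one-minute periods. The SNS topic ARN is a placeholder.
cloudwatch.put_metric_alarm(
    AlarmName="LambdaConcurrentExecutionsHigh",
    AlarmDescription="Account-level Lambda concurrency is approaching the limit",
    Namespace="AWS/Lambda",
    MetricName="ConcurrentExecutions",
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=5,
    DatapointsToAlarm=2,
    Threshold=800,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:my-alerts-topic"],
)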

Conclusion

In this blog, I walked through a method to investigate concurrency issues with Lambda, remediate those issues, and set up alerting. Managing concurrency is going to be new for a lot of people. As you deploy more applications, it’s especially important to segment them, monitor them, and understand how they are reporting their health. I hope you enjoyed this blog and start monitoring your functions today!

Developing .NET Core AWS Lambda functions

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/developing-net-core-aws-lambda-functions/

This post is courtesy of Mark Easton, Senior Solutions Architect – AWS

One of the biggest benefits of Lambda functions is that they isolate you from the underlying infrastructure. While that makes it easy to deploy and manage your code, it’s critical to have a clearly defined approach for testing, debugging, and diagnosing problems.

There’s a variety of best practices and AWS services to help you out. When developing Lambda functions in .NET, you can follow a four-pronged approach:

This post demonstrates the approach by creating a simple Lambda function that can be called from a gateway created by Amazon API Gateway and which returns the current UTC time. The post shows you how to design your code to allow for easy debugging, logging and tracing.

If you haven’t created Lambda functions with .NET Core before, then the following posts can help you get started:

Unit testing Lambda functions

One of the easiest ways to create a .NET Core Lambda function is to use the .NET Core CLI and create a solution using the Lambda Empty Serverless template.

If you haven’t already installed the Lambda templates, run the following command:

dotnet new -i Amazon.Lambda.Templates::*

You can now use the template to create a serverless project and unit test project, and then add them to a .NET Core solution by running the following commands:

dotnet new serverless.EmptyServerless -n DebuggingExample
cd DebuggingExample
dotnet new sln -n DebuggingExample
dotnet sln DebuggingExample.sln add */*/*.csproj

Although you haven’t added any code yet, you can validate that everything’s working by executing the unit tests. Run the following commands:

cd test/DebuggingExample.Tests/
dotnet test

One of the key principles to effective unit testing is ensuring that units of functionality can be tested in isolation. It’s good practice to de-couple the Lambda function’s actual business logic from the plumbing code that handles the actual Lambda requests.

Using your favorite editor, create a new file, ITimeProcessor.cs, in the src/DebuggingExample folder, and create the following basic interface:

using System;

namespace DebuggingExample
{
    public interface ITimeProcessor
    {
        DateTime CurrentTimeUTC();
    }
}

Then, create a new TimeProcessor.cs file in the src/DebuggingExample folder. The file contains a concrete class implementing the interface.

using System;

namespace DebuggingExample
{
    public class TimeProcessor : ITimeProcessor
    {
        public DateTime CurrentTimeUTC()
        {
            return DateTime.UtcNow;
        }
    }
} 

Now add a TimeProcessorTest.cs file to the test/DebuggingExample.Tests folder. The file should contain the following code:

using System;
using Xunit;

namespace DebuggingExample.Tests
{
    public class TimeProcessorTest
    {
        [Fact]
        public void TestCurrentTimeUTC()
        {
            // Arrange
            var processor = new TimeProcessor();
            var preTestTimeUtc = DateTime.UtcNow;

            // Act
            var result = processor.CurrentTimeUTC();

            // Assert time moves forwards 
            var postTestTimeUtc = DateTime.UtcNow;
            Assert.True(result >= preTestTimeUtc);
            Assert.True(result <= postTestTimeUtc);
        }
    }
}

You can then execute all the tests. From the test/DebuggingExample.Tests folder, run the following command:

dotnet test

Surfacing business logic in a Lambda function

Now that you have your business logic written and tested, you can surface it as a Lambda function. Edit the src/DebuggingExample/Function.cs file so that it calls the CurrentTimeUTC method:

using System;
using System.Collections.Generic;
using System.Net;
using Amazon.Lambda.Core;
using Amazon.Lambda.APIGatewayEvents;
using Newtonsoft.Json;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(
    typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace DebuggingExample
{
    public class Functions
    {
        ITimeProcessor processor = new TimeProcessor();

        public APIGatewayProxyResponse Get(
            APIGatewayProxyRequest request, ILambdaContext context)
        {
            var result = processor.CurrentTimeUTC();

            return CreateResponse(result);
        }

        APIGatewayProxyResponse CreateResponse(DateTime? result)
        {
            int statusCode = (result != null) ?
                (int)HttpStatusCode.OK :
                (int)HttpStatusCode.InternalServerError;

            string body = (result != null) ?
                JsonConvert.SerializeObject(result) : string.Empty;

            var response = new APIGatewayProxyResponse
            {
                StatusCode = statusCode,
                Body = body,
                Headers = new Dictionary<string, string>
                {
                    { "Content-Type", "application/json" },
                    { "Access-Control-Allow-Origin", "*" }
                }
            };

            return response;
        }
    }
}

First, the Functions class creates an instance of the TimeProcessor class, and the Get() method is defined to act as the entry point to the Lambda function.

By default, .NET Core Lambda function handlers expect their input in a Stream. This can be overridden by declaring a custom serializer, and then defining the handler’s method signature using custom request and response types.

Because the project was created using the serverless.EmptyServerless template, it already overrides the default behavior. It does this by including a using reference to Amazon.Lambda.APIGatewayEvents and then declaring a custom serializer. For more information about using custom serializers in .NET, see the AWS Lambda for .NET Core repository on GitHub.
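
For comparison, here’s a minimal sketch of a handler that uses the default stream-based behavior, with no serializer registered. The RawFunctions and Passthrough names are hypothetical and only for illustration:

using System.IO;
using Amazon.Lambda.Core;

namespace DebuggingExample
{
    public class RawFunctions
    {
        // With no serializer registered, Lambda hands the raw request
        // payload to the handler as a Stream.
        public Stream Passthrough(Stream input, ILambdaContext context)
        {
            // Echo the payload back unchanged.
            return input;
        }
    }
}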

Get() takes a couple of parameters:

  • The APIGatewayProxyRequest parameter contains the request from the API Gateway fronting the Lambda function.
  • The optional ILambdaContext parameter contains details of the execution context.

The Get() method calls CurrentTimeUTC() to retrieve the time from the business logic.

Finally, the result from CurrentTimeUTC() is passed to the CreateResponse() method, which converts the result into an APIGatewayProxyResponse object to be returned to the caller.

Because the updated Lambda function no longer passes the unit tests, update the TestGetMethod in the test/DebuggingExample.Tests/FunctionTest.cs file by removing the following line:

Assert.Equal("Hello AWS Serverless", response.Body);

This leaves your FunctionTest.cs file as follows:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Xunit;
using Amazon.Lambda.Core;
using Amazon.Lambda.TestUtilities;
using Amazon.Lambda.APIGatewayEvents;
using DebuggingExample;

namespace DebuggingExample.Tests
{
    public class FunctionTest
    {
        public FunctionTest()
        {
        }

        [Fact]
        public void TestGetMethod()
        {
            TestLambdaContext context;
            APIGatewayProxyRequest request;
            APIGatewayProxyResponse response;

            Functions functions = new Functions();

            request = new APIGatewayProxyRequest();
            context = new TestLambdaContext();
            response = functions.Get(request, context);
            Assert.Equal(200, response.StatusCode);
        }
    }
}

Again, you can check that everything is still working. From the test/DebuggingExample.Tests folder, run the following command:

dotnet test

Local integration testing with the AWS SAM CLI

Unit testing is a great start for testing thin slices of functionality. But to test that your API Gateway and Lambda function integrate with each other, you can test locally by using the AWS SAM CLI, installed as described in the AWS Lambda Developer Guide.

Unlike unit testing, which allows you to test functions in isolation outside of their runtime environment, the AWS SAM CLI executes your code in a locally hosted Docker container. It can also simulate a locally hosted API gateway proxy, allowing you to run component integration tests.

After you’ve installed the AWS SAM CLI, create a template that describes your Lambda function. Save the following contents in a file named template.yaml in the DebuggingExample directory:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Sample SAM Template for DebuggingExample

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
    Function:
        Timeout: 10

Resources:

    DebuggingExampleFunction:
        Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
        Properties:
            FunctionName: DebuggingExample
            CodeUri: src/DebuggingExample/bin/Release/netcoreapp2.1/publish
            Handler: DebuggingExample::DebuggingExample.Functions::Get
            Runtime: dotnetcore2.1
            Environment: # More info about Env Vars: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#environment-object
                Variables:
                    PARAM1: VALUE
            Events:
                DebuggingExample:
                    Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
                    Properties:
                        Path: /
                        Method: get

Outputs:

    DebuggingExampleApi:
      Description: "API Gateway endpoint URL for Prod stage for Debugging Example function"
      Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/DebuggingExample/"

    DebuggingExampleFunction:
      Description: "Debugging Example Lambda Function ARN"
      Value: !GetAtt DebuggingExampleFunction.Arn

    DebuggingExampleFunctionIamRole:
      Description: "Implicit IAM Role created for Debugging Example function"
      Value: !GetAtt DebuggingExampleFunctionRole.Arn
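
Before invoking anything, you can optionally sanity-check the template from the DebuggingExample directory with the AWS SAM CLI’s validate command:

sam validate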

Now that you have an AWS SAM CLI template, you can test your code locally. Because the Lambda function expects a request from API Gateway, create a sample API Gateway request. Run the following command:

sam local generate-event api > testApiRequest.json

You can now publish your DebuggingExample code locally and invoke it by passing in the sample request as follows:

dotnet publish -c Release
sam local invoke "DebuggingExampleFunction" --event testApiRequest.json

The first time that you run it, it might take some time to pull down the container image in which to host the Lambda function. After you’ve invoked it one time, the container image is cached locally, and execution speeds up.

Finally, rather than testing your function by sending it a sample request, test it with a real API gateway request by running API Gateway locally:

sam local start-api

If you now navigate to http://127.0.0.1:3000/ in your browser, you can get the API gateway to send a request to your locally hosted Lambda function. See the results in your browser.
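
If you prefer the command line, you can exercise the same endpoint with curl while sam local start-api is running:

curl http://127.0.0.1:3000/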

Logging events with CloudWatch

Having a test strategy allows you to execute, test, and debug Lambda functions. After you’ve deployed your functions to AWS, you must still log what the functions are doing so that you can monitor their behavior.

The easiest way to add logging to your Lambda functions is to add code that writes events to CloudWatch. To do this, add a new method, LogMessage(), to the src/DebuggingExample/Function.cs file.

void LogMessage(ILambdaContext ctx, string msg)
{
    ctx.Logger.LogLine(
        string.Format("{0}:{1} - {2}", 
            ctx.AwsRequestId, 
            ctx.FunctionName,
            msg));
}

This takes in the context object from the Lambda function’s Get() method, and sends a message to CloudWatch by calling the context object’s Logger.LogLine() method.
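
With this format, a logged event in CloudWatch looks something like the following (the request ID shown here is illustrative):

247e0296-62b2-11e8-bf34-7feb0a91d757:DebuggingExample - Processing request started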

You can now add calls to LogMessage in the Get() method to log events in CloudWatch. It’s also a good idea to add a try/catch block to ensure that exceptions are logged as well.

        public APIGatewayProxyResponse Get(APIGatewayProxyRequest request, ILambdaContext context)
        {
            LogMessage(context, "Processing request started");

            APIGatewayProxyResponse response;
            try
            {
                var result = processor.CurrentTimeUTC();
                response = CreateResponse(result);

                LogMessage(context, "Processing request succeeded.");
            }
            catch (Exception ex)
            {
                LogMessage(context, string.Format("Processing request failed - {0}", ex.Message));
                response = CreateResponse(null);
            }

            return response;
        }

To validate that the changes haven’t broken anything, you can now execute the unit tests again. Run the following commands:

cd test/DebuggingExample.Tests/
dotnet test

Tracing execution with X-Ray

Your code now logs events in CloudWatch, which provides a solid mechanism to help monitor and diagnose problems.

However, it can also be useful to trace your Lambda function’s execution to help diagnose performance or connectivity issues, especially if it’s called by or calling other services. X-Ray provides a variety of features to help analyze and trace code execution.

To enable active tracing on your function, you need to modify the SAM template that you created earlier by adding a new attribute to the function resource definition. With SAM, this is as easy as adding the Tracing attribute and setting it to Active below the Timeout attribute in the Globals section of the template.yaml file:

Globals:
    Function:
        Timeout: 10
        Tracing: Active

To call X-Ray from within your .NET Core code, you must add the AWSXRayRecorder package to your solution by running the following command in the src/DebuggingExample folder:

dotnet add package AWSXRayRecorder --version 2.2.1-beta

Then, add the following using statement at the top of the src/DebuggingExample/Function.cs file:

using Amazon.XRay.Recorder.Core;

Add a new method to the Functions class that takes a function and a subsegment name, and records an X-Ray subsegment to trace the execution of the function.

        private T TraceFunction<T>(Func<T> func, string subSegmentName)
        {
            AWSXRayRecorder.Instance.BeginSubsegment(subSegmentName);
            try
            {
                return func();
            }
            finally
            {
                // Close the subsegment even if func() throws an exception.
                AWSXRayRecorder.Instance.EndSubsegment();
            }
        }

You can now update the Get() method by replacing the following line:

var result = processor.CurrentTimeUTC();

Replace it with this line:

var result = TraceFunction(processor.CurrentTimeUTC, "GetTime");

The final version of Function.cs, in all its glory, is now:

using System;
using System.Collections.Generic;
using System.Net;
using Amazon.Lambda.Core;
using Amazon.Lambda.APIGatewayEvents;
using Newtonsoft.Json;
using Amazon.XRay.Recorder.Core;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(
typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace DebuggingExample
{
    public class Functions
    {
        ITimeProcessor processor = new TimeProcessor();

        public APIGatewayProxyResponse Get(APIGatewayProxyRequest request, ILambdaContext context)
        {
            LogMessage(context, "Processing request started");

            APIGatewayProxyResponse response;
            try
            {
                var result = TraceFunction(processor.CurrentTimeUTC, "GetTime");
                response = CreateResponse(result);

                LogMessage(context, "Processing request succeeded.");
            }
            catch (Exception ex)
            {
                LogMessage(context, string.Format("Processing request failed - {0}", ex.Message));
                response = CreateResponse(null);
            }

            return response;
        }

        APIGatewayProxyResponse CreateResponse(DateTime? result)
        {
            int statusCode = (result != null) ?
                (int)HttpStatusCode.OK :
                (int)HttpStatusCode.InternalServerError;

            string body = (result != null) ?
                JsonConvert.SerializeObject(result) : string.Empty;

            var response = new APIGatewayProxyResponse
            {
                StatusCode = statusCode,
                Body = body,
                Headers = new Dictionary<string, string>
                {
                    { "Content-Type", "application/json" },
                    { "Access-Control-Allow-Origin", "*" }
                }
            };

            return response;
        }

        private void LogMessage(ILambdaContext context, string message)
        {
            context.Logger.LogLine(string.Format("{0}:{1} - {2}", context.AwsRequestId, context.FunctionName, message));
        }

        private T TraceFunction<T>(Func<T> func, string actionName)
        {
            AWSXRayRecorder.Instance.BeginSubsegment(actionName);
            try
            {
                return func();
            }
            finally
            {
                // Close the subsegment even if func() throws an exception.
                AWSXRayRecorder.Instance.EndSubsegment();
            }
        }
    }
}

Since AWS X-Ray requires a daemon to collect trace information, if you want to test the code locally, you should now install the AWS X-Ray daemon. Once it’s installed, confirm that the changes haven’t broken anything by running the unit tests again:

cd test/DebuggingExample.Tests/
dotnet test

For more information about using X-Ray from .NET Core, see the AWS X-Ray Developer Guide. For information about adding support for X-Ray in Visual Studio, see the New AWS X-Ray .NET Core Support post.

Deploying and testing the Lambda function remotely

Having created your Lambda function and tested it locally, you’re now ready to package and deploy your code.

First of all, you need an Amazon S3 bucket to deploy the code into. If you don’t already have one, create a suitable S3 bucket.
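
A minimal example with the AWS CLI follows; the bucket name here is only an example and must be globally unique:

aws s3 mb s3://debugging-example-deploy --region eu-west-1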

You can now package the .NET Lambda function and copy it to Amazon S3:

sam package \
  --template-file template.yaml \
  --output-template-file debugging-example.yaml \
  --s3-bucket debugging-example-deploy

Finally, deploy the Lambda function by running the following command:

sam deploy \
   --template-file debugging-example.yaml \
   --stack-name DebuggingExample \
   --capabilities CAPABILITY_IAM \
   --region eu-west-1

After your code has deployed successfully, test it from your local machine by running the following command:

dotnet lambda invoke-function DebuggingExample --region eu-west-1

Diagnosing the Lambda function

Having run the Lambda function, you can now monitor its behavior by logging in to the AWS Management Console and navigating to the CloudWatch Logs console.

You can now click on the /aws/lambda/DebuggingExample log group to view all the recorded log streams for your Lambda function.

If you open one of the log streams, you see the various messages recorded for the Lambda function, including the two events explicitly logged from within the Get() method.

To review the logs locally, you can also use the AWS SAM CLI to retrieve CloudWatch logs and display them in your terminal:

sam logs -n DebuggingExample --region eu-west-1
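
To stream new log entries as they arrive instead, you can also pass the --tail flag:

sam logs -n DebuggingExample --region eu-west-1 --tail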

As a final alternative, you can also execute the Lambda function by choosing Test on the Lambda console. The execution results are displayed in the Log output section.

In the X-Ray console, the Service Map page shows a map of the Lambda function’s connections.

Your Lambda function is essentially standalone. However, the Service Map page can be critical in helping to understand performance issues when a Lambda function is connected with a number of other services.

If you open the Traces screen, you see a trace list showing all the trace results that X-Ray has recorded. Open one of the traces to see a breakdown of the Lambda function’s performance.

Conclusion

In this post, I showed you how to develop Lambda functions in .NET Core, how to unit test them, how to use the AWS SAM CLI for local integration testing, how to log and monitor events with CloudWatch, and finally how to trace Lambda function execution with X-Ray.

Put together, these techniques provide a solid foundation to help you debug and diagnose your Lambda functions effectively. Explore each of the services further, because when it comes to production workloads, great diagnosis is key to providing a great and uninterrupted customer experience.

ICYMI: Serverless Q2 2018

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/icymi-serverless-q2-2018/

The better-late-than-never edition!

Welcome to the second edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

The second quarter of 2018 flew by so fast that we didn’t get a chance to get this post out on time! We’re playing catch-up and making sure that the Q3 post launches a bit sooner.

Missed our Q1 ICYMI? Catch up on everything you missed.

So, what might you have missed this past quarter? Here’s the recap….

AWS AppSync

In April, AWS AppSync became generally available (GA)!

AWS AppSync provides capabilities to build real-time, collaborative mobile and web applications. It uses GraphQL, an open standard query language that makes it easy to request data from the cloud. When AWS AppSync went GA, several features also launched. These included better in-console testing with mock data, Amazon CloudWatch support, AWS CloudFormation support, and console log access.

AWS Amplify then also launched support for AWS AppSync to make it even easier for developers to build JavaScript-based applications that can integrate with several AWS services via its simplified GraphQL interface. See the documentation for more details.

AppSync expanded to more Regions and added OIDC support in May.

New features

AWS Lambda made Node.js v8.10 available. 8.10 brings some significant improvements in supporting async/await calls that simplify the traditional callback style common in Node.js applications. Developers can also see performance improvements and lower memory consumption.

In June, the long-awaited support for Amazon SQS as a trigger for Lambda launched! With this launch, customers can easily create Lambda functions that directly consume from SQS queues without needing to manage the polling of the queue themselves. Today, SQS is one of the most popular AWS services. It’s used by hundreds of thousands of customers at massive scale as one of the fundamental building blocks of many applications.
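
As a rough sketch in .NET (assuming the Amazon.Lambda.SQSEvents package and a registered Lambda serializer, as in the earlier examples; the SqsExample namespace is hypothetical), an SQS-triggered handler needs no polling code at all:

using Amazon.Lambda.Core;
using Amazon.Lambda.SQSEvents;

namespace SqsExample
{
    public class Functions
    {
        // Lambda invokes this handler with a batch of SQS messages.
        public void Handler(SQSEvent sqsEvent, ILambdaContext context)
        {
            foreach (var record in sqsEvent.Records)
            {
                context.Logger.LogLine(
                    $"Processing message {record.MessageId}: {record.Body}");
            }
        }
    }
}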

AWS Lambda gained support for AWS Config. With AWS Config, you can track changes to the Lambda function, runtime environments, tags, handler name, code size, memory allocation, timeout settings, and concurrency settings. You can also record changes to Lambda IAM execution roles, subnets, and security group associations. Even more fun, you can use AWS Lambda functions in AWS Config Rules to check if your Lambda functions conform to certain standards as decided by you. Inception!

Amazon API Gateway announced the availability of private API endpoints! With private API endpoints, you can now create APIs that are completely inside your own virtual private clouds (VPCs). You can use awesome API Gateway features such as Lambda custom authorizers and Amazon Cognito integration. Back your APIs with Lambda, with containers running in Amazon ECS (including ECS with AWS Fargate) and Amazon EKS, or with Amazon EC2.

Amazon API Gateway also launched two really useful features: support for Resource Policies for APIs, and Cross-Account AWS Lambda Authorizers and Integrations. Both features offer capabilities to help developers secure their APIs, whether they are public or private.

AWS SAM went open source and the AWS SAM Local tool has now been relaunched as AWS SAM CLI! As part of the relaunch, AWS SAM CLI has gained numerous capabilities, such as helping you start a brand new serverless project and better template validation. With version 0.4.0, released in June, we added Python 3.6 support. You can now perform new project creation, local development and testing, and then packaging and deployment of serverless applications for all actively supported Lambda languages.
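
For example, starting a new Python project is a single command (the runtime flag accepts the other supported Lambda languages too):

sam init --runtime python3.6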

AWS Step Functions expanded into more Regions, increased default limits, became HIPAA eligible, and is also now available in AWS GovCloud (US).

AWS Lambda@Edge added support for Node.js v8.10.

Serverless posts

April:

May:

June:

Webinars

Here are the three webinars we delivered in Q2. We hold several Serverless webinars throughout the year, so look out for them in the Serverless section of the AWS Online Tech Talks page:

Twitch

We’ve been so busy livestreaming on Twitch that you are most certainly missing out if you aren’t following along!

Here are links to all of the Serverless Twitch sessions that we’ve done.

Keep an eye on AWS on Twitch for more Serverless videos and on the Join us on Twitch AWS page for information about upcoming broadcasts and recent live streams.

Worthwhile reading

Serverless: Changing the Face of Business Economics, A Venture Capital and Startup Perspective
In partnership with three prominent venture capitalists—Greylock Partners, Madrona Venture Group, and Accel—AWS released a whitepaper on the business benefits to serverless. Check it out to hear about opportunities for companies in the space and how several have seen significant benefits from a serverless approach.

Serverless Streaming Architectures and Best Practices
Streaming workloads are some of the biggest workloads for AWS Lambda. Customers of all shapes and sizes are using streaming workloads for near real-time processing of data from services such as Amazon Kinesis Streams. In this whitepaper, we explore three stream-processing patterns using a serverless approach. For each pattern, we describe how it applies to a real-world use case, the best practices and considerations for implementation, and cost estimates. Each pattern also includes a template that enables you to quickly and easily deploy these patterns in your AWS accounts.

In other news

AWS re:Invent 2018 is coming! From November 26–30 in Las Vegas, Nevada, join tens of thousands of AWS customers to learn, share ideas, and see exciting keynote announcements. The agenda for Serverless talks is just starting to appear, and there are always lots of opportunities to hear about serverless applications and technologies from fellow AWS customers, AWS product teams, solutions architects, evangelists, and more.

Register for AWS re:Invent now!

What did we do at AWS re:Invent 2017? Check out our recap here: Serverless @ re:Invent 2017

Attend a Serverless event!

“ServerlessDays are a family of events around the world focused on fostering a community around serverless technologies.” —https://serverlessdays.io/

The events are run by local volunteers as vendor-agnostic events with a focus on community, accessibility, and local representation. Dozens of cities around the world have folks interested in these events, with more popping up regularly.

Find a ServerlessDays event happening near you. Come ready to learn and connect with other developers, architects, hobbyists, and practitioners. AWS has members from our team at every event to connect with and share ideas and content. Maybe, just maybe, we’ll even hand out cool swag!

AWS Serverless Apps for Social Good Hackathon

Our AWS Serverless Apps for Social Good hackathon invites you to publish serverless applications for popular use cases. Your app can use Alexa skills, machine learning, media processing, monitoring, data transformation, notification services, location services, IoT, and more.

We’re looking for apps that can be used as standalone assets or as inputs that can be combined with other applications to add to the open-source serverless ecosystem. This supports the work being done by developers and nonprofit organizations around the world.

Winners will be awarded cash prizes and the opportunity to direct donations to the nonprofit partner of their choice.

Still looking for more?

The AWS Serverless landing page has lots of information. The resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials. Check it out!

Protecting your API using Amazon API Gateway and AWS WAF — Part 2

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/protecting-your-api-using-amazon-api-gateway-and-aws-waf-part-2/

This post courtesy of Heitor Lessa, AWS Specialist Solutions Architect – Serverless

In Part 1 of this blog, we described how to protect your API provided by Amazon API Gateway using AWS WAF. In this post, we show how to use an API key between an Amazon CloudFront distribution and API Gateway to secure access to your API, in addition to the authorization (AuthZ) mechanism that you already have set up in API Gateway. For more information about AuthZ mechanisms in API Gateway, see Secure API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway.

We also extend the AWS CloudFormation stack from Part 1 to automate the creation of the resources that this solution requires.

The following are alternative solutions to using an API key, depending on your security requirements:

  • Using a randomly generated HTTP secret header in CloudFront and verifying it with API Gateway request validation
  • Signing incoming requests with Lambda@Edge and verifying them with API Gateway Lambda authorizers

Requirements

To follow along, you need full permissions to create, update, and delete API Gateway, CloudFront, Lambda, and CloudWatch Events through AWS CloudFormation.

Extending the existing AWS CloudFormation stack

First, click here to download the full template. Then follow these steps to update the existing AWS CloudFormation stack:

  1. Go to the AWS Management Console and open the AWS CloudFormation console.
  2. Select the stack that you created in Part 1, right-click it, and select Update Stack.
  3. For option 2, choose Choose file and select the template that you downloaded.
  4. Fill in the required parameters as shown in the following image.

Here’s more information about these parameters:

  • API Gateway to send traffic to – We use the same API Gateway URL as in Part 1 except without the URL scheme (https://): cxm45444t9a.execute-api.us-east-2.amazonaws.com/prod
  • Rotating API Keys – We define Daily and use 2018-04-03 as the timestamp value to append to the API key name

Continue with the AWS CloudFormation console to complete the operation. It might take a couple of minutes to update the stack, as CloudFront takes time to propagate changes across all points of presence.

Enabling API Keys in the example Pet Store API

While the stack completes in the background, let’s enable the use of API Keys in the API that CloudFront will send traffic to.

  1. Go to the AWS Management Console and open the API Gateway console.
  2. Select the API that you created in Part 1 and choose Resources.
  3. Under /pets, choose GET and then choose Method Request.
  4. For API Key Required, choose the dropdown menu and choose true.
  5. To save this change, select the highlighted check mark as shown in the following image.

Next, we need to deploy these changes so that requests sent to /pets fail if an API key isn’t present.

  1. Choose Actions and select Deploy API.
  2. Choose the Deployment stage dropdown menu and select the stage you created in Part 1.
  3. Add a deployment description such as “Requires API Keys under /pets” and choose Deploy.

When the deployment succeeds, you’re redirected to the API Gateway Stage page. There you can use the Invoke URL to test whether the following request fails due to not having an API key.
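
For example, a direct request to the Invoke URL from Part 1 should now be rejected; with curl, the response looks something like this:

curl https://cxm45444t9a.execute-api.us-east-2.amazonaws.com/prod/pets
{"message":"Forbidden"}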

This failure is expected and proves that our deployed changes are working. Next, let’s try to access the same API but this time through our CloudFront distribution.

  1. From the AWS Management Console, open the AWS CloudFormation console.
  2. Select the stack that you created in Part 1 and choose Outputs at the bottom left.
  3. On the CFDistribution line, copy the URL. Before you paste it into a new browser tab or window, append ‘/pets’ to it.

As opposed to our first attempt without an API key, we receive a JSON response from the PetStore API. This is because CloudFront is injecting an API key before it forwards the request to the PetStore API. The following image demonstrates both of these tests:

  1. Successful request when accessing the API through CloudFront
  2. Unsuccessful request when accessing the API directly through its Invoke URL

This works as a secret between CloudFront and API Gateway, which could be any agreed random secret that can be rotated like an API key. However, it’s important to know that API keys are a feature for tracking or metering API consumers’ usage. They are not a secure authorization mechanism and therefore should be used only in conjunction with an API Gateway authorizer.

Rotating API keys

API keys are automatically rotated based on the schedule (e.g., daily or monthly) that you chose when updating the AWS CloudFormation stack. This requires no maintenance or intervention on your part. In this section, we explain how this process works under the hood and what you can do if you want to manually trigger an API key rotation.

The AWS CloudFormation template that we downloaded and used to update our stack does the following in addition to Part 1.

Introduce a Timestamp parameter that is appended to the API key name

Parameters:
  Timestamp:
    Type: String
    Description: Fill in this format <Year>-<Month>-<Day>
    Default: 2018-04-02

Create an API Gateway key, API Gateway usage plan, associate the new key with the API gateway given as a parameter, and configure the CloudFront distribution to send a custom header when forwarding traffic to API Gateway

CFDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Logging:
        IncludeCookies: 'false'
        Bucket: !Sub ${S3BucketAccessLogs}.s3.amazonaws.com
        Prefix: cloudfront-logs
      Enabled: 'true'
      Comment: API Gateway Regional Endpoint Blog post
      Origins:
        -
          Id: APIGWRegional
          DomainName: !Select [0, !Split ['/', !Ref ApiURL]]
          CustomOriginConfig:
            HTTPPort: 443
            OriginProtocolPolicy: https-only
          OriginCustomHeaders:
            - 
              HeaderName: x-api-key
              HeaderValue: !Ref ApiKey
              ...

ApiUsagePlan:
  Type: AWS::ApiGateway::UsagePlan
  Properties:
    Description: CloudFront usage only
    UsagePlanName: CloudFront_only
    ApiStages:
      - 
        ApiId: !Select [0, !Split ['.', !Ref ApiURL]]
        Stage: !Select [1, !Split ['/', !Ref ApiURL]]

ApiKey: 
  Type: "AWS::ApiGateway::ApiKey"
  Properties: 
    Name: !Sub "CloudFront-${Timestamp}"
    Description: !Sub "CloudFormation API Key ${Timestamp}"
    Enabled: true

ApiKeyUsagePlan:
  Type: "AWS::ApiGateway::UsagePlanKey"
  Properties:
    KeyId: !Ref ApiKey
    KeyType: API_KEY
    UsagePlanId: !Ref ApiUsagePlan

As shown in the ApiKey resource, we append the given Timestamp to Name as well as use it in the API Gateway usage plan key resource. This means that whenever the Timestamp parameter changes, AWS CloudFormation triggers a resource replacement and updates every resource that depends on that API key. In this case, that includes the AWS CloudFront configuration and API Gateway usage plan.

But what does the rotation schedule that you chose at the beginning of this blog mean in this example?

Create a scheduled activity to trigger a Lambda function on a given schedule

Parameters:
...
  ApiKeyRotationSchedule: 
    Description: Schedule to rotate API Keys e.g. Daily, Monthly, Bimonthly basis
    Type: String
    Default: Daily
    AllowedValues:
      - Daily
      - Fortnightly
      - Monthly
      - Bimonthly
      - Quarterly
    ConstraintDescription: Must be any of the available options

Mappings: 

  ScheduleMap: 
    CloudwatchEvents: 
      Daily: "rate(1 day)"
      Fortnightly: "rate(14 days)"
      Monthly: "rate(30 days)"
      Bimonthly: "rate(60 days)"
      Quarterly: "rate(90 days)"

Resources:
...
  RotateApiKeysScheduledJob: 
    Type: "AWS::Events::Rule"
    Properties: 
      Description: "ScheduledRule"
      ScheduleExpression: !FindInMap [ScheduleMap, CloudwatchEvents, !Ref ApiKeyRotationSchedule]
      State: "ENABLED"
      Targets: 
        - 
          Arn: !GetAtt RotateApiKeysFunction.Arn
          Id: "RotateApiKeys"

The resource RotateApiKeysScheduledJob shows that the schedule that you selected through a dropdown menu when updating the AWS CloudFormation stack is actually converted to a CloudWatch Events rule. This in turn triggers a Lambda function that is defined in the same template.

RotateApiKeysFunction:
      Type: "AWS::Lambda::Function"
      Properties:
        Handler: "index.lambda_handler"
        Role: !GetAtt RotateApiKeysFunctionRole.Arn
        Runtime: python3.6
        Environment:
          Variables:
            StackName: !Ref "AWS::StackName"
        Code:
          ZipFile: !Sub |
            import datetime
            import os

            import boto3
            from botocore.exceptions import ClientError

            session = boto3.Session()
            cfn = session.client('cloudformation')
            
            timestamp = datetime.date.today()            
            params = {
                'StackName': os.getenv('StackName'),
                'UsePreviousTemplate': True,
                'Capabilities': ["CAPABILITY_IAM"],
                'Parameters': [
                    {
                      'ParameterKey': 'ApiURL',
                      'UsePreviousValue': True
                    },
                    {
                      'ParameterKey': 'ApiKeyRotationSchedule',
                      'UsePreviousValue': True
                    },
                    {
                      'ParameterKey': 'Timestamp',
                      'ParameterValue': str(timestamp)
                    },
                ],                
            }

            def lambda_handler(event, context):
              """ Updates CloudFormation Stack with a new timestamp and returns CloudFormation response"""
              try:
                  response = cfn.update_stack(**params)
              except ClientError as err:
                  if "No updates are to be performed" in err.response['Error']['Message']:
                      return {"message": err.response['Error']['Message']}
                  else:
                      raise Exception("An error happened while updating the stack: {}".format(err))          
  
              return response

All this Lambda function does is trigger an AWS CloudFormation stack update via API (exactly what you did through the console but programmatically) and updates the Timestamp parameter. As a result, it rotates the API key and the CloudFront distribution configuration.

This gives you enough flexibility to change the API key rotation schedule at any time without maintaining or writing any code. You can also manually update the stack and rotate the keys by updating the AWS CloudFormation stack’s Timestamp parameter.
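
For example, a manual rotation could be triggered from the AWS CLI as follows (the stack name is a placeholder for your own):

aws cloudformation update-stack \
    --stack-name <your-stack-name> \
    --use-previous-template \
    --capabilities CAPABILITY_IAM \
    --parameters ParameterKey=ApiURL,UsePreviousValue=true \
                 ParameterKey=ApiKeyRotationSchedule,UsePreviousValue=true \
                 ParameterKey=Timestamp,ParameterValue=2018-04-04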

Next Steps

We hope you found the information in this blog helpful. You can use it to understand how to create a mechanism to allow traffic only from CloudFront to API Gateway and avoid bypassing the AWS WAF rules that Part 1 set up.

Keep the following important notes in mind about this solution:

  • It assumes that you already have a strong AuthZ mechanism, managed by API Gateway, to control access to your API.
  • The API Gateway usage plan and other resources created in this solution work only for APIs created in the same account (the ApiUrl parameter).
  • If you already use API keys for tracking API usage, consider using either of the following solutions as a replacement:
    • Use a random HTTP header value in CloudFront origin configuration and use an API Gateway request model validation to verify it instead of API keys alone.
    • Combine Lambda@Edge and an API Gateway custom authorizer to sign and verify incoming requests using a shared secret known only to the two. This is a more advanced technique.