Tag Archives: apigateway

Protecting your API using Amazon API Gateway and AWS WAF — Part I

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/protecting-your-api-using-amazon-api-gateway-and-aws-waf-part-i/

This post courtesy of Thiago Morais, AWS Solutions Architect

When you build web applications or expose any data externally, you probably look for a platform where you can build highly scalable, secure, and robust REST APIs. As APIs are publicly exposed, there are a number of best practices for providing a secure mechanism to consumers using your API.

Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

In this post, I show you how to take advantage of the regional API endpoint feature in API Gateway, so that you can create your own Amazon CloudFront distribution and secure your API using AWS WAF.

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.

As you make your APIs publicly available, you are exposed to attackers trying to exploit your services in several ways. The AWS security team published a whitepaper that describes how to use AWS WAF to mitigate such attacks: How to Mitigate OWASP’s Top 10 Web Application Vulnerabilities.

Regional API endpoints

Edge-optimized APIs are endpoints that are accessed through a CloudFront distribution created and managed by API Gateway. Before the launch of regional API endpoints, this was the default option when creating APIs using API Gateway. It primarily helped to reduce latency for API consumers that were located in different geographical locations than your API.

When API requests predominantly originate from an Amazon EC2 instance or other services within the same AWS Region as the API is deployed, a regional API endpoint typically lowers the latency of connections. It is recommended for such scenarios.

For better control around caching strategies, customers can use their own CloudFront distribution for regional APIs. They also have the ability to use AWS WAF protection, as I describe in this post.

Edge-optimized API endpoint

The following diagram is an illustrated example of the edge-optimized API endpoint where your API clients access your API through a CloudFront distribution created and managed by API Gateway.

Regional API endpoint

For the regional API endpoint, your customers access your API from the same Region in which your REST API is deployed. This helps you to reduce request latency and particularly allows you to add your own content delivery network, as needed.

Walkthrough

In this section, you implement the following steps:

  • Create a regional API using the PetStore sample API.
  • Create a CloudFront distribution for the API.
  • Test the CloudFront distribution.
  • Set up AWS WAF and create a web ACL.
  • Attach the web ACL to the CloudFront distribution.
  • Test AWS WAF protection.

Create the regional API

For this walkthrough, use an existing PetStore API. All new APIs launch by default as the regional endpoint type. To change the endpoint type for your existing API, choose the cog icon in the top-right corner:

After you have created the PetStore API on your account, deploy a stage called “prod” for the PetStore API.

On the API Gateway console, select the PetStore API and choose Actions, Deploy API.

For Stage name, type prod and add a stage description.

Choose Deploy and the new API stage is created.

Use the following AWS CLI command to update your API from edge-optimized to regional:

aws apigateway update-rest-api \
--rest-api-id {rest-api-id} \
--patch-operations op=replace,path=/endpointConfiguration/types/EDGE,value=REGIONAL

A successful response looks like the following:

{
    "description": "Your first API with Amazon API Gateway. This is a sample API that integrates via HTTP with your demo Pet Store endpoints", 
    "createdDate": 1511525626, 
    "endpointConfiguration": {
        "types": [
            "REGIONAL"
        ]
    }, 
    "id": "{api-id}", 
    "name": "PetStore"
}

After you change your API endpoint to regional, you can now assign your own CloudFront distribution to this API.
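If you prefer to script this step in Python rather than the AWS CLI, a minimal boto3 sketch of the same patch operation follows; the rest_api_id value is a placeholder for your own API ID.

import boto3

apigw = boto3.client("apigateway")

# Placeholder: replace with your own REST API ID.
rest_api_id = "{rest-api-id}"

# Apply the same patch operation as the CLI command above.
response = apigw.update_rest_api(
    restApiId=rest_api_id,
    patchOperations=[
        {
            "op": "replace",
            "path": "/endpointConfiguration/types/EDGE",
            "value": "REGIONAL",
        }
    ],
)

print(response["endpointConfiguration"]["types"])  # ['REGIONAL']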

Create a CloudFront distribution

To make things easier, I have provided an AWS CloudFormation template to deploy a CloudFront distribution pointing to the API that you just created. Click the button to deploy the template in the us-east-1 Region.

For Stack name, enter RegionalAPI. For APIGWEndpoint, enter your API FQDN in the following format:

{api-id}.execute-api.us-east-1.amazonaws.com

After you fill out the parameters, choose Next to continue the stack deployment. It takes a couple of minutes to finish the deployment. After it finishes, the Outputs tab lists the following items:

  • A CloudFront domain URL
  • An S3 bucket for CloudFront access logs
Output from CloudFormation

Test the CloudFront distribution

To see if the CloudFront distribution was configured correctly, use a web browser and enter the URL from your distribution, with the following parameters:

https://{your-distribution-url}.cloudfront.net/{api-stage}/pets

You should get the following output:

[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
]

Set up AWS WAF and create a web ACL

With the new CloudFront distribution in place, you can now start setting up AWS WAF to protect your API.

For this demo, you deploy the AWS WAF Security Automations solution, which provides fine-grained control over the requests attempting to access your API.

For more information about deployment, see Automated Deployment. If you prefer, you can launch the solution directly into your account using the following button.

For CloudFront Access Log Bucket Name, add the name of the bucket created during the deployment of the CloudFormation stack for your CloudFront distribution.

The solution allows you to adjust thresholds and also choose which automations to enable to protect your API. After you finish configuring these settings, choose Next.

To start the deployment process in your account, follow the creation wizard and choose Create. It takes a few minutes to finish the deployment. You can follow the creation process through the CloudFormation console.

After the deployment finishes, you can see the new web ACL deployed on the AWS WAF console, AWSWAFSecurityAutomations.

Attach the AWS WAF web ACL to the CloudFront distribution

With the solution deployed, you can now attach the AWS WAF web ACL to the CloudFront distribution that you created earlier.

To assign the newly created AWS WAF web ACL, go back to your CloudFront distribution. After you open your distribution for editing, choose General, Edit.

Select the new AWS WAF web ACL that you created earlier, AWSWAFSecurityAutomations.

Save the changes to your CloudFront distribution and wait for the deployment to finish.

Test AWS WAF protection

To validate the AWS WAF Web ACL setup, use Artillery to load test your API and see AWS WAF in action.

To install Artillery on your machine, run the following command:

$ npm install -g artillery

After the installation completes, you can check if Artillery installed successfully by running the following command:

$ artillery -V
1.6.0-12

At the time of publication, Artillery is at version 1.6.0-12.

One of the WAF web ACL rules that you have set up is a rate-based rule. By default, it is set up to block any requester that exceeds 2000 requests in a 5-minute period. Try this out.
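For reference, the Security Automations solution created this rate-based rule for you. If you wanted to define such a rule yourself, a boto3 sketch using the classic AWS WAF API (the flavor used with CloudFront) might look like this; the rule and metric names are illustrative.

import boto3

waf = boto3.client("waf")  # classic AWS WAF, used with CloudFront

# Every classic WAF mutation requires a fresh change token.
token = waf.get_change_token()["ChangeToken"]

# Block any IP address that sends more than 2000 requests
# in a rolling five-minute window.
waf.create_rate_based_rule(
    Name="HttpFloodRule",        # illustrative name
    MetricName="HttpFloodRule",
    RateKey="IP",                # IP is the only supported rate key
    RateLimit=2000,
    ChangeToken=token,
)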

First, use cURL to query your distribution and see the API output:

$ curl -s https://{distribution-name}.cloudfront.net/prod/pets
[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
]

Based on the test above, the result looks good. But what if you max out the 2000 requests in under 5 minutes?

Run the following Artillery command:

artillery quick -n 2000 --count 10  https://{distribution-name}.cloudfront.net/prod/pets

What you are doing is firing 2000 requests to your API from 10 concurrent users. For brevity, I am not posting the Artillery output here.

After Artillery finishes its execution, try to run the cURL request again and see what happens:

$ curl -s https://{distribution-name}.cloudfront.net/prod/pets

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<TITLE>ERROR: The request could not be satisfied</TITLE>
</HEAD><BODY>
<H1>ERROR</H1>
<H2>The request could not be satisfied.</H2>
<HR noshade size="1px">
Request blocked.
<BR clear="all">
<HR noshade size="1px">
<PRE>
Generated by cloudfront (CloudFront)
Request ID: [removed]
</PRE>
<ADDRESS>
</ADDRESS>
</BODY></HTML>

As you can see from the output above, the request was blocked by AWS WAF. Your IP address is removed from the blocked list after its request rate falls below the limit.

Conclusion

In this first part, you saw how to use the new API Gateway regional API endpoint together with Amazon CloudFront and AWS WAF to secure your API from a series of attacks.

In the second part, I will demonstrate some other techniques to protect your API using API keys and Amazon CloudFront custom headers.

Serverless Dynamic Web Pages in AWS: Provisioned with CloudFormation

Post Syndicated from AWS Admin original https://aws.amazon.com/blogs/architecture/serverless-dynamic-web-pages-in-aws-provisioned-with-cloudformation/

This blog is authored by Mike Okner of Monsanto, an AWS customer. It originally appeared on the Monsanto company blog. Minor edits were made to the original post.

Recently, I was looking to create a status page app to monitor a few important internal services. I wanted this app to be as lightweight, reliable, and hassle-free as possible, so using a “serverless” architecture that doesn’t require any patching or other maintenance was quite appealing.

I also don’t deploy anything in a production AWS environment outside of some sort of template (usually CloudFormation) as a rule. I don’t want to have to come back to something I created ad hoc in the console after 6 months and try to recall exactly how I architected all of the resources. I’ll inevitably forget something and create more problems before solving the original one. So building the status page in a template was a requirement.

The Design
I settled on a design using two Lambda functions, both written in Python 3.6.

The first Lambda function makes requests out to a list of important services and writes their current status to a DynamoDB table. This function is executed once per minute via a CloudWatch Events rule.
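The post focuses on the CloudFormation template rather than the function code, but a minimal checker handler could look like the following sketch. The SERVICES list and the status fields are assumptions; only the TABLE_NAME environment variable and the table's "name" key come from the template below.

"""Status-checker Lambda (illustrative sketch)."""
import os
import time
import urllib.request

import boto3

# Assumption: the real list of monitored services is project-specific.
SERVICES = ["https://example.com/health"]

table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])

def handler(event, context):
    """Check each service and record its current status in DynamoDB."""
    for url in SERVICES:
        try:
            up = urllib.request.urlopen(url, timeout=5).getcode() == 200
        except Exception:
            up = False
        # "name" is the table's hash key (see the DynamoDB resource below).
        table.put_item(Item={"name": url, "up": up, "checked_at": int(time.time())})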

The second Lambda function reads each service’s status & uptime information from DynamoDB and renders a Jinja template. This function is behind an API Gateway that has been configured to return text/html instead of its default application/json Content-Type.
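A matching sketch for the rendering function, again illustrative: the inline HTML template is an assumption, and the auth-header check described later is omitted for brevity.

"""Web page rendering Lambda (illustrative sketch)."""
import os

import boto3
from jinja2 import Template  # jinja2 must be bundled into lambda.zip

# Assumption: the real project ships a fuller Jinja template.
PAGE = Template(
    "<html><body><h1>Service Status</h1>"
    "{% for s in services %}<p>{{ s['name'] }}: "
    "{{ 'UP' if s['up'] else 'DOWN' }}</p>{% endfor %}"
    "</body></html>"
)

table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])

def handler(event, context):
    """Read service statuses from DynamoDB and return rendered HTML."""
    services = table.scan()["Items"]
    # API Gateway's integration response maps this raw string to text/html.
    return PAGE.render(services=services)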

The CloudFormation Template
AWS provides a Serverless Application Model template transformer to streamline the templating of Lambda + API Gateway designs, but it assumes (like everything else about the API Gateway) that you’re actually serving an API that returns JSON content. So, unfortunately, it won’t work for this use-case because we want to return HTML content. Instead, we’ll have to enumerate every resource like usual.

The Skeleton
We’ll be using YAML for the template in this example. I find it easier to read than JSON, but you can easily convert between the two with a converter if you disagree.

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Serverless status page app
Resources:
  # [...Resources]

The Status-Checker Lambda Resource
This one is triggered on a schedule by CloudWatch, and looks like:

# Status Checker Lambda
CheckerLambda:
  Type: AWS::Lambda::Function
  Properties:
    Code: ./lambda.zip
    Environment:
      Variables:
        TABLE_NAME: !Ref DynamoTable
    Handler: checker.handler
    Role:
      Fn::GetAtt:
      - CheckerLambdaRole
      - Arn
    Runtime: python3.6
    Timeout: 45
CheckerLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    ManagedPolicyArns:
    - arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
    - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
      - Action:
        - sts:AssumeRole
        Effect: Allow
        Principal:
          Service:
          - lambda.amazonaws.com
CheckerLambdaTimer:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: rate(1 minute)
    Targets:
    - Id: CheckerLambdaTimerLambdaTarget
      Arn:
        Fn::GetAtt:
        - CheckerLambda
        - Arn
CheckerLambdaTimerPermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:invokeFunction
    FunctionName: !Ref CheckerLambda
    SourceArn:
      Fn::GetAtt:
      - CheckerLambdaTimer
      - Arn
    Principal: events.amazonaws.com

Let’s break that down a bit.

The CheckerLambda is the actual Lambda function. The Code section is a local path to a ZIP file containing the code and its dependencies. I’m using CloudFormation’s packaging feature to automatically push the deployable to S3.

The CheckerLambdaRole is the IAM role the Lambda will assume which grants it access to DynamoDB in addition to the usual Lambda logging permissions.

The CheckerLambdaTimer is the CloudWatch Events Rule that triggers the checker to run once per minute.

The CheckerLambdaTimerPermission grants CloudWatch the ability to invoke the checker Lambda function on its interval.

The Web Page Gateway
The API Gateway handles incoming requests for the web page, invokes the Lambda, and then returns the Lambda’s results as HTML content. Its template looks like:

# API Gateway for Web Page Lambda
PageGateway:
  Type: AWS::ApiGateway::RestApi
  Properties:
    Name: Service Checker Gateway
PageResource:
  Type: AWS::ApiGateway::Resource
  Properties:
    RestApiId: !Ref PageGateway
    ParentId:
      Fn::GetAtt:
      - PageGateway
      - RootResourceId
    PathPart: page
PageGatewayMethod:
  Type: AWS::ApiGateway::Method
  Properties:
    AuthorizationType: NONE
    HttpMethod: GET
    Integration:
      Type: AWS
      IntegrationHttpMethod: POST
      Uri:
        Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${WebRenderLambda.Arn}/invocations
      RequestTemplates:
        application/json: |
          {
              "method": "$context.httpMethod",
              "body" : $input.json('$'),
              "headers": {
                  #foreach($param in $input.params().header.keySet())
                  "$param": "$util.escapeJavaScript($input.params().header.get($param))"
                  #if($foreach.hasNext),#end
                  #end
              }
          }
      IntegrationResponses:
      - StatusCode: 200
        ResponseParameters:
          method.response.header.Content-Type: "'text/html'"
        ResponseTemplates:
          text/html: "$input.path('$')"
    ResourceId: !Ref PageResource
    RestApiId: !Ref PageGateway
    MethodResponses:
    - StatusCode: 200
      ResponseParameters:
        method.response.header.Content-Type: true
PageGatewayProdStage:
  Type: AWS::ApiGateway::Stage
  Properties:
    DeploymentId: !Ref PageGatewayDeployment
    RestApiId: !Ref PageGateway
    StageName: Prod
PageGatewayDeployment:
  Type: AWS::ApiGateway::Deployment
  DependsOn: PageGatewayMethod
  Properties:
    RestApiId: !Ref PageGateway
    Description: PageGateway deployment
    StageName: Stage

There’s a lot going on here, but the real meat is in the PageGatewayMethod section. There are a couple properties that deviate from the default which is why we couldn’t use the SAM transformer.

First, we’re passing request headers through to the Lambda in the RequestTemplates section. I’m doing this so I can validate incoming auth headers. The API Gateway can do some types of auth, but I found it easier to check auth myself in the Lambda function since the Gateway is designed to handle API calls and not browser requests.

Next, note that in the IntegrationResponses section we’re defining the Content-Type header to be 'text/html' (with single quotes) and defining the ResponseTemplate to be $input.path('$'). This is what makes the request render as an HTML page in your browser instead of just raw text.

Due to the StageName and PathPart values in the other sections, your actual page will be accessible at https://someId.execute-api.region.amazonaws.com/Prod/page. I have the page behind an existing reverse-proxy and give it a saner URL for end-users. The reverse proxy also attaches the auth header I mentioned above. If that header isn’t present, the Lambda will render an error page instead so the proxy can’t be bypassed.

The Web Page Rendering Lambda
This Lambda is invoked by calls to the API Gateway and looks like:

# Web Page Lambda
WebRenderLambda:
  Type: AWS::Lambda::Function
  Properties:
    Code: ./lambda.zip
    Environment:
      Variables:
        TABLE_NAME: !Ref DynamoTable
    Handler: web.handler
    Role:
      Fn::GetAtt:
      - WebRenderLambdaRole
      - Arn
    Runtime: python3.6
    Timeout: 30
WebRenderLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    ManagedPolicyArns:
    - arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess
    - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
      - Action:
        - sts:AssumeRole
        Effect: Allow
        Principal:
          Service:
          - lambda.amazonaws.com
WebRenderLambdaGatewayPermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref WebRenderLambda
    Action: lambda:invokeFunction
    Principal: apigateway.amazonaws.com
    SourceArn:
      Fn::Sub:
      - arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${__ApiId__}/*/*/*
      - __ApiId__: !Ref PageGateway

The WebRenderLambda and WebRenderLambdaRole should look familiar.

The WebRenderLambdaGatewayPermission is similar to the Status Checker’s CloudWatch permission, only this time it allows the API Gateway to invoke this Lambda.

The DynamoDB Table
This one is straightforward.

# DynamoDB table
DynamoTable:
  Type: AWS::DynamoDB::Table
  Properties:
    AttributeDefinitions:
    - AttributeName: name
      AttributeType: S
    ProvisionedThroughput:
      WriteCapacityUnits: 1
      ReadCapacityUnits: 1
    TableName: status-page-checker-results
    KeySchema:
    - KeyType: HASH
      AttributeName: name

The Deployment
We’ve made it this far defining every resource in a template that we can check in to version control, so we might as well script the deployment as well rather than manually manage the CloudFormation Stack via the AWS web console.

Since I’m using the packaging feature, I first run:

$ aws cloudformation package \
    --template-file template.yaml \
    --s3-bucket <some-bucket-name> \
    --output-template-file template-packaged.yaml
Uploading to 34cd6e82c5e8205f9b35e71afd9e1548  1922559 / 1922559.0  (100.00%)
Successfully packaged artifacts and wrote output template to file template-packaged.yaml.

Then to deploy the template (whether new or modified), I run:

$ aws cloudformation deploy \
    --region '<aws-region>' \
    --template-file template-packaged.yaml \
    --stack-name '<some-name>' \
    --capabilities CAPABILITY_IAM
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - <some-name>

And that’s it! You’ve just created a dynamic web page that will never require you to SSH anywhere, patch a server, recover from a disaster after Amazon terminates your unhealthy EC2 instance, or deal with any number of other pitfalls that are now the problem of some ops person at AWS. And you can reproduce deployments and make changes with confidence because everything is defined in the template and can be tracked in version control.

Building a Multi-region Serverless Application with Amazon API Gateway and AWS Lambda

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/building-a-multi-region-serverless-application-with-amazon-api-gateway-and-aws-lambda/

This post written by: Magnus Bjorkman – Solutions Architect

Many customers are looking to run their services at global scale, deploying their backend to multiple regions. In this post, we describe how to deploy a Serverless API into multiple regions and how to leverage Amazon Route 53 to route the traffic between regions. We use latency-based routing and health checks to achieve an active-active setup that can fail over between regions in case of an issue. We leverage the new regional API endpoint feature in Amazon API Gateway to make this a seamless process for the API client making the requests. This post does not cover the replication of your data, which is another aspect to consider when deploying applications across regions.

Solution overview

Currently, the default API endpoint type in API Gateway is the edge-optimized API endpoint, which enables clients to access an API through an Amazon CloudFront distribution. This typically improves connection time for geographically diverse clients. By default, a custom domain name is globally unique and the edge-optimized API endpoint would invoke a Lambda function in a single region in the case of Lambda integration. You can’t use this type of endpoint with a Route 53 active-active setup and fail-over.

The new regional API endpoint in API Gateway moves the API endpoint into the region and the custom domain name is unique per region. This makes it possible to run a full copy of an API in each region and then use Route 53 to use an active-active setup and failover. The following diagram shows how you do this:

Active/active multi region architecture

  • Deploy your Rest API stack, consisting of API Gateway and Lambda, in two regions, such as us-east-1 and us-west-2.
  • Choose the regional API endpoint type for your API.
  • Create a custom domain name and choose the regional API endpoint type for that one as well. In both regions, you are configuring the custom domain name to be the same, for example, helloworldapi.replacewithyourcompanyname.com
  • Use the host name of the custom domain names from each region, for example, xxxxxx.execute-api.us-east-1.amazonaws.com and xxxxxx.execute-api.us-west-2.amazonaws.com, to configure record sets in Route 53 for your client-facing domain name, for example, helloworldapi.replacewithyourcompanyname.com

The above solution provides an active-active setup for your API across the two regions, but you are not doing failover yet. For that to work, set up a health check in Route 53:

Route 53 Health Check

A Route 53 health check must have an endpoint to call to check the health of a service. You could do a simple ping of your actual Rest API methods, but instead provide a specific method on your Rest API that does a deep ping: a Lambda function that checks the status of all of its dependencies.

In the case of the Hello World API, you don’t have any other dependencies. In a real-world scenario, you could check on dependencies such as databases, other APIs, and external dependencies. Route 53 health checks themselves cannot use your custom domain name endpoint’s DNS address, so you are going to directly call the API endpoints via their region-unique endpoint’s DNS address.

Walkthrough

The following sections describe how to set up this solution. You can find the complete solution at the blog-multi-region-serverless-service GitHub repo. Clone or download the repository locally to be able to do the setup as described.

Prerequisites

You need the following resources to set up the solution described in this post:

  • AWS CLI
  • An S3 bucket in each region in which to deploy the solution, which can be used by the AWS Serverless Application Model (SAM). You can use the following CloudFormation templates to create buckets in us-east-1 and us-west-2:
    • us-east-1:
    • us-west-2:
  • A hosted zone registered in Amazon Route 53. This is used for defining the domain name of your API endpoint, for example, helloworldapi.replacewithyourcompanyname.com. You can use a third-party domain name registrar and then configure the DNS in Amazon Route 53, or you can purchase a domain directly from Amazon Route 53.

Deploy API with health checks in two regions

Start by creating a small “Hello World” Lambda function that sends back a message in the region in which it has been deployed.


"""Return message."""
import logging

logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """Lambda handler for getting the hello world message."""

    region = context.invoked_function_arn.split(':')[3]

    logger.info("message: " + "Hello from " + region)
    
    return {
        "message": "Hello from " + region
    }

Also create a Lambda function for doing a health check that returns a value based on an environment variable (either “ok” or “fail”) to allow for ease of testing:


"""Return health."""
import logging
import os

logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """Lambda handler for getting the health."""

    logger.info("status: " + os.environ['STATUS'])
    
    return {
        "status": os.environ['STATUS']
    }

Deploy both of these using an AWS Serverless Application Model (SAM) template. SAM is a CloudFormation extension that is optimized for serverless, and provides a standard way to create a complete serverless application. You can find the full helloworld-sam.yaml template in the blog-multi-region-serverless-service GitHub repo.

A few things to highlight:

  • You are using inline Swagger to define your API so you can substitute the current region in the x-amazon-apigateway-integration section.
  • Most of the Swagger template covers CORS to allow you to test this from a browser.
  • You are also using substitution to populate the environment variable used by the “Hello World” method with the region into which it is being deployed.

The Swagger allows you to use the same SAM template in both regions.

You can only use SAM from the AWS CLI, so do the following from the command prompt. First, deploy the SAM template in us-east-1 with the following commands, replacing “<your bucket in us-east-1>” with a bucket in your account:


> cd helloworld-api
> aws cloudformation package --template-file helloworld-sam.yaml --output-template-file /tmp/cf-helloworld-sam.yaml --s3-bucket <your bucket in us-east-1> --region us-east-1
> aws cloudformation deploy --template-file /tmp/cf-helloworld-sam.yaml --stack-name multiregionhelloworld --capabilities CAPABILITY_IAM --region us-east-1

Second, do the same in us-west-2:


> aws cloudformation package --template-file helloworld-sam.yaml --output-template-file /tmp/cf-helloworld-sam.yaml --s3-bucket <your bucket in us-west-2> --region us-west-2
> aws cloudformation deploy --template-file /tmp/cf-helloworld-sam.yaml --stack-name multiregionhelloworld --capabilities CAPABILITY_IAM --region us-west-2

The API was created with the default endpoint type of Edge Optimized. Switch it to Regional. In the Amazon API Gateway console, select the API that you just created and choose the wheel-icon to edit it.

API Gateway edit API settings

In the edit screen, select the Regional endpoint type and save the API. Do the same in both regions.

Grab the URL for the API in the console by navigating to the method in the prod stage.

API Gateway endpoint link

You can now test this with curl:


> curl https://2wkt1cxxxx.execute-api.us-west-2.amazonaws.com/prod/helloworld
{"message": "Hello from us-west-2"}

Write down the domain name for the URL in each region (for example, 2wkt1cxxxx.execute-api.us-west-2.amazonaws.com), as you need that later when you deploy the Route 53 setup.

Create the custom domain name

Next, create an Amazon API Gateway custom domain name endpoint. As part of using this feature, you must have a hosted zone and domain available to use in Route 53 as well as an SSL certificate that you use with your specific domain name.

You can create the SSL certificate by using AWS Certificate Manager. In the ACM console, choose Get started (if you have no existing certificates) or Request a certificate. Fill out the form with the domain name to use for the custom domain name endpoint, which is the same across the two regions:

Amazon Certificate Manager request new certificate

Go through the remaining steps and validate the certificate for each region before moving on.
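If you’d rather script the certificate requests, a boto3 sketch follows; the domain name is the placeholder used throughout this post, and the request is made once per region because regional custom domain names need a certificate in their own region.

import boto3

domain = "helloworldapi.replacewithyourcompanyname.com"

for region in ("us-east-1", "us-west-2"):
    acm = boto3.client("acm", region_name=region)
    cert = acm.request_certificate(DomainName=domain, ValidationMethod="DNS")
    print(region, cert["CertificateArn"])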

You are now ready to create the endpoints. In the Amazon API Gateway console, choose Custom Domain Names, Create Custom Domain Name.

API Gateway create custom domain name

A few things to highlight:

  • The domain name is the same as what you requested earlier through ACM.
  • The endpoint configuration should be regional.
  • Select the ACM Certificate that you created earlier.
  • You need to create a base path mapping that connects back to your earlier API Gateway endpoint. Set the base path to v1 so you can version your API, and then select the API and the prod stage.

Choose Save. You should see your newly created custom domain name:

API Gateway custom domain setup

Note the value for Target Domain Name as you need that for the next step. Do this for both regions.
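These console steps can also be scripted. A hedged boto3 sketch, in which the certificate ARN and API ID are placeholders:

import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

# Create the regional custom domain name with the regional ACM certificate.
domain = apigw.create_domain_name(
    domainName="helloworldapi.replacewithyourcompanyname.com",
    regionalCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/...",  # placeholder
    endpointConfiguration={"types": ["REGIONAL"]},
)
print("Target Domain Name:", domain["regionalDomainName"])

# Base path mapping: /v1 maps to the prod stage of the API.
apigw.create_base_path_mapping(
    domainName="helloworldapi.replacewithyourcompanyname.com",
    basePath="v1",
    restApiId="your-rest-api-id",  # placeholder
    stage="prod",
)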

Deploy Route 53 setup

Use the global Route 53 service to provide DNS lookup for the Rest API, distributing the traffic in an active-active setup based on latency. You can find the full CloudFormation template in the blog-multi-region-serverless-service GitHub repo.

The template sets up health checks, for example, for us-east-1:


HealthcheckRegion1:
  Type: "AWS::Route53::HealthCheck"
  Properties:
    HealthCheckConfig:
      Port: "443"
      Type: "HTTPS_STR_MATCH"
      SearchString: "ok"
      ResourcePath: "/prod/healthcheck"
      FullyQualifiedDomainName: !Ref Region1HealthEndpoint
      RequestInterval: "30"
      FailureThreshold: "2"

Use the health check when you set up the record set and the latency routing, for example, for us-east-1:


Region1EndpointRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    Region: us-east-1
    HealthCheckId: !Ref HealthcheckRegion1
    SetIdentifier: "endpoint-region1"
    HostedZoneId: !Ref HostedZoneId
    Name: !Ref MultiregionEndpoint
    Type: CNAME
    TTL: 60
    ResourceRecords:
      - !Ref Region1Endpoint

You can create the stack by using the following link, copying in the domain names from the previous section, your existing hosted zone name, and the main domain name that is created (for example, hellowordapi.replacewithyourcompanyname.com):

The following screenshot shows what the parameters might look like:
Serverless multi region Route 53 health check

Specifically, the domain names that you collected earlier map as follows:

  • The domain names from the API Gateway “prod”-stage go into Region1HealthEndpoint and Region2HealthEndpoint.
  • The domain names from the custom domain names’ Target Domain Name values go into Region1Endpoint and Region2Endpoint.

Using the Rest API from server-side applications

You are now ready to use your setup. First, demonstrate the use of the API from server-side clients. You can demonstrate this by using curl from the command line:


> curl https://hellowordapi.replacewithyourcompanyname.com/v1/helloworld/
{"message": "Hello from us-east-1"}

Testing failover of Rest API in browser

Here’s how you can use this from the browser and test the failover. Find all of the files for this test in the browser-client folder of the blog-multi-region-serverless-service GitHub repo.

Use this html file:


<!DOCTYPE HTML>
<html>
<head>
    <meta charset="utf-8"/>
    <meta http-equiv="X-UA-Compatible" content="IE=edge"/>
    <meta name="viewport" content="width=device-width, initial-scale=1"/>
    <title>Multi-Region Client</title>
</head>
<body>
<div>
   <h1>Test Client</h1>

    <p id="client_result">

    </p>

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
    <script src="settings.js"></script>
    <script src="client.js"></script>
</body>
</html>

The html file uses this JavaScript file to repeatedly call the API and print the history of messages:


var messageHistory = "";

(function call_service() {

   $.ajax({
      url: helloworldMultiregionendpoint+'v1/helloworld/',
      dataType: "json",
      cache: false,
      success: function(data) {
         messageHistory+="<p>"+data['message']+"</p>";
         $('#client_result').html(messageHistory);
      },
      complete: function() {
         // Schedule the next request when the current one's complete
         setTimeout(call_service, 10000);
      },
      error: function(xhr, status, error) {
         $('#client_result').html('ERROR: '+status);
      }
   });

})();

Also, make sure to update the settings in settings.js to match the API Gateway endpoints for the DNS-proxy and the multi-regional endpoint for the Hello World API:

var helloworldMultiregionendpoint = "https://hellowordapi.replacewithyourcompanyname.com/";

You can now open the HTML file in the browser (you can do this directly from the file system) and you should see something like the following screenshot:

Serverless multi region browser test

You can test failover by changing the environment variable in your health check Lambda function. In the Lambda console, select your health check function and scroll down to the Environment variables section. For the STATUS key, modify the value to fail.

Lambda update environment variable
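The same change can be made from a script, which is handy for automated failover drills. A minimal boto3 sketch, where the function name is a placeholder for whatever name the SAM stack generated:

import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Note: this call replaces the function's entire environment variable map.
lambda_client.update_function_configuration(
    FunctionName="multiregionhelloworld-HealthCheck",  # placeholder name
    Environment={"Variables": {"STATUS": "fail"}},
)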

You should see the region switch in the test client:

Serverless multi region broker test switchover

During an emulated failure like this, the browser might take some additional time to switch over due to connection keep-alive functionality. If you are using a browser like Chrome, you can kill all the connections to see a more immediate fail-over: chrome://net-internals/#sockets

Summary

You have implemented a simple way to do multi-regional serverless applications that fail over seamlessly between regions, either being accessed from the browser or from other applications/services. You achieved this by using the capabilities of Amazon Route 53 to do latency based routing and health checks for fail-over. You unlocked the use of these features in a serverless application by leveraging the new regional endpoint feature of Amazon API Gateway.

The setup was fully scripted using CloudFormation, the AWS Serverless Application Model (SAM), and the AWS CLI, and it can be integrated into deployment tools to push the code across the regions to make sure it is available in all the needed regions. For more information about cross-region deployments, see Building a Cross-Region/Cross-Account Code Deployment Solution on AWS on the AWS DevOps blog.

Using Enhanced Request Authorizers in Amazon API Gateway

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/using-enhanced-request-authorizers-in-amazon-api-gateway/

Recently, AWS introduced a new type of authorizer in Amazon API Gateway, enhanced request authorizers. Previously, custom authorizers received only the bearer token included in the request and the ARN of the API Gateway method being called. Enhanced request authorizers receive all of the headers, query string, and path parameters as well as the request context. This enables you to make more sophisticated authorization decisions based on parameters such as the client IP address, user agent, or a query string parameter alongside the client bearer token.

Enhanced request authorizer configuration

From the API Gateway console, you can declare a new enhanced request authorizer by selecting the Request option as the AWS Lambda event payload:

Create enhanced request authorizer

 

Just like normal custom authorizers, API Gateway can cache the policy returned by your Lambda function. With enhanced request authorizers, however, you can also specify the values that form the unique key of a policy in the cache. For example, if your authorization decision is based on both the bearer token and the IP address of the client, both values should be part of the unique key in the policy cache. The identity source parameter lets you specify these values as mapping expressions:

  • The bearer token appears in the Authorization header
  • The client IP address is stored in the sourceIp parameter of the request context.

Configure identity sources

 

Using enhanced request authorizers with Swagger

You can also define enhanced request authorizers in your Swagger (Open API) definitions. In the following example, you can see that all of the options configured in the API Gateway console are available as custom extensions in the API definition. For example, the identitySource field is a comma-separated list of mapping expressions.

securityDefinitions:
  IpAuthorizer:
    type: "apiKey"
    name: "IpAuthorizer"
    in: "header"
    x-amazon-apigateway-authtype: "custom"
    x-amazon-apigateway-authorizer:
      authorizerResultTtlInSeconds: 300
      identitySource: "method.request.header.Authorization, context.identity.sourceIp"
      authorizerUri: "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:XXXXXXXXXX:function:py-ip-authorizer/invocations"
      type: "request"

After you have declared your authorizer in the security definitions section, you can use it in your API methods:

---
swagger: "2.0"
info:
  title: "request-authorizer-demo"
basePath: "/dev"
paths:
  /hello:
    get:
      security:
      - IpAuthorizer: []
...

Enhanced request authorizer Lambda functions

Enhanced request authorizer Lambda functions receive an event object that is similar to proxy integrations. It contains all of the information about a request, excluding the body.

{
    "methodArn": "arn:aws:execute-api:us-east-1:XXXXXXXXXX:xxxxxx/dev/GET/hello",
    "resource": "/hello",
    "requestContext": {
        "resourceId": "xxxx",
        "apiId": "xxxxxxxxx",
        "resourcePath": "/hello",
        "httpMethod": "GET",
        "requestId": "9e04ff18-98a6-11e7-9311-ef19ba18fc8a",
        "path": "/dev/hello",
        "accountId": "XXXXXXXXXXX",
        "identity": {
            "apiKey": "",
            "sourceIp": "58.240.196.186"
        },
        "stage": "dev"
    },
    "queryStringParameters": {},
    "httpMethod": "GET",
    "pathParameters": {},
    "headers": {
        "cache-control": "no-cache",
        "x-amzn-ssl-client-hello": "AQACJAMDAAAAAAAAAAAAAAAAAAAAAAAAAAAA…",
        "Accept-Encoding": "gzip, deflate",
        "X-Forwarded-For": "54.240.196.186, 54.182.214.90",
        "Accept": "*/*",
        "User-Agent": "PostmanRuntime/6.2.5",
        "Authorization": "hello"
    },
    "stageVariables": {},
    "path": "/hello",
    "type": "REQUEST"
}

The following enhanced request authorizer snippet is written in Python and compares the source IP address against a list of valid IP addresses. The comments in the code explain what happens in each step.

...
VALID_IPS = ["58.240.195.186", "201.246.162.38"]

def lambda_handler(event, context):

    # Read the client’s bearer token.
    jwtToken = event["headers"]["Authorization"]
    
    # Read the source IP address for the request from
    # the API Gateway context object.
    clientIp = event["requestContext"]["identity"]["sourceIp"]
    
    # Verify that the client IP address is allowed.
    # If it’s not valid, raise an exception to make sure
    # that API Gateway returns a 401 status code.
    if clientIp not in VALID_IPS:
        raise Exception('Unauthorized')
    
    # Validate the bearer token. validate_jwt is assumed to return
    # the user ID on success and a falsy value otherwise.
    userId = validate_jwt(jwtToken)
    if not userId:
        raise Exception('Unauthorized')

    # Use the values from the event object to populate the 
    # required parameters in the policy object.
    policy = AuthPolicy(userId, event["requestContext"]["accountId"])
    policy.restApiId = event["requestContext"]["apiId"]
    policy.region = event["methodArn"].split(":")[3]
    policy.stage = event["requestContext"]["stage"]
    
    # Use the scopes from the bearer token to make a 
    # decision on which methods to allow in the API.
    policy.allowMethod(HttpVerb.GET, '/hello')

    # Finally, build the policy.
    authResponse = policy.build()

    return authResponse
...
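The AuthPolicy and HttpVerb helpers above come from the API Gateway custom authorizer blueprint. To show what the returned object boils down to, here is a sketch that builds the response by hand; the principal ID is illustrative.

def lambda_handler(event, context):
    # Allow the call only from known IP addresses, as above.
    client_ip = event["requestContext"]["identity"]["sourceIp"]
    if client_ip not in VALID_IPS:
        raise Exception("Unauthorized")

    # An authorizer must return a principal ID and an IAM policy that
    # allows (or denies) execute-api:Invoke on the method being called.
    return {
        "principalId": "user",  # illustrative principal
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow",
                "Resource": event["methodArn"],
            }],
        },
    }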

Conclusion

API Gateway customers build complex APIs, and authorization decisions often go beyond the simple properties in a JWT token. For example, users may be allowed to call the “list cars” endpoint but only with a specific subset of filter parameters. With enhanced request authorizers, you have access to all request parameters. You can centralize all of your application’s access control decisions in a Lambda function, making it easier to manage your application security.

Secure API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway

Post Syndicated from Ed Lima original https://aws.amazon.com/blogs/compute/secure-api-access-with-amazon-cognito-federated-identities-amazon-cognito-user-pools-and-amazon-api-gateway/

Ed Lima, Solutions Architect

 

Our identities are what define us as human beings. Philosophical discussions aside, it also applies to our day-to-day lives. For instance, I need my work badge to get access to my office building or my passport to travel overseas. My identity in this case is attached to my work badge or passport. As part of the system that checks my access, these documents or objects help define whether I have access to get into the office building or travel internationally.

This exact same concept can also be applied to cloud applications and APIs. To provide secure access to your application users, you define who can access the application resources and what kind of access can be granted. Access is based on identity controls that can confirm authentication (AuthN) and authorization (AuthZ), which are different concepts. According to Wikipedia:

 

The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that “you are who you say you are,” authorization is the process of verifying that “you are permitted to do what you are trying to do.” This does not mean authorization presupposes authentication; an anonymous agent could be authorized to a limited action set.

Amazon Cognito allows building, securing, and scaling a solution to handle user management and authentication, and to sync across platforms and devices. In this post, I discuss the different ways that you can use Amazon Cognito to authenticate API calls to Amazon API Gateway and secure access to your own API resources.

 

Amazon Cognito Concepts

 

It’s important to understand that Amazon Cognito provides three different services: Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon Cognito Sync.

Today, I discuss the use of the first two. One service doesn’t need the other to work; however, they can be configured to work together.
 

Amazon Cognito Federated Identities

 
To use Amazon Cognito Federated Identities in your application, create an identity pool. An identity pool is a store of user data specific to your account. It can be configured to require an identity provider (IdP) for user authentication, after you enter details such as app IDs or keys related to that specific provider.

After the user is validated, the provider sends an identity token to Amazon Cognito Federated Identities. In turn, Amazon Cognito Federated Identities contacts the AWS Security Token Service (AWS STS) to retrieve temporary AWS credentials based on a configured, authenticated IAM role linked to the identity pool. The role has appropriate IAM policies attached to it and uses these policies to provide access to other AWS services.
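To make that exchange concrete, a hedged boto3 sketch follows; the identity pool ID and provider token are placeholders, and Facebook stands in for whichever IdP you configure.

import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")
logins = {"graph.facebook.com": "<token-from-the-identity-provider>"}  # placeholder

# Step 1: exchange the IdP token for a Cognito identity ID.
identity = cognito.get_id(
    IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000",  # placeholder
    Logins=logins,
)

# Step 2: Cognito calls AWS STS on your behalf and returns temporary
# credentials scoped by the identity pool's authenticated IAM role.
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins=logins,
)["Credentials"]

print(creds["AccessKeyId"], creds["Expiration"])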

Amazon Cognito Federated Identities currently supports the IdPs listed in the following graphic.

 




How to remove boilerplate validation logic in your REST APIs with Amazon API Gateway request validation

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/how-to-remove-boilerplate-validation-logic-in-your-rest-apis-with-amazon-api-gateway-request-validation/


Ryan Green, Software Development Engineer

Does your API suffer from code bloat or wasted developer time due to implementation of simple input validation rules? One of the necessary but least exciting aspects of building a robust REST API involves implementing basic validation of input data to your API. In addition to increasing the size of the code base, validation logic may require taking on extra dependencies and requires diligence in ensuring the API implementation doesn’t get out of sync with API request/response models and SDKs.

Amazon API Gateway recently announced the release of request validators, a simple but powerful new feature that should help to liberate API developers from the undifferentiated effort of implementing basic request validation in their API backends.

This feature leverages API Gateway models to enable the validation of request payloads against the specified schema, including validation rules as defined in the JSON-Schema Validation specification. Request validators also support basic validation of required HTTP request parameters in the URI, query string, and headers.

When a validation failure occurs, API Gateway fails the API request with an HTTP 400 error, does not forward the request to the backend integration, and publishes detailed error results in Amazon CloudWatch Logs.

In this post, I show two examples using request validators, validating the request body and the request parameters.

Example: Validating the request body

For this example, you build a simple API for a simulated stock trading system. This API has a resource, "/orders", that represents stock purchase orders. An HTTP POST to this resource allows the client to initiate one or more orders.

A sample request might look like this:

POST /orders

[
  {
    "account-id": "abcdef123456",
    "type": "STOCK",
    "symbol": "AMZN",
    "shares": 100,
    "details": {
      "limit": 1000
    }
  },
  {
    "account-id": "zyxwvut987654",
    "type": "STOCK",
    "symbol": "BA",
    "shares": 250,
    "details": {
      "limit": 200
    }
  }
]

The JSON-Schema for this request body might look something like this:

{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "Create Orders Schema",
  "type": "array",
  "minItems": 1,
  "items": {
    "type": "object",
    "required": [
      "account-id",
      "type",
      "symbol",
      "shares",
      "details"
    ],
    "properties": {
      "account_id": {
        "type": "string",
        "pattern": "[A-Za-z]{6}[0-9]{6}"
      },
      "type": {
        "type": "string",
        "enum": [
          "STOCK",
          "BOND",
          "CASH"
        ]
      },
      "symbol": {
        "type": "string",
        "minLength": 1,
        "maxLength": 4
      },
      "shares": {
        "type": "number",
        "minimum": 1,
        "maximum": 1000
      },
      "details": {
        "type": "object",
        "required": [
          "limit"
        ],
        "properties": {
          "limit": {
            "type": "number"
          }
        }
      }
    }
  }
}

This schema defines the "shape" of the request model but also defines several constraints on the various properties. Here are the validation rules for this schema:

  • The root array must have at least 1 item
  • All properties are required
  • Account ID must match the regular expression format "[A-Za-z]{6}[0-9]{6}"
  • Type must be one of STOCK, BOND, or CASH
  • Symbol must be a string between 1 and 4 characters
  • Shares must be a number between 1 and 1000

I’m sure you can imagine how this would look in your validation library of choice, or at worst, in a hand-coded implementation.
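For illustration, here is roughly the kind of boilerplate that request validators let you delete. This sketch uses the Python jsonschema package; the schema file name and handler shape are assumptions.

# pip install jsonschema
import json

from jsonschema import ValidationError, validate

# The same JSON-Schema shown above, loaded from a file for brevity.
with open("create-orders-schema.json") as f:
    SCHEMA = json.load(f)

def create_orders(request_body):
    try:
        validate(instance=request_body, schema=SCHEMA)
    except ValidationError as err:
        return {"statusCode": 400, "body": json.dumps({"message": err.message})}
    # Business logic only runs for payloads that passed validation.
    return {"statusCode": 200, "body": json.dumps({"message": "Orders successfully created"})}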

Now, try this out with API Gateway request validators. The Swagger definition below defines the REST API, models, and request validators. Its two operations define simple mock integrations to simulate behavior of the stock trading API.

Note the request validator definitions under the "x-amazon-apigateway-request-validators" extension, and the references to these validators defined on the operation and on the API.

{
  "swagger": "2.0",
  "info": {
    "title": "API Gateway - Request Validation Demo - [email protected]"
  },
  "schemes": [
    "https"
  ],
  "produces": [
    "application/json"
  ],
  "x-amazon-apigateway-request-validators" : {
    "full" : {
      "validateRequestBody" : true,
      "validateRequestParameters" : true
    },
    "body-only" : {
      "validateRequestBody" : true,
      "validateRequestParameters" : false
    }
  },
  "x-amazon-apigateway-request-validator" : "full",
  "paths": {
    "/orders": {
      "post": {
        "x-amazon-apigateway-request-validator": "body-only",
        "parameters": [
          {
            "in": "body",
            "name": "CreateOrders",
            "required": true,
            "schema": {
              "$ref": "#/definitions/CreateOrders"
            }
          }
        ],
        "responses": {
          "200": {
            "schema": {
              "$ref": "#/definitions/Message"
            }
          },
          "400" : {
            "schema": {
              "$ref": "#/definitions/Message"
            }
          }
        },
        "x-amazon-apigateway-integration": {
          "responses": {
            "default": {
              "statusCode": "200",
              "responseTemplates": {
                "application/json": "{\"message\" : \"Orders successfully created\"}"
              }
            }
          },
          "requestTemplates": {
            "application/json": "{\"statusCode\": 200}"
          },
          "passthroughBehavior": "never",
          "type": "mock"
        }
      },
      "get": {
        "parameters": [
          {
            "in": "header",
            "name": "Account-Id",
            "required": true
          },
          {
            "in": "query",
            "name": "type",
            "required": false
          }
        ],
        "responses": {
          "200" : {
            "schema": {
              "$ref": "#/definitions/Orders"
            }
          },
          "400" : {
            "schema": {
              "$ref": "#/definitions/Message"
            }
          }
        },
        "x-amazon-apigateway-integration": {
          "responses": {
            "default": {
              "statusCode": "200",
              "responseTemplates": {
                "application/json": "[{\"order-id\" : \"qrx987\",\n   \"type\" : \"STOCK\",\n   \"symbol\" : \"AMZN\",\n   \"shares\" : 100,\n   \"time\" : \"1488217405\",\n   \"state\" : \"COMPLETED\"\n},\n{\n   \"order-id\" : \"foo123\",\n   \"type\" : \"STOCK\",\n   \"symbol\" : \"BA\",\n   \"shares\" : 100,\n   \"time\" : \"1488213043\",\n   \"state\" : \"COMPLETED\"\n}\n]"
              }
            }
          },
          "requestTemplates": {
            "application/json": "{\"statusCode\": 200}"
          },
          "passthroughBehavior": "never",
          "type": "mock"
        }
      }
    }
  },
  "definitions": {
    "CreateOrders": {
      "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "Create Orders Schema",
      "type": "array",
      "minItems" : 1,
      "items": {
        "type": "object",
        "$ref" : "#/definitions/Order"
      }
    },
    "Orders" : {
      "type": "array",
      "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "Get Orders Schema",
      "items": {
        "type": "object",
        "properties": {
          "order_id": { "type": "string" },
          "time" : { "type": "string" },
          "state" : {
            "type": "string",
            "enum": [
              "PENDING",
              "COMPLETED"
            ]
          },
          "order" : {
            "$ref" : "#/definitions/Order"
          }
        }
      }
    },
    "Order" : {
      "type": "object",
      "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "Schema for a single Order",
      "required": [
        "account-id",
        "type",
        "symbol",
        "shares",
        "details"
      ],
      "properties" : {
        "account-id": {
          "type": "string",
          "pattern": "[A-Za-z]{6}[0-9]{6}"
        },
        "type": {
          "type" : "string",
          "enum" : [
            "STOCK",
            "BOND",
            "CASH"]
        },
        "symbol" : {
          "type": "string",
          "minLength": 1,
          "maxLength": 4
        },
        "shares": {
          "type": "number",
          "minimum": 1,
          "maximum": 1000
        },
        "details": {
          "type": "object",
          "required": [
            "limit"
          ],
          "properties": {
            "limit": {
              "type": "number"
            }
          }
        }
      }
    },
    "Message": {
      "type": "object",
      "properties": {
        "message" : {
          "type" : "string"
        }
      }
    }
  }
}

To create the demo API, run the following commands (requires the AWS CLI):

git clone https://github.com/rpgreen/apigateway-validation-demo.git
cd apigateway-validation-demo
aws apigateway import-rest-api --body "file://validation-swagger.json" --region us-east-1
export API_ID=[API ID from last step]
aws apigateway create-deployment --rest-api-id $API_ID --stage-name test --region us-east-1

Make some requests to this API. Here’s the happy path with a valid request body:

curl -v -H "Content-Type: application/json" -X POST -d ' [  
   { 
      "account-id":"abcdef123456",
      "type":"STOCK",
      "symbol":"AMZN",
      "shares":100,
      "details":{  
         "limit":1000
      }
   }
]' https://$API_ID.execute-api.us-east-1.amazonaws.com/test/orders

Response:

HTTP/1.1 200 OK

{"message" : "Orders successfully created"}

Put the request validator to the test. Notice the errors in the payload:

curl -v -H "Content-Type: application/json" -X POST -d '[
  {
    "account-id": "abcdef123456",
    "type": "foobar",
    "symbol": "thisstringistoolong",
    "shares": 999999,
    "details": {
       "limit": 1000
    }
  }
]' https://$API_ID.execute-api.us-east-1.amazonaws.com/test/orders

Response:

HTTP/1.1 400 Bad Request

{"message": "Invalid request body"}

When you inspect the CloudWatch Logs entries for this API, you see the detailed error messages for this payload. Run the following commands:

pip install apilogs

apilogs get --api-id $API_ID --stage test --watch --region us-east-1

The CloudWatch Logs entry for this request reveals the specific validation errors:

"Request body does not match model schema for content type application/json: [numeric instance is greater than the required maximum (maximum: 1000, found: 999999), string "thisstringistoolong" is too long (length: 19, maximum allowed: 4), instance value ("foobar") not found in enum (possible values: ["STOCK","BOND","CASH"])]"

Note on Content-Type: 

Request body validation is performed according to the configured request model, which is selected by the value of the request’s Content-Type header. To enforce validation and restrict requests to explicitly defined content types, it’s a good idea to use strict request passthrough behavior ("passthroughBehavior": "never"), so that unsupported content types fail with a 415 "Unsupported Media Type" response.

Example: Validating the request parameters

For the next example, add a GET method to the /orders resource that returns the list of purchase orders. This method has an optional query string parameter (type) and a required header parameter (Account-Id).

The request validator configured for the GET method is set to validate incoming request parameters. This performs basic validation on the required parameters, ensuring that the request parameters are present and non-blank.

Here are some example requests.

Happy path:

curl -v -H "Account-Id: abcdef123456" "https://$API_ID.execute-api.us-east-1.amazonaws.com/test/orders?type=STOCK"

Response:

HTTP/1.1 200 OK

[{"order-id" : "qrx987",
   "type" : "STOCK",
   "symbol" : "AMZN",
   "shares" : 100,
   "time" : "1488217405",
   "state" : "COMPLETED"
},
{
   "order-id" : "foo123",
   "type" : "STOCK",
   "symbol" : "BA",
   "shares" : 100,
   "time" : "1488213043",
   "state" : "COMPLETED"
}]

Omitting optional type parameter:

curl -v -H "Account-Id: abcdef123456" "https://$API_ID.execute-api.us-east-1.amazonaws.com/test/orders"

Response:

HTTP/1.1 200 OK

[{"order-id" : "qrx987",
   "type" : "STOCK",
   "symbol" : "AMZN",
   "shares" : 100,
   "time" : "1488217405",
   "state" : "COMPLETED"
},
{
   "order-id" : "foo123",
   "type" : "STOCK",
   "symbol" : "BA",
   "shares" : 100,
   "time" : "1488213043",
   "state" : "COMPLETED"
}]

Omitting required Account-Id parameter:

curl -v "https://$API_ID.execute-api.us-east-1.amazonaws.com/test/orders?type=STOCK"

Response:

HTTP/1.1 400 Bad Request

{"message": "Missing required request parameters: [Account-Id]"}

Conclusion

Request validators help API developers build better APIs by letting them remove boilerplate validation logic from backend implementations and focus on actual business logic and deeper validation. This reduces the size of the API codebase and helps keep API models and validation logic in sync.

Please forward any questions or feedback to the API Gateway team through AWS Support or on the AWS Forums.

How to Use Service Control Policies in AWS Organizations to Enforce Healthcare Compliance in Your AWS Account

Post Syndicated from Aaron Lima original https://aws.amazon.com/blogs/security/how-to-use-service-control-policies-in-aws-organizations-to-enforce-healthcare-compliance-in-your-aws-account/

AWS customers with healthcare compliance requirements such as the U.S. Health Insurance Portability and Accountability Act (HIPAA) and Good Laboratory, Clinical, and Manufacturing Practices (GxP) might want to control access to the AWS services their developers use to build and operate their GxP and HIPAA systems. For example, customers with GxP requirements might approve AWS as a supplier on the basis of AWS’s SOC certification and therefore want to ensure that only the services in scope for SOC are available to developers of GxP systems. Likewise, customers with HIPAA requirements might want to ensure that only AWS HIPAA Eligible Services are available to store and process protected health information (PHI). Now with AWS Organizations—policy-based management for multiple AWS accounts—you can programmatically control access to the services within your AWS accounts.

In this blog post, I show how to restrict an AWS account to HIPAA Eligible Services as well as explain why you should include additional supporting AWS services with service control policies (SCPs) in AWS Organizations. Although this example is HIPAA related, you can repurpose it for GxP, a database of Genotypes and Phenotypes (dbGaP) solutions, or other healthcare compliance requirements for which you want to control developers’ access to a specific scope of services.

Managing an account hierarchy with AWS Organizations

Let’s say I manage four AWS accounts: a Payer account, a Development account, a Corporate IT account, and a fourth account that contains PHI. In accordance with AWS’s Business Associate Agreement (BAA), I want to be sure that only AWS HIPAA Eligible Services are allowed in the fourth account along with supporting AWS services that help encrypt and control access to the account. The following diagram shows a logical view of the associated account structure.

Diagram showing the logical view of the account structure

As illustrated in the preceding diagram, Organizations allows me to create this account hierarchy between the four AWS accounts I manage. Before I proceed to show how to create and apply an SCP to the HIPAA account in this hierarchy, I’ll define some Organizations terminology that I use in this post:

  • Organization – A consolidated set of AWS accounts that you manage. For the preceding example, I have already created my organization and invited my accounts. For more information about creating an organization and inviting accounts, see AWS Organizations – Policy-Based Management for Multiple AWS Accounts.
  • Master account – The management hub for Organizations. This is where I invite existing accounts, create new accounts and manage my SCPs. I run all commands demonstrated in this post from this master account. This is also my payer account in the preceding account structure diagram.
  • Service control policy (SCP) – A set of controls that the organization’s master account can apply to the organization, selected OUs, and selected accounts. SCPs allow me to whitelist or blacklist services and actions that I can delegate to the users and roles in the account to which the SCPs are applied. The effective permissions for a user or role are the intersection of the permissions allowed by an SCP and the permissions granted by an AWS Identity and Access Management (IAM) policy. I refer to SCPs as a policy type in some of this post’s command-line arguments.
  • Organizational unit (OU) – A container for a set of AWS accounts. OUs can be arranged into a hierarchy that can be as many as five levels deep. The top of the hierarchy of OUs is also known as the administrative root. In the walkthrough, I create a HIPAA OU and apply my policy to that OU. I then move the account into the OU to have the policy applied. To manage the organization depicted above, I might create OUs for my Corporate IT account and my Development account.

To restrict services in the fourth account to HIPAA Eligible Services and required supporting services, I will show how to create and apply an SCP to the account with the following steps:

  1. Create a JSON document that lists HIPAA Eligible Services and supporting AWS services.
  2. Create an SCP with a JSON document.
  3. Create an OU for the HIPAA account, and move the account into the OU.
  4. Attach the SCP to the HIPAA OU.
  5. Verify which SCPs are attached to the HIPAA OU.
  6. Detach the default FullAWSAccess SCP from the OU.
  7. Verify SCP enforcement.

How to create and apply an SCP to an account

Let’s walk through the steps to create an SCP and apply it to an account. I can manage my organization by using the Organizations console, AWS CLI, or AWS API from my master account. For the purposes of this post, I will demonstrate the creation and application of an SCP to my account by using the AWS CLI.

1.  Create a JSON document that lists HIPAA Eligible Services and supporting AWS services

Creating an SCP will be familiar if you have experience writing an IAM policy because the grammar in crafting the policy is similar. I will create a JSON document that lists only the services I want to allow in my account, and I will use this JSON document to create my SCP via the command line. The SCP I create from this document allows all actions for all resources of the listed services, effectively turning on only these services in my account. I name the document HIPAAExample.json and save it to the directory from which I will demonstrate the CLI commands.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "dynamodb:*", "rds:*", "ec2:*", "s3:*", "elasticmapreduce:*",
                "glacier:*", "elasticloadbalancing:*", "cloudwatch:*",
                "importexport:*", "cloudformation:*", "redshift:*",
                "iam:*", "health:*", "config:*", "snowball:*",
                "trustedadvisor:*", "kms:*", "apigateway:*",
                "autoscaling:*", "directconnect:*",
                "execute-api:*", "sts:*"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

Note that the SCP includes more than just the HIPAA Eligible Services.

Why include additional supporting services in a HIPAA SCP?

You can use any service in your account, but you can use only HIPAA Eligible Services to store and process PHI. Some services, such as IAM and AWS Key Management Service (KMS), can be used because these services do not directly store or process PHI, but they might still be needed for administrative and security purposes.

To those ends, I include the following supporting services in the SCP to help me with account administration and security:

  • Access controls – I include IAM to ensure that I can manage access to resources in the account. Though Organizations can limit whether a service is available, I still need the granularity of access control that IAM provides.
  • Encryption – I need a way to encrypt the data. The integration of AWS KMS with Amazon Redshift, Amazon RDS, and Amazon Elastic Block Store (Amazon EBS) helps with this security requirement.
  • Auditing – I also need to be able to demonstrate controls in practice, track changes, and discover any malicious activity in my account. You will note that AWS CloudTrail is not included in the SCP, which prohibits any mutating actions against CloudTrail by users within the account. However, when the account was set up, CloudTrail was configured to send logs to a separate logging account, as recommended in AWS Multiple Account Security Strategy. The logs do not reside in the account, and no one in the account (including root and administrators) has privileges to change the trail, which helps ensure the protection of the account's API logging. This highlights how SCPs can be used to secure services in an account.
  • Automation – Automation can help me with my security controls as shown in How to Translate HIPAA Controls to AWS CloudFormation Templates: Part 3 of the Automating HIPAA Compliance Series; therefore, I consider including AWS CloudFormation as a way to ensure that applications deployed in the account adhere to my security and compliance policies. Auto Scaling also is an important service to include to help me scale to meet demand and control cost.
  • Monitoring and support – The remaining services in the SCP such as Amazon CloudWatch are needed to make sure that I can monitor the environment and have visibility into the health of the workloads and applications in my AWS account, helping me maintain operational control. AWS Trusted Advisor is a service that helps to make sure that my cloud environment is well architected.

Now that I have created my JSON document with the services that I will include and explained in detail why I include them, I can create my SCP.

2.  Create an SCP with a JSON document

I will now create the SCP via the CLI with the aws organizations create-policy command. Using the name parameter, I name the SCP and define that I am creating an SCP, both of which are required parameters. I then provide a brief description of the SCP and specify the location of the JSON document I created in Step 1.

aws organizations create-policy --name hipaa-example-policy --type SERVICE_CONTROL_POLICY --description "All HIPAA eligible services plus supporting AWS Services." --content file://./HIPAAExample.json

Output

{
    "policy": {
        "policySummary": {
            "type": "SERVICE_CONTROL_POLICY",
            "arn": "arn:aws:organizations::012345678900:policy/o-kzceys2q4j/SERVICE_CONTROL_POLICY/p-6ldl8bll",
            "name": "hipaa-example-policy",
            "awsManaged": false,
            "id": "p-6ldl8bll",
            "description": "All HIPAA eligible services plus supporting AWS Services."
        }
    }
}

I take note of the policy-id because I need it to attach the SCP to my OU in Step 4. Note: Throughout this post, fictitious placeholder values are shown for the purposes of demonstrating this post’s solution.
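
If you misplace the policy ID later, you can list all SCPs in the organization (a standard Organizations call, run from the master account) and find it again:

aws organizations list-policies --filter SERVICE_CONTROL_POLICY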

3.  Create an OU for the HIPAA account, and move the account into the OU

Grouping accounts by function will make it easier to manage the organization and apply policies across multiple accounts. In this step, I create an OU for the HIPAA account and move the target account into the OU. To create an OU, I need to know the ID for the parent object under which I will be placing the OU. In this case, I will place it under the root and need the ID for the root. To get the root ID, I run the list-roots command.

aws organizations list-roots

Output

{
    "Roots": [
        {
            "PolicyTypes": [
                {
                    "Status": "ENABLED", 
                    "Type": "SERVICE_CONTROL_POLICY"
                }
            ], 
            "Id": "r-rth4", 
            "Arn": "arn:aws:organizations::012345678900:root/o-p9bx61i0h1/r-rth4", 
            "Name": "Root"
        }
    ]
}

With the root ID, I can proceed to create the OU under the root.

aws organizations create-organizational-unit --parent-id r-rth4 --name HIPAA-Accounts

Output

{
    "OrganizationalUnit": {
       "Id": "ou-rth4-ezo5wonz", 
        "Arn": "arn:aws:organizations::012345678900:ou/o-p9bx61i0h1/ou-rth4-ezo5wonz", 
        "Name": "HIPAA-Accounts"
    }
}

I take note of the OU ID in the output because I need it in the next command to move my target account. I will also need the root ID in the command because I am moving the target account from the root into the OU.

aws organizations move-account --account-id 098765432110 --source-parent-id r-rth4 --destination-parent-id ou-rth4-ezo5wonz

No Output

 

4.  Attach the SCP to the HIPAA OU

Even though you may have enabled All Features in your organization, you still need to enable SCPs at the root level of the organization to attach SCPs to objects. To do this in my case, I will run the enable-policy-type command and provide the root ID.

aws organizations enable-policy-type --root-id r-rth4 --policy-type SERVICE_CONTROL_POLICY

Output

{
    "Root": {
        "PolicyTypes": [], 
        "Id": "r-rth4", 
        "Arn": "arn:aws:organizations::012345678900:root/o-p9bx61i0h1/r-rth4", 
        "Name": "Root"
    }
}

Now, I will attach the SCP to the OU by using the aws organizations attach-policy command. I must include the target-id, which is the OU ID noted in the previous step and the policy-id from the output of the command in Step 2.

aws organizations attach-policy --target-id ou-rth4-ezo5wonz --policy-id p-6ldl8bll

No Output

 

5.  Verify which SCPs are attached to the HIPAA OU

I will now verify which SCPs are attached to the HIPAA OU by using the aws organizations list-policies-for-target command. I must provide the OU ID with the target-id parameter and then filter for the SERVICE_CONTROL_POLICY type.

aws organizations list-policies-for-target --target-id ou-rth4-ezo5wonz --filter SERVICE_CONTROL_POLICY

Output

{
    "policies": [
        {
            "awsManaged": false,
            "arn": "arn:aws:organizations::012345678900:policy/o-kzceys2q4j/SERVICE_CONTROL_POLICY/p-6ldl8bll",
            "id": "p-6ldl8bll",
            "description": "All HIPAA eligible services plus supporting AWS Services.",
            "name": "hipaa-example-policy",
            "type": "SERVICE_CONTROL_POLICY"
        },
        {
            "awsManaged": true,
            "arn": "arn:aws:organizations::aws:policy/SERVICE_CONTROL_POLICY/p-FullAWSAccess",
            "id": "p-FullAWSAccess",
            "description": "Allows access to every operation",
            "name": "FullAWSAccess",
            "type": "SERVICE_CONTROL_POLICY"
        }
    ]
}

As the output shows, two SCPs are attached to this OU. I want to detach the FullAWSAccess SCP so that the HIPAA SCP is properly in effect. The FullAWSAccess SCP is an Allow SCP that allows all AWS services. If I were to leave the default FullAWSAccess SCP in place, it would grant access to services I do not want to allow in my account. Detaching the FullAWSAccess SCP means that only the services I allow in the hipaa-example-policy are allowed in my account. Note that if I were to create a Deny SCP, it would take precedence over an Allow SCP.
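
For illustration only (this walkthrough uses the whitelisting approach instead), a Deny SCP that blocks a single service regardless of any Allow SCPs might look like the following sketch:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "workspaces:*",
            "Resource": "*"
        }
    ]
}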

6.  Detach the default FullAWSAccess SCP from the OU

Before detaching the default FullAWSAccess SCP from my account, I run the aws workspaces describe-workspaces call from the Amazon WorkSpaces API. I am currently not running any WorkSpaces, so the output shows an empty list. However, I will test this again after I detach the FullAWSAccess SCP from my account and am left with only the HIPAA SCP attached to the account.

aws workspaces describe-workspaces

Output

{
    "Workspaces": []
}

In order to detach the FullAWSAccess SCP, I must run the aws organizations detach-policy command, providing it the policy-id and target-id of the OU.

aws organizations detach-policy --policy-id p-FullAWSAccess --target-id ou-rth4-ezo5wonz

No Output

 

If I run the list-policies-for-target command again, I see that only one SCP, the one allowing HIPAA Eligible Services and supporting services, is attached to the OU, as shown in the following output.

aws organizations list-policies-for-target --target-id ou-rth4-ezo5wonz --filter SERVICE_CONTROL_POLICY

Output

{
    "policies": [
        {
            "name": "hipaa-example-policy",
            "arn": "arn:aws:organizations::012345678900:policy/o-kzceys2q4j/SERVICE_CONTROL_POLICY/p-6ldl8bll",
            "description": "All HIPAA eligible services plus supporting AWS Services.",
            "awsManaged": false,
            "id": "p-6ldl8bll",
            "type": "SERVICE_CONTROL_POLICY"
        }
    ]
}

Now I can test and verify the enforcement of this SCP.

7.  Verify SCP enforcement

Previously, the administrator of the account had full access to all AWS services, including Amazon WorkSpaces, and his IAM policy allows all Amazon WorkSpaces actions. However, after I apply the HIPAA SCP to the account, the effect of that IAM policy changes to deny all Amazon WorkSpaces actions, because WorkSpaces is not an allowed service.

The following screenshot of the IAM policy simulator shows the permissions that are set for the administrator after I apply the HIPAA SCP. Note that the simulator reports that the denial comes from Organizations. Because the policy simulator is aware of the SCPs attached to an account, it is a good tool to use when troubleshooting or validating an SCP.

If I run the aws workspaces describe-workspaces call again as I did in Step 6, this time I receive an AccessDeniedException error, which validates that the HIPAA SCP is working because Amazon WorkSpaces is not an allowed service in the SCP.

aws workspaces describe-workspaces

Output

An error occurred (AccessDeniedException) when calling the DescribeWorkspaces operation: 
User: arn:aws:iam::098765432110:user/admin is not authorized to perform: workspaces:DescribeWorkspaces 
on resource: arn:aws:workspaces:us-east-1:098765432110:workspace/*

This completes the process of creating and applying an SCP to my account.

Summary

In this blog post, I have shown how to create an SCP and attach it to an OU to restrict an account to HIPAA Eligible Services and additional supporting services. I also showed how to create an OU, move an account into the OU, and then validate the SCP attached to the OU. For more information, see AWS Cloud Computing in Healthcare.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues with implementing this solution, please start a new thread on the IAM forum.

– Aaron

SAML for Your Serverless JavaScript Application: Part II

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/saml-for-your-serverless-javascript-application-part-ii/

Contributors: Richard Threlkeld, Gene Ting, Stefano Buliani

The full code for both scenarios—including SAM templates—can be found at the samljs-serverless-sample GitHub repository. We highly recommend that you use the SAM templates in the GitHub repository to create the resources; optionally, you can create them manually.


This is the second part of a two part series for using SAML providers in your application and receiving short-term credentials to access AWS Services. These credentials can be limited with IAM roles so the users of the applications can perform actions like fetching data from databases or uploading files based on their level of authorization. For example, you may want to build a JavaScript application that allows a user to authenticate against Active Directory Federation Services (ADFS). The user can be granted scoped AWS credentials to invoke an API to display information in the application or write to an Amazon DynamoDB table.

Part I of this series walked through a client-side flow of retrieving SAML claims and passing them to Amazon Cognito to retrieve credentials. This blog post will take you through a more advanced scenario where logic can be moved to the backend for a more comprehensive and flexible solution.

Prerequisites

As in Part I of this series, you need ADFS running in your environment. The following configurations are used for reference:

  1. ADFS federated with the AWS console. For a walkthrough with an AWS CloudFormation template, see Enabling Federation to AWS Using Windows Active Directory, ADFS, and SAML 2.0.
  2. Verify that you can authenticate with user example\bob for both the ADFS-Dev and ADFS-Production groups via the sign-in page.
  3. Create an Amazon Cognito identity pool.

Scenario Overview

The scenario in the last blog post may be sufficient for many organizations but, due to size restrictions, some browsers may drop part or all of a query string when sending a large number of claims in the SAMLResponse. Additionally, for auditing and logging reasons, you may wish to relay SAML assertions via POST only and perform parsing in the backend before sending credentials to the client. This scenario allows you to perform custom business logic and validation as well as putting tracking controls in place.

In this post, we want to show you how these requirements can be achieved in a Serverless application. We also show how different challenges (like XML parsing and JWT exchange) can be done in a Serverless application design. Feel free to mix and match, or swap pieces around to suit your needs.

This scenario uses the following services and features:

  • Cognito for unique ID generation and default role mapping
  • S3 for static website hosting
  • API Gateway for receiving the SAMLResponse POST from ADFS
  • Lambda for processing the SAML assertion using a native XML parser
  • DynamoDB conditional writes for session tracking exceptions
  • STS for credentials via Lambda
  • KMS for signing JWT tokens
  • API Gateway custom authorizers for controlling per-session access to credentials, using JWT tokens that were signed with KMS keys
  • JavaScript-generated SDK from API Gateway using a service proxy to DynamoDB
  • RelayState in the SAMLRequest to ADFS to transmit the CognitoID and a short code from the client to your AWS backend

At a high level, this solution is similar to that of Scenario 1; however, most of the work is done in the infrastructure rather than on the client.

  • ADFS still uses a POST binding to redirect the SAMLResponse to API Gateway; however, the Lambda function does not immediately redirect.
  • The Lambda function decodes and uses an XML parser to read the properties of the SAML assertion.
  • If the user’s assertion shows that they belong to a certain group matching a specified string (“Prod” in the sample), then you assign a role that they can assume (“ADFS-Production”).
  • Lambda then gets the credentials on behalf of the user and stores them in DynamoDB as well as logging an audit record in a separate table.
  • Lambda then returns a short-lived, signed JSON Web Token (JWT) to the JavaScript application.
  • The application uses the JWT to get their stored credentials from DynamoDB through an API Gateway custom authorizer.

The architecture you build in this tutorial is outlined in the following diagram.

lambdasamltwo_1.png

First, a user visits your static website hosted on S3. They generate an ephemeral random code that is transmitted during redirection to ADFS, where they are prompted for their Active Directory credentials.

Upon successful authentication, the ADFS server redirects the SAMLResponse assertion, along with the code (as the RelayState) via POST to API Gateway.

The Lambda function parses the SAMLResponse. If the user is part of the appropriate Active Directory group (AWS-Production in this tutorial), it retrieves credentials from STS on behalf of the user.

The credentials are stored in a DynamoDB table called SAMLSessions, along with the short code. The user login is stored in a tracking table called SAMLUsers.

The Lambda function generates a JWT token, with a 30-second expiration time signed with KMS, then redirects the client back to the static website along with this token.

The client then makes a call to an API Gateway resource acting as a DynamoDB service proxy that retrieves the credentials via a DeleteItem call. To make this call, the client passes the JWT in the authorization header.

A custom authorizer runs to validate the token using the KMS key again as well as the original random code.

Now that the client has credentials, it can use these to access AWS resources.

Tutorial: Backend processing and audit tracking

Before you walk through this tutorial, you will need the source code from the samljs-serverless-sample GitHub repository. You should use the SAM template provided to streamline the process, but we’ll also outline how you would create the resources manually. There is a readme in the repository with instructions for using the SAM template. Either way, you will still perform the manual steps of KMS key configuration, ADFS enablement of RelayState, and Amazon Cognito identity pool creation. The template automates the creation of the S3 website, Lambda functions, API Gateway resources, and DynamoDB tables.

For illustrative purposes, we walk through the details of all the steps and configuration below, calling out the sections that can be omitted if you used the SAM template.

KMS key configuration

To sign JWT tokens, you need a master key in KMS, which is used to generate and decrypt the data keys that do the signing. You will need to complete this step even if you use the SAM template.

  1. In the IAM console, choose Encryption Keys, Create Key.
  2. For Alias, type sessionMaster.
  3. For Advanced Options, choose KMS, Next Step.
  4. For Key Administrative Permissions, select your administrative role or user account.
  5. For Key Usage Permissions, you can leave this blank as the IAM Role (next section) will have individual key actions configured. This allows you to perform administrative actions on the set of keys while the Lambda functions have rights to just create data keys for encryption/decryption and use them to sign JWTs.
  6. Take note of the Key ID, which is needed for the Lambda functions.
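
If you prefer the AWS CLI to the console, the key creation steps above look roughly like the following sketch (key administrative and usage permissions still need to be configured in the key policy, as described above):

aws kms create-key --description "Master key for signing SAML demo JWTs"
aws kms create-alias --alias-name alias/sessionMaster --target-key-id <KeyId from the previous output>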

IAM role configuration

You will need an IAM role for executing your Lambda functions. If you are using the SAM template this can be skipped. The sample code in the GitHub repository under Scenario2 creates separate roles for each function, with limited permissions on individual resources when you use the SAM template. We recommend separate roles scoped to individual resources for production deployments. Your Lambda functions need the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1432927122000",
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:GetItem",
                "dynamodb:DeleteItem",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "kms:GenerateDataKey*",
                "kms:Decrypt"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

Lambda function configuration

If you are not using the SAM template, create the following three Lambda functions from the GitHub repository in /Scenario2/lambda using the following names and environment variables. The Lambda functions are written in Node.js.

  • GenerateKey_awslabs_samldemo
  • ProcessSAML_awslabs_samldemo
  • SAMLCustomAuth_awslabs_samldemo

The functions above are built, packaged, and uploaded to Lambda. For two of the functions, this can be done from your workstation (the sample commands for each function assume OSX or Linux). The third will need to be built on an AWS EC2 instance running the current Lambda AMI.

GenerateKey_awslabs_samldemo

This function is only used one time to create keys in KMS for signing JWT tokens. The function calls GenerateDataKey and stores the encrypted CipherText blob as Base64 in DynamoDB. This is used by the other two functions for getting the PlainTextKey for signing with a Decrypt operation.

This function only requires a single file. It has the following environment variables:

  • KMS_KEY_ID: Unique identifier from KMS for your sessionMaster Key
  • SESSION_DDB_TABLE: SAMLSessions
  • ENC_CONTEXT: ADFS (or something unique to your organization)
  • RAND_HASH: us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
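
Using those environment variables, a minimal sketch of the function's core logic follows (the real implementation is in the repository; the DynamoDB attribute name key is an assumption here):

'use strict';
const aws = require('aws-sdk');
const kms = new aws.KMS();
const dynamodb = new aws.DynamoDB();

exports.handler = (event, context) => {
    // Ask KMS for a new data key under the sessionMaster master key.
    kms.generateDataKey({
        KeyId: process.env.KMS_KEY_ID,
        KeySpec: 'AES_256',
        EncryptionContext: { hash: process.env.ENC_CONTEXT } // context name is illustrative
    }, (err, data) => {
        if (err) return context.fail(err);
        // Persist only the encrypted CiphertextBlob; the plaintext key is never stored.
        dynamodb.putItem({
            TableName: process.env.SESSION_DDB_TABLE,
            Item: {
                identityhash: { S: process.env.RAND_HASH },
                key: { S: data.CiphertextBlob.toString('base64') } // attribute name assumed
            }
        }, (err2) => err2 ? context.fail(err2) : context.succeed('CipherText key stored'));
    });
};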

Navigate into /Scenario2/lambda/GenerateKey and run the following commands:

zip -r generateKey.zip .

aws lambda create-function --function-name GenerateKey_awslabs_samldemo --runtime nodejs4.3 --role LAMBDA_ROLE_ARN --handler index.handler --timeout 10 --memory-size 512 --zip-file fileb://generateKey.zip --environment Variables={SESSION_DDB_TABLE=SAMLSessions,ENC_CONTEXT=ADFS,RAND_HASH=us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX,KMS_KEY_ID=<your KMS key ID>}

SAMLCustomAuth_awslabs_samldemo

This is an API Gateway custom authorizer that runs when the client, after being redirected back to the website as part of the login workflow, performs a GET against the DynamoDB service proxy to retrieve its credentials. The function validates the JWT (created in the ProcessSAML_awslabs_samldemo function) using the KMS signing key, and it also validates the random code that was generated at the beginning of the login workflow.

You must install the dependencies before zipping this function up. It has the following environment variables:

  • SESSION_DDB_TABLE: SAMLSessions
  • ENC_CONTEXT: ADFS (or whatever was used in GenerateKey_awslabs_samldemo)
  • ID_HASH: us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

Navigate into /Scenario2/lambda/CustomAuth and run:

npm install

zip -r custom_auth.zip .

aws lambda create-function --function-name SAMLCustomAuth_awslabs_samldemo --runtime nodejs4.3 --role LAMBDA_ROLE_ARN --handler CustomAuth.handler --timeout 10 --memory-size 512 --zip-file fileb://custom_auth.zip --environment Variables={SESSION_DDB_TABLE=SAMLSessions,ENC_CONTEXT=ADFS,ID_HASH=us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}

ProcessSAML_awslabs_samldemo

This function is called when ADFS sends the SAMLResponse to API Gateway. The function parses the SAML assertion to select a role (based on a simple string search) and extract user information. It then uses this data to get short-term credentials from STS via AssumeRoleWithSAML and stores this information in a SAMLSessions table and tracks the user login via a SAMLUsers table. Both of these are DynamoDB tables but you could also store the user information in another AWS database type, as this is for auditing purposes. Finally, this function creates a JWT (signed with the KMS key) which is only valid for 30 seconds and is returned to the client as part of a 302 redirect from API Gateway.
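
The heart of that flow is the STS call. A simplified sketch (variable names are illustrative, not the repository's) looks like this:

const aws = require('aws-sdk');
const sts = new aws.STS();

// samlResponse is the still base64-encoded SAMLResponse posted by ADFS;
// selectedRole was chosen by the string search over the assertion's role claims.
sts.assumeRoleWithSAML({
    PrincipalArn: process.env.PRINCIPAL_ARN, // the ADFS IdP ARN
    RoleArn: selectedRole,
    SAMLAssertion: samlResponse,
    DurationSeconds: 3600
}, (err, data) => {
    if (err) return console.log(err, err.stack);
    // data.Credentials holds AccessKeyId, SecretAccessKey, and SessionToken,
    // which are then written to the SAMLSessions table.
});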

This function needs to be built on an EC2 server running Amazon Linux. This function leverages two main external libraries:

  • nJwt: Used for secure JWT creation for individual client sessions to get access to their records
  • libxmljs: Used for XML XPath queries of the decoded SAMLResponse from AD FS

Libxmljs uses native build tools, so you should build it on an EC2 instance running the same AMI as Lambda, with Node.js v4.3.2; otherwise, you might see errors. For more information about current Lambda AMI information, see Lambda Execution Environment and Available Libraries.

After you have the correct AMI launched in EC2 and have SSH open to that host, install Node.js. Ensure that the Node.js version on EC2 is 4.3.2, to match Lambda. If your version is off, you can roll back with NVM.

After you have set up Node.js, run the following command:

yum install -y make gcc*

Now, create a /saml folder on your EC2 server and copy up ProcessSAML.js and package.json from /Scenario2/lambda/ProcessSAML to the EC2 server. Here is a sample SCP command:

cd ProcessSAML/

ls

package.json    ProcessSAML.js

scp -i ~/path/yourpemfile.pem ./* [email protected]:/home/ec2-user/saml/

Then you can SSH to your server, cd into the /saml directory, and run:

npm install

A successful build should look similar to the following:

lambdasamltwo_2.png

Finally, zip up the package and create the function using the following AWS CLI command and these environment variables. Configure the CLI with your credentials as needed.

  • SESSION_DDB_TABLE: SAMLSessions
  • ENC_CONTEXT: ADFS (or whatever was used in GenerateKey_awslabs_samldemo)
  • PRINCIPAL_ARN: Full ARN of the AD FS IdP created in the IAM console
  • USER_DDB_TABLE: SAMLUsers
  • REDIRECT_URL: Endpoint URL of your static S3 website (or CloudFront distribution domain name if you did that optional step)
  • ID_HASH: us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

zip -r saml.zip .

aws lambda create-function --function-name ProcessSAML_awslabs_samldemo --runtime nodejs4.3 --role LAMBDA_ROLE_ARN --handler ProcessSAML.handler --timeout 10 --memory-size 512 --zip-file fileb://saml.zip --environment Variables={USER_DDB_TABLE=SAMLUsers,SESSION_DDB_TABLE=SAMLSessions,REDIRECT_URL=<your S3 bucket and test page path>,ID_HASH=us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX,ENC_CONTEXT=ADFS,PRINCIPAL_ARN=<your ADFS IdP ARN>}

If you built the first two functions on your workstation and created the ProcessSAML_awslabs_samldemo function separately in the Lambda console before building on EC2, you can update the code after building on EC2 with the following command:

aws lambda update-function-code --function-name ProcessSAML_awslabs_samldemo --zip-file fileb://saml.zip

Role trust policy configuration

This scenario uses STS directly to assume a role. You will need to complete this step even if you use the SAM template. Modify the trust policy, as you did before when Amazon Cognito was assuming the role. In the GitHub repository sample code, ProcessSAML.js is preconfigured to filter and select a role with “Prod” in the name via the selectedRole variable.

This is an example of business logic you can alter in your organization later, such as a callout to an external mapping database for other rules matching. In this tutorial, it corresponds to the ADFS-Production role that was created.

  1. In the IAM console, choose Roles and open the ADFS-Production Role.
  2. Edit the Trust Permissions field and replace the content with the following:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": [
              "arn:aws:iam::ACCOUNTNUMBER:saml-provider/ADFS"
    ]
          },
          "Action": "sts:AssumeRoleWithSAML"
        }
      ]
    }

If you end up using another role (or add more complex filtering/selection logic), ensure that those roles have similar trust policy configurations. Also note that the sample policy above purposely uses an array for the federated provider matching the IdP ARN that you added. If your environment has multiple SAML providers, you could list them here and modify the code in ProcessSAML.js to process requests from different IdPs and grant or revoke credentials accordingly.
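
For example, a multi-provider trust policy might look like the following sketch (the Shibboleth provider name is purely illustrative):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": [
          "arn:aws:iam::ACCOUNTNUMBER:saml-provider/ADFS",
          "arn:aws:iam::ACCOUNTNUMBER:saml-provider/Shibboleth"
        ]
      },
      "Action": "sts:AssumeRoleWithSAML"
    }
  ]
}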

DynamoDB table creation

If you are not using the SAM template, create two DynamoDB tables:

  • SAMLSessions: Temporarily stores credentials from STS. Credentials are removed by an API Gateway Service Proxy to the DynamoDB DeleteItem call that simultaneously returns the credentials to the client.
  • SAMLUsers: This table is for tracking user information and the last time they authenticated in the system via ADFS.

The following AWS CLI commands create the tables (each indexed only with a primary hash key, called identityhash and CognitoID respectively):

aws dynamodb create-table \
    --table-name SAMLSessions \
    --attribute-definitions \
        AttributeName=identityhash,AttributeType=S \
    --key-schema AttributeName=identityhash,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
aws dynamodb create-table \
    --table-name SAMLUsers \
    --attribute-definitions \
        AttributeName=CognitoID,AttributeType=S \
    --key-schema AttributeName=CognitoID,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5

After the tables are created, you should be able to run the GenerateKey_awslabs_samldemo Lambda function and see a CipherText key stored in SAMLSessions. This is only for the convenience of this post; the point to take away is that you should persist CipherText keys in a data store and never persist plaintext keys that have been decrypted. You should also never log plaintext keys in your code.

API Gateway configuration

If you are not using the SAM template, you will need to create API Gateway resources. If you have created resources for Scenario 1 in Part I, then the naming of these resources may be similar. If that is the case, then simply create an API with a different name (SAMLAuth2 or similar) and follow these steps accordingly.

  1. In the API Gateway console for your API, choose Authorizers, Custom Authorizer.
  2. Select your region and enter SAMLCustomAuth_awslabs_samldemo for the Lambda function. Choose a friendly name like JWTParser and ensure that Identity token source is method.request.header.Authorization. This tells the custom authorizer to look for the JWT in the Authorization header of the HTTP request, which is specified in the JavaScript code on your S3 webpage. Save the changes.

    lambdasamltwo_3.png

Now it’s time to wire up the Lambda functions to API Gateway.

  1. In the API Gateway console, choose Resources, select your API, and then create a Child Resource called SAML. This includes a POST and a GET method. The POST method uses the ProcessSAML_awslabs_samldemo Lambda function and a 302 redirect, while the GET method uses the JWTParser custom authorizer with a service proxy to DynamoDB to retrieve credentials upon successful authorization.

    lambdasamltwo_4.png

  2. Create a POST method. For Integration Type, choose Lambda and add the ProcessSAML_awslabs_samldemo Lambda function. For Method Request, add headers called RelayState and SAMLResponse.

    lambdasamltwo_5.png

  3. Delete the Method Response code for 200 and add a 302. Create a response header called Location. In the Response Models section, for Content-Type, choose application/json and for Models, choose Empty.

    lambdasamltwo_6.png

  4. Delete the Integration Response section for 200 and add one for 302 that has a Method response status of 302. Edit the response header for Location to add a Mapping value of integration.response.body.location.

    lambdasamltwo_7.png

  5. Next, in order for Lambda to capture the SAMLResponse and RelayState values, choose Integration Request.

  6. In the Body Mapping Template section, for Content-Type, enter application/x-www-form-urlencoded and add the following template:

    {
    "SAMLResponse" :"$input.params('SAMLResponse')",
    "RelayState" :"$input.params('RelayState')",
    "formparams" : $input.json('$')
    }

  7. Create a GET method with an Integration Type of Service Proxy. Select the region and DynamoDB as the AWS Service. Use POST for the HTTP method and DeleteItem for the Action. This is important as you leverage a DynamoDB feature to return the current records when you perform deletion. This simultaneously allows credentials in this system to not be stored long term and also allows clients to retrieve them. For Execution role, use the Lambda role from earlier or a new role that only has IAM scoped permissions for DeleteItem on the SAMLSessions table.

    lambdasamltwo_8.png

  8. Save this and open Method Request.

  9. For Authorization, select your custom authorizer JWTParser. Add in a header called COGNITO_ID and save the changes.

    lambdasamltwo_9.png

  10. In the Integration Request, add in a header name of Content-Type and a value for Mapped of 'application/x-amzn-json-1.0' (you need the single quotes surrounding the entry).

  11. Next, in the Body Mapping Template section, for Content-Type, enter application/json and add the following template:

    {
        "TableName": "SAMLSessions",
        "Key": {
            "identityhash": {
                "S": "$input.params('COGNITO_ID')"
            }
        },
        "ReturnValues": "ALL_OLD"
    }

Inspect this closely for a moment. When your client passes the JWT in an Authorization Header to this GET method, the JWTParser Custom Authorizer grants/denies executing a DeleteItem call on the SAMLSessions table.


If access is granted, there must be an item in the table to delete, referenced by its primary key. The client JavaScript (seen in a moment) passes its CognitoID through as a header called COGNITO_ID that is mapped above. DeleteItem executes and removes the credentials that were placed there via a call to STS by the ProcessSAML_awslabs_samldemo Lambda function. Because the action above specifies ALL_OLD under the ReturnValues mapping, DynamoDB returns these credentials at the same time.
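
You can observe the same read-and-delete behavior from the CLI; this sketch uses a placeholder key value:

aws dynamodb delete-item \
    --table-name SAMLSessions \
    --key '{"identityhash": {"S": "<a stored identity hash>"}}' \
    --return-values ALL_OLD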

lambdasamltwo_10.png

  1. Save the changes and open your /saml resource root.
  2. Choose Actions, Enable CORS.
  3. In the Access-Control-Allow-Headers section, add COGNITO_ID into the end (inside the quotes and separated from other headers by a comma), then choose Enable CORS and replace existing CORS headers.
  4. When completed, choose Actions, Deploy API. Use the Prod stage or another stage.
  5. In the Stage Editor, choose SDK Generation. For Platform, choose JavaScript and then choose Generate SDK. Save the folder someplace close. Take note of the Invoke URL value at the top, as you need this for ADFS configuration later.

Website configuration

If you are not using the SAM template, create an S3 bucket and configure it as a static website in the same way that you did for Part I.

If you are using the SAM template, this bucket is created for you automatically; however, the steps below still need to be completed:

In the source code repository, edit /Scenario2/website/configs.js.

  1. Ensure that the identityPool value matches your Amazon Cognito Pool ID and the region is correct.
  2. Leave adfsUrl the same if you’re testing on your lab server; otherwise, update with the AD FS DNS entries as appropriate.
  3. Update the relayingPartyId value as well if you used something different from the prerequisite blog post.

Next, download the minified version of the AWS SDK for JavaScript in the Browser (aws-sdk.min.js) and place it along with the other files in /Scenario2/website into the S3 bucket.

Copy the files from the API Gateway generated SDK in the last section to this bucket so that apigClient.js is in the root directory along with the lib folder. The imports for these scripts (which do things like sign API requests and configure headers for the JWT in the Authorization header) are already included in the index.html file. Consult the latest API Gateway documentation if the SDK generation process changes in the future.
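
For reference, a call to the protected GET method with the generated SDK looks roughly like the following sketch; the method name is generated from the resource path, and the variable names are illustrative:

// jwt is the token returned from the login redirect; cognitoId is the client's Cognito identity.
var apigClient = apigClientFactory.newClient();
apigClient.samlGet(
    { 'COGNITO_ID': cognitoId },           // header defined in the Method Request
    {},                                     // no body for a GET
    { headers: { 'Authorization': jwt } }   // JWT for the custom authorizer
).then(function(result) {
    console.log('Credentials returned from DynamoDB:', result.data);
});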

ADFS configuration

Now that the AWS setup is complete, modify your ADFS setup to capture RelayState information about the client and to send the POST response to API Gateway for processing. You will need to complete this step even if you use the SAM template.

If you’re using Windows Server 2008 with ADFS 2.0, ensure that Update Rollup 2 is installed before enabling RelayState. Please see official Microsoft documentation for specific download information.

  1. After Update Rollup 2 is installed, modify %systemroot%\inetpub\adfs\ls\web.config. If you’re on a newer version of Windows Server running AD FS 3.0, modify %systemroot%\ADFS\Microsoft.IdentityServer.Servicehost.exe.config.
  2. Find the section in the XML marked <Microsoft.identityServer.web> and add an entry for <useRelayStateForIdpInitiatedSignOn enabled="true">. If you have the proper ADFS rollup or version installed, this should allow the RelayState parameter to be accepted by the service provider.
  3. In the ADFS console, open Relaying Party Trusts for Amazon Web Services and choose Endpoints.
  4. For Binding, choose POST, and for Invoke URL, enter the URL to your API Gateway from the stage that you noted earlier.

At this point, you are ready to test out your webpage. Navigate to the S3 static website Endpoint URL and it should redirect you to the ADFS login screen. If the user login has been recent enough to have a valid SAML cookie, then you should see the login pass-through; otherwise, a login prompt appears. After the authentication has taken place, you should quickly end up back at your original webpage. Using the browser debugging tools, you see “Successful DDB call” followed by the results of a call to STS that were stored in DynamoDB.

lambdasamltwo_11.png

As in Scenario 1, the sample code under /scenario2/website/index.html has a button that allows you to “ping” an endpoint to test whether the federated credentials are working. If you used the SAM template, this should already be wired up and you can test it out (it will fail at first; keep reading to find out how to set the IAM permissions). If not, go to API Gateway and create a new Resource called /users at the same level as /saml in your API, with a GET method.

lambdasamltwo_12.png

For Integration type, choose Mock.

lambdasamltwo_13.png

In the Method Request, for Authorization, choose AWS_IAM. In the Integration Response, in the Body Mapping Template section, for Content-Type, choose application/json and add the following JSON:

{
    "status": "Success",
    "agent": "${context.identity.userAgent}"
}

lambdasamltwo_14.png

Before using this new Mock API as a test, configure CORS and re-generate the JavaScript SDK so that the browser knows about the new methods.

  1. On the /saml resource root, choose Actions, Enable CORS.
  2. In the Access-Control-Allow-Headers section, add COGNITO_ID to the end (inside the quotes and separated from other headers by a comma), then choose Enable CORS and replace existing CORS headers.
  3. Choose Actions, Deploy API. Use the stage that you configured earlier.
  4. In the Stage Editor, choose SDK Generation and select JavaScript as your platform. Choose Generate SDK.
  5. Upload the new apigClient.js and lib directory to the S3 bucket of your static website.

One last thing must be completed before testing so that the federated credentials can invoke this mock endpoint with AWS_IAM authorization (you will need to complete this step even if you use the SAM template): the ADFS-Production role needs execute-api:Invoke permissions for this API Gateway resource.

  1. In the IAM console, choose Roles, and open the ADFS-Production Role.

  2. For testing, you can attach the AmazonAPIGatewayInvokeFullAccess policy; however, for production, you should scope this down to the resource as documented in Control Access to API Gateway with IAM Permissions.

  3. After you have attached a policy with invocation rights and authenticated with AD FS to finish the redirect process, choose PING.

If everything has been set up successfully you should see an alert with information about the user agent.

Final Thoughts

We hope these scenarios and sample code help you to not only begin to build comprehensive enterprise applications on AWS but also to enhance your understanding of different AuthN and AuthZ mechanisms. Consider some ways that you might be able to evolve this solution to meet the needs of your own customers and innovate in this space. For example:

  • Completing the CloudFront configuration and leveraging SSL termination for site identification. See if this can be incorporated into the Lambda processing pipeline.
  • Attaching a scope-down IAM policy if the business rules are matched. For example, the default role could be more permissive for a group, but if the user is a contractor (username with -C appended), they get extra restrictions applied when AssumeRoleWithSAML is called in the ProcessSAML_awslabs_samldemo Lambda function.
  • Changing the time duration before credentials expire on a per-role basis. Perhaps if the SAMLResponse parsing determines the user is an Administrator, they get a longer duration.
  • Passing through additional user claims in SAMLResponse for further logical decisions or auditing by adding more claim rules in the ADFS console. This could also be a mechanism to synchronize some Active Directory schema attributes with AWS services.
  • Granting different sets of credentials if a user has accounts with multiple SAML providers. While this tutorial was made with ADFS, you could also leverage it with other solutions such as Shibboleth and modify the ProcessSAML_awslabs_samldemo Lambda function to be aware of the different IdP ARN values. Perhaps your solution grants different IAM roles for the same user depending on if they initiated a login from Shibboleth rather than ADFS?

The Lambda functions can be altered to take advantage of these options. For more information about ADFS claim rule language manipulation, see The Role of the Claim Rule Language on Microsoft TechNet.

We would love to hear feedback from our customers on these designs and see different secure application designs that you’re implementing on the AWS platform.

Implementing Serverless Manual Approval Steps in AWS Step Functions and Amazon API Gateway

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/implementing-serverless-manual-approval-steps-in-aws-step-functions-and-amazon-api-gateway/


Ali Baghani, Software Development Engineer

A common use case for AWS Step Functions is a task that requires human intervention (for example, an approval process). Step Functions makes it easy to coordinate the components of distributed applications as a series of steps in a visual workflow called a state machine. You can quickly build and run state machines to execute the steps of your application in a reliable and scalable fashion.

In this post, I describe a serverless design pattern for implementing manual approval steps. You can use a Step Functions activity task to generate a unique token that can be returned later indicating either approval or rejection by the person making the decision.

Key steps to implementation

When the execution of a Step Functions state machine reaches an activity task state, Step Functions schedules the activity and waits for an activity worker. An activity worker is an application that polls for activity tasks by calling GetActivityTask. When the worker successfully calls the API action, the activity is vended to that worker as a JSON blob that includes a token for callback.

At this point, the activity task state and the branch of the execution that contains the state is paused. Unless a timeout is specified in the state machine definition, which can be up to one year, the activity task state waits until the activity worker calls either SendTaskSuccess or SendTaskFailure using the vended token. This pause is the first key to implementing a manual approval step.

The second key is the ability, in a serverless environment, to separate the code that fetches the work and acquires the token from the code that responds with the completion status and sends the token back, as long as the token can be shared. In this example, the activity worker is a serverless application supervised by a single activity task state.

In this walkthrough, you use a short-lived AWS Lambda function invoked on a schedule to implement the activity worker, which acquires the token associated with the approval step, and prepares and sends an email to the approver using Amazon SES.

It is very convenient if the application that returns the token can directly call the SendTaskSuccess and SendTaskFailure API actions on Step Functions. This can be achieved more easily by exposing these two actions through Amazon API Gateway so that an email client or web browser can return the token to Step Functions. By combining a Lambda function that acquires the token with the application that returns the token through API Gateway, you can implement a serverless manual approval step, as shown below.

In this pattern, when the execution reaches a state that requires manual approval, the Lambda function prepares and sends an email to the user with two embedded hyperlinks for approval and rejection.

If the authorized user clicks on the approval hyperlink, the state succeeds. If the authorized user clicks on the rejection link, the state fails. You can also choose to set a timeout for approval and, upon timeout, take action, such as resending the email request using retry/catch conditions in the activity task state.
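
For example, a sketch of such an activity task state with a retry on timeout might look like the following (the state and activity names match the example built later in this post; the retry count is illustrative):

"ManualApproval": {
  "Type": "Task",
  "Resource": "arn:aws:states:us-east-1:ACCOUNT_ID:activity:ManualStep",
  "TimeoutSeconds": 3600,
  "Retry": [
    {
      "ErrorEquals": ["States.Timeout"],
      "MaxAttempts": 2
    }
  ],
  "End": true
}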

Employee promotion process

As an example pattern use case, you can design a simple employee promotion process which involves a single task: getting a manager’s approval through email. When an employee is nominated for promotion, a new execution starts. The name of the employee and the email address of the employee’s manager are provided to the execution.

You’ll use the design pattern to implement the manual approval step, and SES to send the email to the manager. After acquiring the task token, the Lambda function generates and sends an email to the manager with embedded hyperlinks to URIs hosted by API Gateway.

In this example, I have administrative access to my account, so that I can create IAM roles. Moreover, I have already registered my email address with SES, so that I can send emails with the address as the sender/recipient. For detailed instructions, see Send an Email with Amazon SES.

Here is a list of what you do:

  1. Create an activity
  2. Create a state machine
  3. Create and deploy an API
  4. Create an activity worker Lambda function
  5. Test that the process works

Create an activity

In the Step Functions console, choose Tasks and create an activity called ManualStep.

stepfunctionsfirst_1.png

Remember to keep the ARN of this activity at hand.

stepfunctionsfirst_2.png

Create a state machine

Next, create the state machine that models the promotion process on the Step Functions console. Use StatesExecutionRole-us-east-1, the default role created by the console. Name the state machine PromotionApproval, and use the following code. Remember to replace the value for Resource with your activity ARN.

{
  "Comment": "Employee promotion process!",
  "StartAt": "ManualApproval",
  "States": {
    "ManualApproval": {
      "Type": "Task",
      "Resource": "arn:aws:states:us-east-1:ACCOUNT_ID:activity:ManualStep",
      "TimeoutSeconds": 3600,
      "End": true
    }
  }
}

Create and deploy an API

Next, create and deploy public URIs for calling the SendTaskSuccess or SendTaskFailure API action using API Gateway.

First, navigate to the IAM console and create the role that API Gateway can use to call Step Functions. Name the role APIGatewayToStepFunctions, choose Amazon API Gateway as the role type, and create the role.

After the role has been created, attach the managed policy AWSStepFunctionsFullAccess to it.

stepfunctionsfirst_3.png

In the API Gateway console, create a new API called StepFunctionsAPI. Create two new resources under the root (/) called succeed and fail, and for each resource, create a GET method.

stepfunctionsfirst_4.png

You now need to configure each method. Start with the /fail GET method and configure it with the following values:

  • For Integration type, choose AWS Service.
  • For AWS Service, choose Step Functions.
  • For HTTP method, choose POST.
  • For Region, choose your region of interest instead of us-east-1. (For a list of regions where Step Functions is available, see AWS Region Table.)
  • For Action Type, enter SendTaskFailure.
  • For Execution, enter the APIGatewayToStepFunctions role ARN.

stepfunctionsfirst_5.png

To be able to pass the taskToken through the URI, navigate to the Method Request section, and add a URL Query String parameter called taskToken.

stepfunctionsfirst_6.png

Then, navigate to the Integration Request section and add a Body Mapping Template of type application/json to inject the query string parameter into the body of the request. Accept the change suggested by the security warning. This sets the body pass-through behavior to When there are no templates defined (Recommended). The following code does the mapping:

{
   "cause": "Reject link was clicked.",
   "error": "Rejected",
   "taskToken": "$input.params('taskToken')"
}

When you are finished, choose Save.

Next, configure the /succeed GET method. The configuration is very similar to the /fail GET method. The only difference is for Action: choose SendTaskSuccess, and set the mapping as follows:

{
   "output": "\"Approve link was clicked.\"",
   "taskToken": "$input.params('taskToken')"
}

The last step on the API Gateway console after configuring your API actions is to deploy them to a new stage called respond. You can test your API by choosing the Invoke URL links under either of the GET methods. Because no token is provided in the URI, a ValidationException message should be displayed.
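You can also test this from the command line; because no taskToken is supplied, Step Functions rejects the call (replace API_DEPLOYMENT_ID with your own deployment ID):

curl "https://API_DEPLOYMENT_ID.execute-api.us-east-1.amazonaws.com/respond/succeed"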

stepfunctionsfirst_7.png

Create an activity worker Lambda function

In the Lambda console, create a Lambda function with a CloudWatch Events Schedule trigger using a blank function blueprint for the Node.js 4.3 runtime. The rate entered for Schedule expression is the poll rate for the activity. This should be above the rate at which the activities are scheduled by a safety margin.

The safety margin accounts for the possibility of lost tokens, retried activities, and polls that happen while no activities are scheduled. For example, if you expect 3 promotions to happen in a certain week, you can schedule the Lambda function to run 4 times a day during that week. Alternatively, a single Lambda function can poll for multiple activities, either in parallel or in series. For this example, use a rate of once per minute, but do not enable the trigger yet.

stepfunctionsfirst_8.png

Next, create the Lambda function ManualStepActivityWorker using the following Node.js 4.3 code. The function receives the taskToken, employee name, and manager’s email from Step Functions. It embeds the information into an email, and sends the email to the manager.


'use strict';
console.log('Loading function');
const aws = require('aws-sdk');
const stepfunctions = new aws.StepFunctions();
const ses = new aws.SES();
exports.handler = (event, context, callback) => {
    
    var taskParams = {
        activityArn: 'arn:aws:states:us-east-1:ACCOUNT_ID:activity:ManualStep'
    };
    
    stepfunctions.getActivityTask(taskParams, function(err, data) {
        if (err) {
            console.log(err, err.stack);
            context.fail('An error occurred while calling getActivityTask.');
        } else {
            if (data === null) {
                // No activities scheduled
                context.succeed('No activities received after 60 seconds.');
            } else {
                var input = JSON.parse(data.input);
                var emailParams = {
                    Destination: {
                        ToAddresses: [
                            input.managerEmailAddress
                            ]
                    },
                    Message: {
                        Subject: {
                            Data: 'Your Approval Needed for Promotion!',
                            Charset: 'UTF-8'
                        },
                        Body: {
                            Html: {
                                Data: 'Hi!<br />' +
                                    input.employeeName + ' has been nominated for promotion!<br />' +
                                    'Can you please approve:<br />' +
                                    'https://API_DEPLOYMENT_ID.execute-api.us-east-1.amazonaws.com/respond/succeed?taskToken=' + encodeURIComponent(data.taskToken) + '<br />' +
                                    'Or reject:<br />' +
                                    'https://API_DEPLOYMENT_ID.execute-api.us-east-1.amazonaws.com/respond/fail?taskToken=' + encodeURIComponent(data.taskToken),
                                Charset: 'UTF-8'
                            }
                        }
                    },
                    Source: input.managerEmailAddress,
                    ReplyToAddresses: [
                            input.managerEmailAddress
                        ]
                };
                    
                ses.sendEmail(emailParams, function (err, data) {
                    if (err) {
                        console.log(err, err.stack);
                        context.fail('Internal Error: The email could not be sent.');
                    } else {
                        console.log(data);
                        context.succeed('The email was successfully sent.');
                    }
                });
            }
        }
    });
};

In the Lambda function handler and role section, for Role, choose Create a new role and name it LambdaManualStepActivityWorkerRole.

stepfunctionsfirst_9.png

Add two policies to the role: one to allow the Lambda function to call the GetActivityTask API action on Step Functions, and one to allow it to send email through SES. The result should look as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": "states:GetActivityTask",
      "Resource": "arn:aws:states:*:*:activity:ManualStep"
    },
    {
      "Effect": "Allow",
      "Action": "ses:SendEmail",
      "Resource": "*"
    }
  ]
}

In addition, as the GetActivityTask API action performs long-polling with a timeout of 60 seconds, increase the timeout of the Lambda function to 1 minute 15 seconds. This allows the function to wait for an activity to become available, and gives it extra time to call SES to send the email. For all other settings, use the Lambda console defaults.

stepfunctionsfirst_10.png

After this, you can create your activity worker Lambda function.

Test the process

You are now ready to test the employee promotion process.

In the Lambda console, enable the ManualStepPollSchedule trigger on the ManualStepActivityWorker Lambda function.

In the Step Functions console, start a new execution of the state machine with the following input:

{ "managerEmailAddress": "[email protected]", "employeeName" : "Jim" } 

Within a minute, you should receive an email with links to approve or reject Jim’s promotion. Choosing one of those links should succeed or fail the execution.

stepfunctionsfirst_11.png

Summary

In this post, you created a state machine containing an activity task with Step Functions, an API with API Gateway, and a Lambda function to dispatch the approval/failure process. Your Step Functions activity task generated a unique token that was returned later indicating either approval or rejection by the person making the decision. Your Lambda function acquired the task token by polling the activity task, and then generated and sent an email to the manager for approval or rejection with embedded hyperlinks to URIs hosted by API Gateway.

If you have questions or suggestions, please comment below.

AWS Lambda – A Look Back at 2016

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/aws-lambda-a-look-back-at-2016/

2016 was an exciting year for AWS Lambda, Amazon API Gateway, and serverless compute technology, to say the least. But just in case you have been hiding away and haven’t heard of serverless computing with AWS Lambda and Amazon API Gateway, let me introduce these great services to you. AWS Lambda lets you run code without provisioning or managing servers: it is an event-driven, serverless compute service that allows developers to bring their functions to the cloud easily for virtually any type of application or backend. Amazon API Gateway helps you quickly build highly scalable, secure, and robust APIs and provides the ability to maintain and monitor them.

With the momentum of serverless in 2016, of course, the year had to end with a bang as the AWS team launched some powerful service features at re:Invent to make it even easier to build serverless solutions. These features include:

  • AWS Step Functions
  • Dead Letter Queues (DLQ) for AWS Lambda
  • C# language support for AWS Lambda
  • Monetization of APIs through the API Gateway integration with the AWS Marketplace
  • An open source template for an API Gateway developer portal

Since Jeff has already introduced most of the aforementioned new service features for building distributed applications and microservices, such as Step Functions, let’s walk through the last four new features not yet discussed, using a common serverless use case example: real-time stream processing. In our walk-through of the stream processing use case, we will implement a Dead Letter Queue for notifications of errors that may come from the Lambda function processing a stream of data, and we will take an existing Lambda function written in Node.js that processes the stream and rewrite it in C#. We will then build an example of monetizing a Lambda-backed API using API Gateway’s integration with the AWS Marketplace. This will be exciting, so let’s get started.

During the AWS Developer Days in San Francisco and Austin, I presented an example of leveraging AWS Lambda for real-time stream processing by building a demo showcasing a streaming solution with the Twitter Streaming API. I will build upon this example to demonstrate the power of Dead Letter Queues (DLQ), C# support, the API Gateway monetization features, and the open source template for the API Gateway developer portal. In the demo, a console or web application streams tweets gathered from the Twitter Streaming API that contain the keywords ‘awscloud’ and/or ‘serverless’. Those tweets are sent in real time to Amazon Kinesis Streams, where Lambda detects the new records and processes the stream batch by writing the tweets to the NoSQL database, Amazon DynamoDB.

Now that we understand the real-time streaming process demo’s workflow, let’s take a deeper look at the Lambda function that processes the batch records from Kinesis.  First, you will notice below that the Lambda function, DevDayStreamProcessor, has an event source or trigger that is a Kinesis stream named DevDay2016Stream with a Batch size of 100.  Our Lambda function will poll the stream periodically for new records and automatically read and process batches of records, in this case, the tweets detected on the stream.

Now we will examine our Lambda function code, which is written in Node.js 4.3. The section of the Lambda function shown below loops through the batch of tweet records from our Kinesis stream, parses each record, and writes the desired tweet information into an array of JSON data. The array of JSON tweet items is passed to the function ddbItemsWrite, which is defined outside of our Lambda handler.

'use strict';
console.log('Loading function');

var timestamp;
var twitterID;
var tweetData;
var ddbParams;
var itemNum = 0;
var dataItemsBatch = [];
var dbBatch = [];
var AWS = require('aws-sdk');
var ddbTable = 'TwitterStream';
var dynamoDBClient = new AWS.DynamoDB.DocumentClient();

exports.handler = (event, context, callback) => {
    var counter = 0;

    event.Records.forEach((record) => {
        // Kinesis data is base64 encoded so decode here
        console.log("Base 64 record: " + JSON.stringify(record, null, 2));
        const payload = new Buffer(record.kinesis.data, 'base64').toString('ascii');
        console.log('Decoded payload:', payload);

        var data = payload.replace(/[\u0000-\u0019]+/g, " ");
        try {
            tweetData = JSON.parse(data);
        } catch (err) {
            callback(err, err.stack);
        }

        timestamp = "" + new Date().getTime();
        twitterID = tweetData.id.toString();
        itemNum = itemNum + 1;

        var ddbItem = {
            PutRequest: {
                Item: {
                    TwitterID: twitterID,
                    TwitterUser: tweetData.username.toString(),
                    TwitterUserPic: tweetData.pic,
                    TwitterTime: new Date(tweetData.time.replace(/( \+)/, ' UTC$1')).toLocaleString(),
                    Tweet: tweetData.text,
                    TweetTopic: tweetData.topic,
                    Tags: (tweetData.hashtags) ? tweetData.hashtags : " ",
                    Location: (tweetData.loc) ? tweetData.loc : " ",
                    Country: (tweetData.country) ? tweetData.country : " ",
                    TimeStamp: timestamp,
                    RecordNum: itemNum
                }
            }
        };

        dataItemsBatch.push(ddbItem);
        counter++;
    });

    var twitterItems = {};
    twitterItems[ddbTable] = dataItemsBatch;
    ddbItemsWrite(twitterItems, 0, context, callback);
};

The ddbItemsWrite function shown below takes the array of JSON tweet records processed from the Kinesis stream and writes the records multiple items at a time to our DynamoDB table using batch operations. This function leverages the DynamoDB best practice of retrying unprocessed items by implementing an exponential backoff algorithm to prevent write request failures due to throttling on the individual tables: the first retry waits 100 ms, the second 200 ms, the third 400 ms, and so on, capped so that the function does not run out of execution time.

function ddbItemsWrite(items, retries, ddbContext, ddbCallback) {
    dynamoDBClient.batchWrite({ RequestItems: items }, function(err, data) {
        if (err) {
            console.log('DDB call failed: ' + err, err.stack);
            ddbCallback(err, err.stack);
        } else {
            if (Object.keys(data.UnprocessedItems).length) {
                console.log('Unprocessed items remain, retrying.');
                var delay = Math.min(Math.pow(2, retries) * 100, ddbContext.getRemainingTimeInMillis() - 200);
                setTimeout(function() { ddbItemsWrite(data.UnprocessedItems, retries + 1, ddbContext, ddbCallback); }, delay);
            } else {
                ddbCallback(null, "Success");
                console.log("Completed Successfully");
            }
        }
    });
}

Currently, this Lambda function works as expected and successfully processes tweets captured in Kinesis from the Twitter Streaming API. However, this function has a flaw that causes an error when processing batch write requests to our DynamoDB table. The current code does not take into account that the DynamoDB batchWrite function accepts no more than 25 write (put) requests, up to 16 MB of data, per call. Therefore, unless the code is changed so that ddbItemsWrite handles batches of 25, or the handler groups items into batches of 25 requests before calling ddbItemsWrite, a validation exception is thrown whenever more than 25 tweet items are sent in a batch. This is a great example of a bug that is not easily detected in small-scale testing scenarios yet will cause failures under production load.
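One possible fix is to group the accumulated items into chunks of at most 25 put requests before handing them to ddbItemsWrite. Here is a minimal sketch that would replace the last three lines of the handler above; BATCH_LIMIT is the only new name introduced:

var BATCH_LIMIT = 25; // DynamoDB batchWrite accepts at most 25 write requests per call

for (var i = 0; i < dataItemsBatch.length; i += BATCH_LIMIT) {
    var chunk = {};
    chunk[ddbTable] = dataItemsBatch.slice(i, i + BATCH_LIMIT);
    // Each chunk is written (and retried) independently
    ddbItemsWrite(chunk, 0, context, callback);
}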

 

Dead Letter Queues

Now that we are aware of an event that will cause the ddbItemsWrite Lambda function to throw an exception and/or an event that will fail while processing records, we have a first-rate scenario for leveraging Dead Letter Queues (DLQ).

Since AWS Lambda DLQ functionality is only available for asynchronous event sources like Amazon S3, Amazon SNS, AWS IoT, or direct asynchronous invocations, and not for streaming event sources such as Amazon Kinesis or Amazon DynamoDB streams, our first step is to break this Lambda function into two functions. The first Lambda function will handle the processing of the Kinesis stream, and the second Lambda function will take the data processed by the first function and write the tweet information to DynamoDB. We will then set up our DLQ on the second Lambda function, for the error that occurs when writing the batch of tweets to DynamoDB, as noted above.

We have two options when setting up a target for our DLQ: an Amazon SNS topic or an Amazon SQS queue. In this walk-through, we will opt for an Amazon SQS queue. Therefore, my first step in using DLQ is to create an SQS Standard queue. A Standard queue offers high throughput; a message is delivered at least once, but another copy of the message may also be delivered, and it is possible that messages arrive in a different order from which they were sent. You can learn more about creating SQS queues and queue types in the Amazon SQS documentation.

Once my queue, StreamDemoDLQ, is created, I will grab the ARN from the Details tab of this selected queue. If I am not using the console to designate the DLQ resource for this function, I will need the ARN for the queue for my Lambda function to identify this SQS queue as the DLQ target for error and event failure notifications. Additionally, I will use the ARN to add permissions to my Lambda execution role policy in order to access this SQS queue.

I will now return to my Lambda function and select the Configuration tab and expand the Advanced settings section. I will select SQS in the DLQ Resource field and select my StreamDemoDLQ queue in the SQS Queue field dropdown.
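If you prefer to script this rather than use the console, the same setup can be done with the AWS CLI. This is a sketch: the function name DDBTweetWriter, the account ID, and the Region are illustrative placeholders, not names from this walk-through:

# Create the queue and look up its ARN
aws sqs create-queue --queue-name StreamDemoDLQ
aws sqs get-queue-attributes \
    --queue-url https://sqs.us-east-1.amazonaws.com/ACCOUNT_ID/StreamDemoDLQ \
    --attribute-names QueueArn

# Point the Lambda function's DLQ at the queue
aws lambda update-function-configuration \
    --function-name DDBTweetWriter \
    --dead-letter-config TargetArn=arn:aws:sqs:us-east-1:ACCOUNT_ID:StreamDemoDLQ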

Remember, the execution role for the Lambda function must explicitly grant sqs:SendMessage permission in order to successfully send messages to your SQS DLQ. Therefore, I ensured that my Lambda role, lambda_kinesis_role, has the following IAM policy for SQS permissions.
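A minimal policy statement along these lines grants that permission (the Region and account ID are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:ACCOUNT_ID:StreamDemoDLQ"
    }
  ]
}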


We have now successfully configured a Dead Letter Queue for our Lambda function using Amazon SQS. To learn more about Dead Letter Queues in Lambda, read the Troubleshooting and Monitoring section of the AWS Lambda Developer Guide and check out the AWS Compute Blog post on Dead Letter Queues.

C# Support

As I mentioned earlier, another very exciting feature added to Lambda during AWS re:Invent was support for the C# language via the open source .NET Core 1.0 platform. Since the Lambda console does not yet offer editing for compiled languages, in order to author a C# Lambda function you can use tooling in Visual Studio with the AWS Toolkit, Yeoman, and/or the .NET CLI. To deploy Lambda functions written in C#, you can use the Lambda plugin in the AWS Toolkit for Visual Studio or create a deployment package with the .NET Core command line.

A C# Lambda function handler should be defined as an instance or static method in a class. There are two handler function parameters: the first is the input object, which is the event data, and the second is the Lambda context object of type ILambdaContext. The event data input object types for AWS services include the following:

  • Amazon.Lambda.APIGatewayEvents
  • Amazon.Lambda.CognitoEvents
  • Amazon.Lambda.ConfigEvents
  • Amazon.Lambda.DynamoDBEvents
  • Amazon.Lambda.KinesisEvents
  • Amazon.Lambda.S3Events
  • Amazon.Lambda.SNSEvents

Now that we have discussed C# support in Lambda in more detail, let’s rewrite our DevDayStreamProcessor Lambda function in C#. For this example, I use the Visual Studio IDE to write the Lambda function, and additionally take advantage of the AWS Lambda Visual Studio plugin to deploy the function. Remember that in order to use the AWS Toolkit for Visual Studio with Lambda, you need Visual Studio 2015 Update 3 and the .NET Core tools. You can read more about installing Visual Studio 2015 Update 3 and .NET Core here.

To create the C# function using Visual Studio, I start a New Project, select AWS Lambda Project (.NET Core) and name it ServerlessStreamProcessor.

What’s really cool about using the AWS Toolkit for Visual Studio to author this function is that inside Visual Studio I can use Lambda blueprints to get started, in a similar way to the Lambda console. Therefore, in order to replicate the DevDayStreamProcessor in C#, I select the Simple Kinesis Function blueprint.

It should be noted that when writing Lambda functions in C#, there is no need to mark the class declaration or the target handler function as a Lambda function. Additionally, when writing CloudWatch logs, you can use the standard C# Console class WriteLine function or the LogLine function that is part of the ILambdaContext interface. With the template for accessing the Kinesis stream in place, I finish writing the C# Lambda function, ServerlessStreamProcessor, using the same variable names as in the Node.js code of DevDayStreamProcessor. Note the C# Lambda handler function below.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using Amazon.Lambda.Core;
using Amazon.Lambda.KinesisEvents;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using Newtonsoft.Json.Linq;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializerAttribute(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace ServerlessStreamProcessor
{
    public class LambdaTwitterStream
    {
        string twitterID, timeStamp;
        int itemNum = 0;
        
        private static AmazonDynamoDBClient dynamoDBClient = new AmazonDynamoDBClient();
        List<TwitterItem> dataItemsBatch = new List<TwitterItem>();
        
        public void FunctionHandler(KinesisEvent kinesisEvent, ILambdaContext context)
        {
            DynamoDBContext dbContext = new DynamoDBContext(dynamoDBClient);
            context.Logger.LogLine($"Beginning to process {kinesisEvent.Records.Count} records...");
            
            foreach (var record in kinesisEvent.Records)
            {
                context.Logger.LogLine($"Event ID: {record.EventId}");
                context.Logger.LogLine($"Event Name: {record.EventName}");

                // Kinesis data is base64 encoded so decode here
                string tweetData = GetRecordContents(record.Kinesis);
                context.Logger.LogLine($"Decoded Payload: {tweetData}");
                tweetData = @"" + tweetData;
                JObject twitterObj = JObject.Parse(tweetData);
                
                twitterID = twitterObj["id"].ToString();
                timeStamp = DateTime.Now.Millisecond.ToString();
                itemNum++;
                context.Logger.LogLine(timeStamp);
                context.Logger.LogLine($"Twitter ID is: {twitterID}");
                context.Logger.LogLine(itemNum.ToString());

                TwitterItem ddbItem = new TwitterItem()
                { 
                    TwitterID = twitterID,
                    TwitterUser = twitterObj["username"].ToString(),
                    TwitterUserPic = twitterObj["pic"].ToString(),
                    TwitterTime = DateTime.Parse(twitterObj["time"].ToString()).ToUniversalTime().ToString(),
                    Tweet = twitterObj["text"].ToString(),
                    TweetTopic = twitterObj["topic"].ToString(),
                    Tags = twitterObj["hashtags"] != null ? twitterObj["hashtags"].ToString() : String.Empty,
                    Location = twitterObj["loc"] != null ? twitterObj["loc"].ToString() : String.Empty,
                    Country = twitterObj["country"] != null ? twitterObj["country"].ToString() : String.Empty,
                    TimeStamp =  timeStamp,
                    RecordNum = itemNum
                };
                
                dataItemsBatch.Add(ddbItem);
            }

            context.Logger.LogLine(JArray.FromObject(dataItemsBatch).ToString());
            ddbItemsWrite(dataItemsBatch, 0, dbContext, context);
            context.Logger.LogLine("Success - Completed Successfully");
            context.Logger.LogLine("Stream processing complete.");
        }

There are only a few differences that should be noted between our Kinesis stream processor written in C# and our original Node.js code. Since the Kinesis record data arrives as a System.IO.Stream, the base64 string is decoded by using a StreamReader with ASCII encoding in a blueprint-provided function, GetRecordContents.

 

private string GetRecordContents(KinesisEvent.Record streamRecord)
{
    using (var reader = new StreamReader(streamRecord.Data, Encoding.ASCII))
    {
        return reader.ReadToEnd();
    }
}

The other thing to note is that in order to write the tweet data to the DynamoDB table, I added the AWS .NET SDK NuGet package for DynamoDB, AWSSDK.DynamoDBv2, to the Lambda function project via the NuGet package manager within Visual Studio. I also created a .NET data object, TwitterItem, to map to the data being stored in the DynamoDB table. Using the object persistence model, the AWS .NET SDK’s higher-level programming interface for DynamoDB, I created a collection of TwitterItem objects to be written via the BatchWrite class in our ddbItemsWrite C# function.

private async void ddbItemsWrite(List<TwitterItem> items, int retries, DynamoDBContext ddbContext, ILambdaContext context)
{
    BatchWrite<TwitterItem> twitterStreamBatchWrite = ddbContext.CreateBatchWrite<TwitterItem>();

    try
    {
        twitterStreamBatchWrite.AddPutItems(items);
        await twitterStreamBatchWrite.ExecuteAsync();
    }
    catch (Exception ex)
    {
        context.Logger.LogLine($"DDB call failed: {ex.Source} ");
        context.Logger.LogLine($"Exception: {ex.Message}");
        context.Logger.LogLine($"Exception Stacktrace: {ex.StackTrace}");
    }
}

Another benefit of using AWS Toolkit for Visual Studio to author my C# Lambda function is that I can deploy my Lambda function directly to AWS with a single click.  Selecting my project name in the Solution Explorer and performing a right-click, I get a menu option, Publish to AWS Lambda, which brings up a menu for information to include about my Lambda function for deployment to AWS.

It is important to note that the handler function signature follows the nomenclature of Assembly :: Namespace :: ClassName :: Method, therefore, the signature of our C# Lambda function shown here is: ServerlessStreamProcessor :: ServerlessStreamProcessor.LambdaTwitterStream :: FunctionHandler.  We provide this information to the Upload to AWS Lambda dialog box and select Next to assign a role for the function.

Upon completion, you can test in the Lambda console or in Visual Studio with the AWS Toolkit plugin (shown below), using the sample data of the triggering event source for an iterative approach to developing the Lambda function.

You can learn more about authoring AWS Lambda functions using the C# Language in the AWS Lambda developer guide or by reading the post announcing C# Support on the Compute Blog.

API Gateway Monetization and Developer Portal

If you have been following the microservices momentum, you may be aware of an architectural pattern that calls for using smart endpoints and/or using an API gateway via REST APIs to manage access and exposure of the individual services that make up a microservices solution. Amazon API Gateway enables the creation and management of RESTful APIs to expose AWS Lambda functions, external HTTP endpoints, as well as other AWS services. In addition, Amazon API Gateway allows clients and external developers to access deployed APIs via the HTTP protocol or a platform- or language-targeted SDK.

With the introduction of SaaS Subscriptions on AWS Marketplace and the API Gateway integration with the AWS Marketplace, you can now monetize your APIs by allowing customers to directly consume the APIs you create with API Gateway in the AWS Marketplace. AWS customers can subscribe to and be billed for the APIs published on the marketplace with their existing AWS account. The process to get started is easy.

To get started, you must ensure that you have enabled the Usage Plan feature in Amazon API Gateway.

Once enabled, the next step is to create a usage plan, enable throttling (if desired) with targeted rate and burst request thresholds, and finally enable quotas (if you choose) by providing a targeted request quota per a set time frame.
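If you prefer the AWS CLI over the console, a usage plan with throttling and a quota can be created along these lines (the plan name comes from this walk-through; the limits shown are illustrative):

aws apigateway create-usage-plan \
    --name StreamingPlan \
    --throttle burstLimit=200,rateLimit=100 \
    --quota limit=10000,offset=0,period=MONTH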

Next, we choose the APIs and related stage(s) that we wish to associate with the usage plan. Please note that this is an optional step, as you can opt not to associate a specific API with your usage plan.

All that is left to do is add or create an API key for the usage plan.  Again, it should be noted that this is also an optional step in creating your usage plan.

Now that we have our usage plan, StreamingPlan, we are ready for the next step in preparation for selling our API on the marketplace. You have the option to create multiple usage plans with varying APIs and limits, and sell these plans as differentiated API products on AWS Marketplace.

In order to enable customers to buy our new API product, however, the AWS Marketplace requires that each API product has an external developer portal to handle subscription requests, provide API information details, and allow for the management of usage.

This customer need for an external developer portal for the marketplace birthed the new open source API Gateway developer portal serverless web application implementation.  The goal of the API Gateway developer portal project was to allow customers to follow a few easy steps to create a serverless web application that lists a catalog of your APIs built with API Gateway while allowing for developer signups.

The API Gateway developer portal was built upon AWS Serverless Express, an open source library published by AWS that helps you use AWS Lambda and Amazon API Gateway to build web applications and services with the Node.js Express framework. Additionally, the API Gateway developer portal application uses an AWS SAM (Serverless Application Model) template to deploy its serverless resources. AWS SAM is a simplified CloudFormation template format and specification that allows easier management and deployment of serverless applications on AWS.

To build your developer portal using the API Gateway portal, you would start by cloning the aws-api-gateway-developer-portal project from GitHub.

Assuming you have the latest version of the AWS CLI and Node.js installed, you set up the developer portal by running “npm run setup” on the command line on Mac and Linux. On Windows, you run “npm run win-setup” instead.

The result is a functional sample developer portal website running on S3 that you can customize in order to create your own developer portal for your APIs.

The frontend of the sample developer portal website is built with the React JavaScript library, and the backend is an AWS Lambda function running using the aws-serverless-express library. Additionally, a Lambda function with a SNS event source was created as a listener for notification when customers subscribe or unsubscribe to your API via the AWS Marketplace console.  You can learn more about the steps to build, customize, and deploy your API Gateway developer portal web application with this reference project by visiting the AWS Compute blog post which discusses the architecture and implementation in more detail.

 

The next key step in monetizing our API is establishing an account on the AWS Marketplace. If an account is not already established, registering is simply a matter of verifying that you meet the prerequisites provided in the AWS Marketplace Seller Guide and completing a seller registration form on the AWS Marketplace Management Portal. You can see a snapshot of the start of the seller registration form below.

To list the API, you fill out a product load form describing the API, establish the pricing for the API, and provide the IDs of the AWS accounts that will test the API subscription process. Completing this form also requires you to submit the URL for your API developer portal.

When your seller registration is complete, you will be supplied an AWS Marketplace product code.  You will need to associate your marketplace product code with your API usage plan.  In order to complete this step, you would simply log into the API Gateway console and go to your API usage plan. Go to the Marketplace tab and enter your product code. This tells API Gateway to send measurement data to AWS Marketplace when your API is used.

With your Amazon API Gateway managed API packaged into a usage plan, the accompanying API developer portal created, seller account registration completed, and product code associated with API usage plan; we are now ready to monetize our API on the AWS Marketplace.

Learn more about monetizing your APIs created with API Gateway by checking out the related blog post and reviewing the API Gateway developer guide documentation.

Summary

As you can see, the AWS teams were busy in 2016 working to make the customer experience easier for creating and deploying serverless architectures, as well as providing mechanisms for customers to generate and monetize their API Gateway managed APIs.

Visit the product documentation for AWS Lambda and Amazon API Gateway to learn more about these services and all the newly released features.

Tara

Authorizing Access Through a Proxy Resource to Amazon API Gateway and AWS Lambda Using Amazon Cognito User Pools

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/authorizing-access-through-a-proxy-resource-to-amazon-api-gateway-and-aws-lambda-using-amazon-cognito-user-pools/


Ed Lima, Solutions Architect

Want to create your own user directory that can scale to hundreds of millions of users? Amazon Cognito user pools are fully managed so that you don’t have to worry about the heavy lifting associated with building, securing, and scaling authentication to your apps.

The AWS Mobile blog post Integrating Amazon Cognito User Pools with API Gateway back in May explained how to integrate user pools with Amazon API Gateway using an AWS Lambda custom authorizer. Since then, we’ve released a new feature where you can directly configure a Cognito user pool authorizer to authenticate your API calls; more recently, we released a new proxy resource feature. In this post, I show how to use these new great features together to secure access to an API backed by a Lambda proxy resource.

Walkthrough

In this post, I assume that you have some basic knowledge about the services involved. If not, feel free to review our documentation and tutorials on:

Start by creating a user pool called “myApiUsers”, and enable verifications with optional MFA access for extra security:

cognitouserpoolsauth_1.png

Be mindful that if you are using a similar solution for production workloads, you will need to request an SMS spending threshold limit increase from Amazon SNS in order to send SMS messages to users for phone number verification or for MFA. For the purposes of this article, since we are only testing our API authentication with a single user, the default limit will suffice.

Now, create an app in your user pool, making sure to clear Generate client secret:

cognitouserpoolsauth_2.png

Using the client ID of your newly created app, add a user, “jdoe”, with the AWS CLI. The user needs a valid email address and phone number to receive MFA codes:

aws cognito-idp sign-up \
--client-id 12ioh8c17q3stmndpXXXXXXXX \
--username jdoe \
--password [email protected] \
--region us-east-1 \
--user-attributes '[{"Name":"given_name","Value":"John"},{"Name":"family_name","Value":"Doe"},{"Name":"email","Value":"[email protected]"},{"Name":"gender","Value":"Male"},{"Name":"phone_number","Value":"+61XXXXXXXXXX"}]'  

In the Cognito User Pools console, under Users, select the new user and choose Confirm User and Enable MFA:

cognitouserpoolsauth_3.png

Your Cognito user is now ready and available to connect.

Next, create a Node.js Lambda function called LambdaForSimpleProxy with a basic execution role. Here’s the code:

'use strict';
console.log('Loading CUP2APIGW2Lambda Function');

exports.handler = function(event, context) {
    var responseCode = 200;
    console.log("request: " + JSON.stringify(event));
    
    var responseBody = {
        message: "Hello, " + event.requestContext.authorizer.claims.given_name + " " + event.requestContext.authorizer.claims.family_name +"!" + " You are authenticated to your API using Cognito user pools!",
        method: "This is an authorized "+ event.httpMethod + " to Lambda from your API using a proxy resource.",
        body: event.body
    };

    //Response including CORS required header
    var response = {
        statusCode: responseCode,
        headers: {
            "Access-Control-Allow-Origin" : "*"
        },
        body: JSON.stringify(responseBody)
    };

    console.log("response: " + JSON.stringify(response))
    context.succeed(response);
};

For the last piece of the back-end puzzle, create a new API called CUP2Lambda from the Amazon API Gateway console. Under Authorizers, choose Create, Cognito User Pool Authorizer with the following settings:

cognitouserpoolsauth_4.png

Create an ANY method under the root of the API as follows:

cognitouserpoolsauth_5.png

After that, choose Save, OK to give API Gateway permissions to invoke the Lambda function. It’s time to configure the authorization settings for your ANY method. Under Method Request, enter the Cognito user pool as the authorization for your API:

cognitouserpoolsauth_6.png

Finally, choose Actions, Enable CORS. This creates an OPTIONS method in your API:

cognitouserpoolsauth_7.png

Now it’s time to deploy the API to a stage (such as prod) and generate a JavaScript SDK from the SDK Generation tab. You can use other methods to connect to your API; however, in this article, I show how to use the API Gateway SDK. Because the API uses an ANY method, the generated SDK does not have calls for specific methods other than the OPTIONS method created by Enable CORS, so you have to add a couple of extra functions to the apigClient.js file so that your SDK can perform GET and POST operations to your API:


    apigClient.rootGet = function (params, body, additionalParams) {
        if(additionalParams === undefined) { additionalParams = {}; }
        
        apiGateway.core.utils.assertParametersDefined(params, [], ['body']);       

        var rootGetRequest = {
            verb: 'get'.toUpperCase(),
            path: pathComponent + uritemplate('/').expand(apiGateway.core.utils.parseParametersToObject(params, [])),
            headers: apiGateway.core.utils.parseParametersToObject(params, []),
            queryParams: apiGateway.core.utils.parseParametersToObject(params, []),
            body: body
        };
        

        return apiGatewayClient.makeRequest(rootGetRequest, authType, additionalParams, config.apiKey);
    };

    apigClient.rootPost = function (params, body, additionalParams) {
        if(additionalParams === undefined) { additionalParams = {}; }
     
        apiGateway.core.utils.assertParametersDefined(params, ['body'], ['body']);
       
        var rootPostRequest = {
            verb: 'post'.toUpperCase(),
            path: pathComponent + uritemplate('/').expand(apiGateway.core.utils.parseParametersToObject(params, [])),
            headers: apiGateway.core.utils.parseParametersToObject(params, []),
            queryParams: apiGateway.core.utils.parseParametersToObject(params, []),
            body: body
        };
        
        return apiGatewayClient.makeRequest(rootPostRequest, authType, additionalParams, config.apiKey);

    };

You can now use a little front end web page to authenticate users and test authorized calls to your API. In order for it to work, you need to add some external libraries and dependencies including the API Gateway SDK you just generated. You can find more details in our Cognito as well as API Gateway SDK documentation guides.

With the dependencies in place, you can use the following JavaScript code to authenticate your Cognito user pool user and connect to your API in order to perform authorized calls (replace your own user pool Id and client ID details accordingly):

<script type="text/javascript">
 //Configure the AWS client with the Cognito role and a blank identity pool to get initial credentials

  AWS.config.update({
    region: 'us-east-1',
    credentials: new AWS.CognitoIdentityCredentials({
      IdentityPoolId: ''
    })
  });

  AWSCognito.config.region = 'us-east-1';
  AWSCognito.config.update({accessKeyId: 'null', secretAccessKey: 'null'});
  var token = "";
 
  //Authenticate user with MFA

  document.getElementById("buttonAuth").addEventListener("click", function(){  
    var authenticationData = {
      Username : document.getElementById('username').value,
      Password : document.getElementById('password').value,
      };

    var showGetPut = document.getElementById('afterLogin');
    var hideLogin = document.getElementById('login');

    var authenticationDetails = new AWSCognito.CognitoIdentityServiceProvider.AuthenticationDetails(authenticationData);

   // Replace with your user pool details

    var poolData = { 
        UserPoolId : 'us-east-1_XXXXXXXXX', 
        ClientId : '12ioh8c17q3stmndpXXXXXXXX', 
        Paranoia : 7
    };

    var userPool = new AWSCognito.CognitoIdentityServiceProvider.CognitoUserPool(poolData);

    var userData = {
        Username : document.getElementById('user').value,
        Pool : userPool
    };

    var cognitoUser = new AWSCognito.CognitoIdentityServiceProvider.CognitoUser(userData);
    cognitoUser.authenticateUser(authenticationDetails, {
      onSuccess: function (result) {
        token = result.getIdToken().getJwtToken(); // CUP Authorizer = ID Token
        console.log('ID Token: ' + result.getIdToken().getJwtToken()); // Show ID Token in the console
        var cognitoGetUser = userPool.getCurrentUser();
        if (cognitoGetUser != null) {
          cognitoGetUser.getSession(function(err, result) {
            if (result) {
              console.log ("User Successfuly Authenticated!");  
            }
          });
        }

        //Hide Login form after successful authentication
        showGetPut.style.display = 'block';
        hideLogin.style.display = 'none';
      },
    onFailure: function(err) {
        alert(err);
    },
    mfaRequired: function(codeDeliveryDetails) {
            var verificationCode = prompt('Please input a verification code.' ,'');
            cognitoUser.sendMFACode(verificationCode, this);
        }
    });
  });

//Send a GET request to the API

document.getElementById("buttonGet").addEventListener("click", function(){
  var apigClient = apigClientFactory.newClient();
  var additionalParams = {
      headers: {
        Authorization: token
      }
    };

  apigClient.rootGet({},{},additionalParams)
      .then(function(response) {
        console.log(JSON.stringify(response));
        document.getElementById("output").innerHTML = ('<pre align="left"><code>Response: '+JSON.stringify(response.data, null, 2)+'</code></pre>');
      }).catch(function (response) {
        document.getElementById('output').innerHTML = ('<pre align="left"><code>Error: '+JSON.stringify(response, null, 2)+'</code></pre>');
        console.log(response);
    });
});

//Send a POST request to the API

document.getElementById("buttonPost").addEventListener("click", function(){
  var apigClient = apigClientFactory.newClient();
  var additionalParams = {
      headers: {
        Authorization: token
      }
    };
    
 var body = {
        "message": "Sample POST payload"
  };

  apigClient.rootPost({},body,additionalParams)
      .then(function(response) {
        console.log(JSON.stringify(response));
        document.getElementById("output").innerHTML = ('<pre align="left"><code>Response: '+JSON.stringify(response.data, null, 2)+'</code></pre>');
      }).catch(function (response) {
        document.getElementById('output').innerHTML = ('<pre align="left"><code>Error: '+JSON.stringify(response, null, 2)+'</code></pre>');
        console.log(response);
    });
});
</script>

As far as the front end is concerned you can use some simple HTML code to test, such as the following snippet:

<body>
<div id="container" class="container">
    <br/>
    <img src="http://awsmedia.s3.amazonaws.com/AWS_Logo_PoweredBy_127px.png">
    <h1>Cognito User Pools and API Gateway</h1>
    <form name="myform">
        <ul>
          <li class="fields">
            <div id="login">
            <label>User Name: </label>
            <input id="username" size="60" class="req" type="text"/>
            <label>Password: </label>
            <input id="password" size="60" class="req" type="password"/>
            <button class="btn" type="button" id='buttonAuth' title="Log in with your username and password">Log In</button>
            <br />
            </div>
            <div id="afterLogin" style="display:none;"> 
            <br />
            <button class="btn" type="button" id='buttonPost'>POST</button>
            <button class="btn" type="button" id='buttonGet' >GET</button>
            <br />
          </li>
        </ul>
      </form>
  <br/>
    <div id="output"></div>
  <br/>         
  </div>        
  <br/>
  </div>
</body>

After adding some extra CSS styling of your choice (for example adding "list-style: none" to remove list bullet points), the front end is ready. You can test it by using a local web server in your computer or a static website on Amazon S3.

Enter the user name and password details for John Doe and choose Log In:

cognitouserpoolsauth_8.png

A MFA code is then sent to the user and can be validated accordingly:

cognitouserpoolsauth_9.png

After authentication, you can see the ID token generated by Cognito for further access testing:

cognitouserpoolsauth_10.png

If you go back to the API Gateway console and test your Cognito user pool authorizer with the same token, you get the authenticated user claims accordingly:

cognitouserpoolsauth_11.png

In your front end, you can now perform authenticated GET calls to your API by choosing GET.

cognitouserpoolsauth_12.png

Or you can perform authenticated POST calls to your API by choosing POST.

cognitouserpoolsauth_13.png

The calls reach your Lambda proxy and return a valid response accordingly. You can also test from the command line using cURL, by sending the user pool ID token that you retrieved from the developer console earlier, in the “Authorization” header:

cognitouserpoolsauth_14.png

It’s possible to improve this solution by integrating an Amazon DynamoDB table, for instance. You could detect the method request on event.httpMethod in the Lambda function and issue a GetItem call to a table for a GET request or a PutItem call to a table for a POST request. There are lots of possibilities for this kind of proxy resource integration.
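As a sketch of that idea, the handler could branch on the incoming method. The table name and key shape below are illustrative assumptions, not part of the walkthrough above:

'use strict';
const AWS = require('aws-sdk');
const dynamo = new AWS.DynamoDB.DocumentClient();

exports.handler = (event, context) => {
    // Build a Lambda proxy integration response
    const respond = (err, res) => context.succeed({
        statusCode: err ? 400 : 200,
        headers: { "Access-Control-Allow-Origin": "*" },
        body: err ? JSON.stringify({ error: err.message }) : JSON.stringify(res)
    });

    if (event.httpMethod === 'GET') {
        // Read an item keyed by a query string parameter, e.g. ?id=123
        dynamo.get({ TableName: 'myApiTable', Key: { id: event.queryStringParameters.id } }, respond);
    } else if (event.httpMethod === 'POST') {
        // Store the POST payload as an item
        dynamo.put({ TableName: 'myApiTable', Item: JSON.parse(event.body) }, respond);
    } else {
        respond(new Error('Unsupported method: ' + event.httpMethod));
    }
};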

Summary

The Cognito user pools integration with API Gateway provides a new way to secure your API workloads, and the new proxy resource for Lambda allows you to perform any business logic or transformations to your API calls from Lambda itself instead of using body mapping templates. These new features provide very powerful options to secure and handle your API logic.

I hope this post helps with your API workloads. If you have questions or suggestions, please comment below.

Binary Support for API Integrations with Amazon API Gateway

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/binary-support-for-api-integrations-with-amazon-api-gateway/

A year ago, the Microservices without the Servers post showed how Lambda can be used for creating image thumbnails. This required the client to Base64 encode the binary image file before calling the image conversion API as well as Base64 decode the response before it could be rendered. With the recent Amazon API Gateway launch of Binary Support for API Integrations, you can now specify media types that you want API Gateway to treat as binary and send and receive binary data through API endpoints hosted on Amazon API Gateway.

After this feature is configured, you can specify if you would like API Gateway to either pass the Integration Request and Response bodies through, convert them to text (Base64 encoding), or convert them to binary (Base64 decoding). These options are available for HTTP, AWS Service, and HTTP Proxy integrations. In the case of Lambda Function and Lambda Function Proxy Integrations, which currently only support JSON, the request body is always converted to JSON.

In this post, I show how you can use the new binary support in API Gateway to turn this Lambda function into a public thumbnail service: you include a binary image file in a POST request, and API Gateway and Lambda return a thumbnail version of that same image.

Walkthrough

To get started, log in to the AWS Management Console to set up a Lambda integration, using the image-processing-service blueprint.

Create the Lambda function

In the Lambda console, choose Create a Lambda Function.

In the blueprint filter step, for Select runtime, type in ‘image’ and then choose image-processing-service.

Do not set up a trigger. Choose Next.

In the Configure function step, specify the function name, such as ‘thumbnail’.

In the Lambda function handler and role step, for Role, choose Create new role from template(s), and specify the role name (e.g., ‘myMicroserviceRole’). Finally, choose Next. For more details, see AWS Lambda Permissions Model.

Review your Lambda function configuration and choose Create Function.

You have now successfully created the Lambda function that will create a thumbnail.

Create an API and POST method

In this section, you set up an API Gateway thumbnail API to expose a publicly accessible RESTful endpoint.

In the API Gateway console, choose Create API.

For API name, enter ‘Thumbnail’, add a description, and choose Create API.

In the created API, choose Resources, Actions, and Create Method.

To create the method, choose POST and select the checkmark.

To set up the POST method, for Integration type, select Lambda Function, select the appropriate Lambda region, and enter ‘thumbnail’ for Lambda Function. Choose Save.

In the Add Permission to Lambda Function dialog box, choose OK to enable API Gateway to invoke the ‘thumbnail’ Lambda function.

Set up the integration

Now, you are ready to set up the integration. In the main page, open Integration Request.

On the Integration Request page, expand Body Mapping Templates.

For Request body passthrough, choose When there are no templates defined (recommended). For Content-Type, enter "image/png".

Choose Add mapping template and add the following template. The thumbnail Lambda function requires that you pass an operation to execute, in this case "thumbnail", and the image payload, "base64Image", which here is "$input.body". Review the following JSON and choose Save.
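{
    "operation": "thumbnail",
    "base64Image": "$input.body"
}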

Specify which media types need to be handled as binary. Choose [API name], Binary Support.

Choose Edit, specify the media type (such as "image/png") to be handled as binary, and then choose Save.
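If you maintain your API definition as a Swagger file instead of configuring it in the console, the same setting can be expressed with the x-amazon-apigateway-binary-media-types extension. A minimal sketch (the title is illustrative):

{
  "swagger": "2.0",
  "info": { "title": "Thumbnail", "version": "1.0" },
  "x-amazon-apigateway-binary-media-types": ["image/png"],
  "paths": {}
}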

Deployment

Now that the API is configured, you need to deploy it. On the thumbnail Resources page, choose Actions, Deploy API.

For Deployment stage, select [New Stage], specify a stage name, and then choose Deploy.

A stage has been created for you; you receive the Invoke URL value to be used for your thumbnail API.

Testing

Now, you are ready to test the newly created API. Download your favorite .png image (such as apigateway.png), and issue the following curl command. Update the .png image file name and the Invoke URL value accordingly.

$ curl --request POST -H "Accept: image/png" -H "Content-Type: image/png" --data-binary "@apigateway.png" https://XXXXX.execute-api.us-east-1.amazonaws.com/prod > apigateway-thumb.png

You should now be able to open the created images in your favorite image viewer to confirm that resizing has occurred.

Summary

This is just one example of how you can leverage the new binary support capabilities in API Gateway. For more examples, see the API Gateway Payload Encodings topic in the Amazon API Gateway Developer Guide.

If you have questions or suggestions, please comment below.

AWS Week in Review – October 31, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-october-31-2016/

Over 25 internal and external contributors helped out with pull requests and fresh content this week! Thank you all for your help and your support.

Monday

October 31

Tuesday

November 1

Wednesday

November 2

Thursday

November 3

Friday

November 4

Saturday

November 5

Sunday

November 6

New & Notable Open Source

New Customer Success Stories

  • Apposphere – Using AWS and bitfusion.io from the AWS Marketplace, Apposphere can scale 50 to 60 percent month-over-month while keeping customer satisfaction high. Based in Austin, Texas, the Apposphere mobile app delivers real-time leads from social media channels.
  • CADFEM – CADFEM uses AWS to make complex simulation software more accessible to smaller engineering firms, helping them compete with much larger ones. The firm specializes in simulation software and services for the engineering industry.
  • Mambu – Using AWS, Mambu helped one of its customers launch the United Kingdom’s first cloud-based bank, and the company is now on track for tenfold growth, giving it a competitive edge in the fast-growing fintech sector. Mambu is an all-in-one SaaS banking platform for managing credit and deposit products quickly, simply, and affordably.
  • Okta – Okta uses AWS to get new services into production in days instead of weeks. Okta creates products that use identity information to grant people access to applications on multiple devices at any time, while still enforcing strong security protections.
  • PayPlug – PayPlug is a startup created in 2013 that developed an online payment solution. It differentiates itself by the simplicity of its services and its ease of integration on e-commerce websites.
  • Rent-a-Center – Rent-a-Center is a leading renter of furniture, appliances, and electronics to customers in the United States, Canada, Puerto Rico, and Mexico. Rent-A-Center uses AWS to manage its new e-commerce website, scale to support a 1,000 percent spike in site traffic, and enable a DevOps approach.
  • UK Ministry of Justice – By going all in on the AWS Cloud, the UK Ministry of Justice (MoJ) can use technology to enhance the effectiveness and fairness of the services it provides to British citizens. The MoJ is a ministerial department of the UK government. MoJ had its own on-premises data center, but lacked the ability to change and adapt rapidly to the needs of its citizens. As it created more digital services, MoJ turned to AWS to automate, consolidate, and deliver constituent services.

New SlideShare Presentations

New YouTube Videos

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Migrating a Native JAVA REST API to a Serverless Architecture with the Lambada Framework for AWS

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/migrating-a-native-java-rest-api-to-a-serverless-architecture-with-the-lambada-framework-for-aws/

This is a guest post by Çağatay Gürtürk, the creator of the Lambada framework

Serverless computing has become a hot topic since AWS Lambda and Amazon API Gateway started to offer an elegant way to build and deploy REST APIs, with attractive pricing models, without needing to maintain 24/7 running servers and infrastructure.

Being the first language offered by Lambda, Node.js seems to have the most online resources and tools, but it is also possible to write Lambda functions natively in Java and Python. Java is especially interesting as a language because of its maturity, large community, and available codebase. With Lambda and Java, it is even possible to apply enterprise patterns and frameworks such as Spring, as well as all the best practices we are used to applying in the Java world.

In order to make development for Lambda in Java easier, I started Lambada Framework as an open source project. It is a small but powerful open source project, in beta stage, that lets developers create a new serverless API on AWS infrastructure or migrate an existing one.

Lambada Framework accomplishes this by implementing the most common JAX-RS annotations and providing a Maven plugin for easy deployment to the AWS cloud. Briefly, JAX-RS is a standard annotation set that can be used to map regular Java methods to HTTP paths and methods. For instance, consider the following method:

@GET
@Path("/helloworld/{id}")
public Response indexEndpoint(@PathParam("id") int id) {
    // Return a 200 response with a plain-text body.
    return Response.status(200).entity("Hello world: " + id).build();
}

This is a very lean method marked with the @GET and @Path annotations, which mean that this method is called when a GET request arrives at a URL in the "/helloworld/{id}" format, with the id path parameter bound to the method argument. Finally, the method returns a Response object with a 200 status code and text content. As you can see, these annotations offer a seamless way to define a REST API and map different resources to Java methods.

JAX-RS annotations on their own do not do much, and they have no effect out of the box. To make these annotations work, a JAX-RS implementation framework must be added to the project. This framework scans all the JAX-RS annotations in the project and creates a server and routing table to respond to HTTP requests correctly. Jersey is the reference implementation and the most popular one, but there are other implementations of JAX-RS, such as RESTEasy and Apache CXF. You are free to choose any of them, and your controller methods always stay the same, thanks to the standard annotations.

Lambada Framework is a JAX-RS implementation, but a different kind: instead of running a web server, it scans the JAX-RS annotations at build time and creates Lambda functions and API Gateway definitions from them.

This means that if you already marked your controller methods with JAX-RS annotations and used a framework like Jersey or RESTEasy, you can switch to a serverless architecture with very few modifications to your code. You only have to change your build mechanism and replace your preferred JAX-RS implementation with Lambada Framework.

In the following example, you see how to deploy a very basic REST API to Lambda.

  1. First, clone the example project to your local directory:
    git clone https://github.com/lambadaframework/lambadaframework-boilerplate
  2. This project has a pom.xml file with some configuration options. You must change the deployment.bucket option; other changes are up to you. Lambada Framework creates this bucket in your account if it does not exist, and uses it throughout your project's lifetime. S3 bucket names are global and must be unique, so pick a name that is not already taken by anyone else.

  3. Make sure that the default AWS profile installed on your system has administrator privileges, or at least the following IAM policy:

     {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Effect": "Allow",
                 "Action": [
                     "cloudformation:*",
                     "s3:*",
                     "lambda:*",
                     "execute-api:*",
                     "apigateway:*",
                     "iam:*",
                     "ec2:DescribeSecurityGroups",
                     "ec2:DescribeVpcs",
                     "ec2:DescribeSubnets"
                 ],
                 "Resource": [
                     "*"
                 ]
             }
         ]
     }
  4. Now you are all set. In the root directory of your project, run the following command:

    mvn deploy

Your project compiles to a fat JAR with all its dependencies and is deployed to Lambda. After the JAR file is uploaded to the S3 bucket, Lambada scans it for supported JAX-RS annotations in your code and creates the necessary API Gateway endpoints. At the end of the process, the URL of your API is printed to the screen. You can navigate to this URL and explore your API in the AWS Management Console to see which resources and methods were created.
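For a quick smoke test of the /helloworld/{id} endpoint shown earlier, you can call the deployed API with curl. The URL below is only a placeholder for the one that Lambada prints; substitute your own API ID, Region, and stage:

curl https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/helloworld/42
# Expected output: Hello world: 42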

Lambada Framework is under active development, and support for missing JAX-RS annotations is being added. Follow the Lambada Framework GitHub page for the newest features, and feel free to submit issues or contributions.

Happy serverless computing!

Error Handling Patterns in Amazon API Gateway and AWS Lambda

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/error-handling-patterns-in-amazon-api-gateway-and-aws-lambda/


Ryan Green @ryangtweets
Software Development Engineer, API Gateway

A common API design practice is to define an explicit contract for the types of error responses that the API can produce. This allows API consumers to implement a robust error-handling mechanism which may include user feedback or automatic retries, improving the usability and reliability of applications consuming your API.

In addition, well-defined input and output contracts, including error outcomes, allow strongly-typed SDK client generation, which further improves the application developer experience. Similarly, your API backend should be prepared to handle the various types of errors that may occur and surface them to the client via the API response.

This post discusses some recommended patterns and tips for handling error outcomes in your serverless API built on Amazon API Gateway and AWS Lambda.

HTTP status codes

In HTTP, error status codes are generally divided between client (4xx) and server (5xx) errors. It's up to your API to determine which errors are appropriate for your application. The following list shows some common patterns of basic API errors.

  • Data Validation – 400 (Bad Request): The client sent invalid data in the request, for example, missing or incorrect content in the payload or parameters. This can also represent a generic client error.
  • Authentication/Authorization – 401 (Unauthorized), 403 (Forbidden): The client is not authenticated (401) or is not authorized to access the requested resource (403).
  • Invalid Resource – 404 (Not Found): The client is attempting to access a resource that doesn't exist.
  • Throttling – 429 (Too Many Requests): The client is sending more than the allowed number of requests per unit time.
  • Dependency Issues – 502 (Bad Gateway), 504 (Gateway Timeout): A dependent service is throwing errors (502) or timing out (504).
  • Unhandled Errors – 500 (Internal Server Error), 503 (Service Unavailable): The service failed in an unexpected way (500), or is failing but is expected to recover (503).

For more information about HTTP status codes, see sections 10.4 and 10.5 of RFC 2616 on the W3C website.

Routing Lambda function errors to API Gateway HTTP responses

In API Gateway, AWS recommends that you model the various types of HTTP responses that your API method may produce, and define a mapping from the various error outcomes in your backend Lambda implementation to these HTTP responses.

In Lambda, function error messages are always surfaced in the “errorMessage” field in the response. Here’s how it’s populated in the various runtimes:

Node.js (4.3):

exports.handler = function(event, context, callback) {
    callback(new Error("the sky is falling!"));
};

Java:

public class LambdaFunctionHandler implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String input, Context context) {
        throw new RuntimeException("the sky is falling!");
    }
}

Python:

def lambda_handler(event, context):
    raise Exception('the sky is falling!')

Each results in the following Lambda response body:

{
  "errorMessage" : "the sky is falling!",
  …
}

The routing of Lambda function errors to HTTP responses in API Gateway is achieved by pattern matching against this “errorMessage” field in the Lambda response. This allows various function errors to be routed to API responses with an appropriate HTTP status code and response body.

The Lambda function must exit with an error in order for the response pattern to be evaluated; it is not possible to "fake" an error response by simply returning an "errorMessage" field in a successful Lambda response.
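To illustrate, the following minimal Node.js 4.3 sketch returns an "errorMessage" field in a successful response; API Gateway treats the invocation as a success and never evaluates the error selection patterns:

exports.handler = (event, context, callback) => {
    // The first argument (the error) is null, so this invocation succeeds.
    // API Gateway routes it to the default integration response, even though
    // the payload happens to contain an "errorMessage" field.
    callback(null, { errorMessage: "the sky is falling!" });
};

To trigger the error routing described above, the function must actually fail, for example by calling callback(new Error("the sky is falling!")).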

Note: Lambda invocations that fail due to a service error, that is, before your Lambda function code is executed, are not subject to the API Gateway routing mechanism. These errors include internal server errors, Lambda function or account throttling, and failures of Lambda to parse the request body. Generally, API Gateway returns these errors as a 500 response. AWS recommends using CloudWatch Logs to troubleshoot them.

API Gateway method response and integration response

In API Gateway, the various HTTP responses supported by your method are represented by method responses. These define an HTTP status code as well as a model schema for the expected shape of the payload for the response.

Model schemas are not required on method responses, but they enable support for strongly-typed SDK generation. For example, the generated SDKs can unmarshal your API error responses into appropriate exception types, which are thrown from the SDK client.
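For reference, a model schema for the error shape used throughout this post might look like the following JSON schema; it mirrors the Error definition from the Swagger example at the end of this post:

{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "Error",
  "type": "object",
  "properties": {
    "message": { "type": "string" },
    "type": { "type": "string" },
    "request-id": { "type": "string" }
  }
}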

The mapping from a Lambda function error to an API Gateway method response is defined by an integration response. An integration response defines a selection pattern used to match the Lambda function "errorMessage" field and route it to the associated method response.

Note: API Gateway uses Java pattern-style regexes for response mapping. For more information, see Pattern in the Oracle documentation.

Example:

Lambda function (Node.js 4.3):

exports.handler = (event, context, callback) => {
    callback("the sky is falling!");
};

Lambda response body:

{
  "errorMessage": "the sky is falling!"
}

API Gateway integration response:

Selection pattern: "the sky is falling!"

Method response: 500

API Gateway response:

Status: 500

Response body:

{
  "errorMessage": "the sky is falling!"
}

In this example, API Gateway returns the Lambda response body verbatim, a.k.a. “passthrough”. It is possible to define mapping templates on the integration response to transform the Lambda response body into a different form for the API Gateway method response. This is useful when you want to format or filter the response seen by the API client.

When a Lambda function completes successfully, or if none of the integration response patterns match the error message, API Gateway responds with the default integration response (typically, HTTP status 200). For this reason, it is imperative that you design your integration response patterns so that they capture every possible error outcome from your Lambda function. Because the evaluation order is undefined, it is inadvisable to define a "catch-all" (that is, ".*") error pattern, which may be evaluated before the default response.

Common patterns for error handling in API Gateway and Lambda

There are many ways to structure your serverless API to handle error outcomes. The following sections identify two successful patterns to consider when designing your API.

Simple prefix-based

This common pattern uses a prefix in the Lambda error message string to route error types.

You would define a static set of prefixes, and create an integration response to capture each one and route it to the appropriate method response. An example mapping might look like the following (a sketch of a helper for producing these prefixed errors follows the table):

Prefix                 Method response status
[BadRequest]           400
[Forbidden]            403
[NotFound]             404
[InternalServerError]  500
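To keep these prefixes consistent across your code base, you might centralize them in a small helper. The following Node.js sketch is illustrative only; the apiError helper and the event fields are assumptions, not part of Lambda or API Gateway:

// Hypothetical helper that standardizes prefixed error messages.
function apiError(prefix, message) {
    return new Error('[' + prefix + '] ' + message);
}

exports.handler = (event, context, callback) => {
    if (!event.name) {
        // Matched by a "^\[BadRequest\].*" selection pattern and routed to 400.
        return callback(apiError('BadRequest', "Validation error: Missing field 'name'"));
    }
    callback(null, { greeting: 'Hello, ' + event.name });
};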

Example:

Lambda function (Node.js 4.3):

exports.handler = (event, context, callback) => {
    callback("[BadRequest] Validation error: Missing field 'name'");
};

Lambda output:

{
  "errorMessage": "[BadRequest] Validation error: Missing field 'name'"
}

API Gateway integration response:

Selection pattern: "^\[BadRequest\].*"

Method response: 400

API Gateway response:

Status: 400

Response body:

{
  "errorMessage": "[BadRequest] Validation error: Missing field 'name'"
}

If you don't want to expose the error prefix to API consumers, you can perform string processing within a mapping template and strip the prefix from the errorMessage field, as in the sketch below.
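One possible sketch of such a mapping template, assuming every error message begins with a "[Prefix] " marker (the variable names are arbitrary):

#set ($rawMessage = $input.path('$.errorMessage'))
## Skip past the closing bracket and the following space.
#set ($start = $rawMessage.indexOf("]") + 2)
{
  "message" : "$rawMessage.substring($start)"
}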

Custom error object serialization

Lambda functions can return a custom error object serialized as a JSON string, and fields in this object can be used to route to the appropriate API Gateway method response.

This pattern uses a custom error object with an “httpStatus” field and defines an explicit 1-to-1 mapping from the value of this field to the method response.

An API Gateway mapping template is defined to deserialize the custom error object and build a custom response based on the fields in the Lambda error.

Lambda function (Node.js 4.3):

exports.handler = (event, context, callback) => {
    var myErrorObj = {
        errorType : "InternalServerError",
        httpStatus : 500,
        requestId : context.awsRequestId,
        message : "An unknown error has occurred. Please try again."
    };
    // Serialize the custom error object into the errorMessage field.
    callback(JSON.stringify(myErrorObj));
};

Lambda function (Java):

public class LambdaFunctionHandler implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String input, Context context) {
        Map<String, Object> errorPayload = new HashMap<>();
        errorPayload.put("errorType", "BadRequest");
        errorPayload.put("httpStatus", 400);
        errorPayload.put("requestId", context.getAwsRequestId());
        errorPayload.put("message", "An unknown error has occurred. Please try again.");

        String message;
        try {
            // Serialize the custom error object into the errorMessage field.
            message = new ObjectMapper().writeValueAsString(errorPayload);
        } catch (JsonProcessingException e) {
            // Fall back to a minimal error payload if serialization fails.
            message = "{\"errorType\":\"InternalServerError\",\"httpStatus\":500}";
        }

        throw new RuntimeException(message);
    }
}

Note: This example uses the Jackson ObjectMapper for JSON serialization. For more information, see ObjectMapper on the FasterXML website.

Lambda output:

{
  "errorMessage": "{\"errorType\":\"InternalServerError\",\"httpStatus\":500,\"requestId\":\"40cd9bf6-0819-11e6-98f3-415848322efb\",\"message\":\"An unknown error has occurred. Please try again.\"}"
}

Integration response:

Selection pattern: ".*httpStatus\":500.*"

Method response: 500

Mapping template:

#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))
{
  "type" : "$errorMessageObj.errorType",
  "message" : "$errorMessageObj.message",
  "request-id" : "$errorMessageObj.requestId"
}

Note: This template makes use of the $util.parseJson() function to parse elements from the custom Lambda error object. For more information, see Accessing the $util Variable.

API Gateway response:

Status: 500

Response body:

{
  "type": "InternalServerError",
  "message": " An unknown error has occurred. Please try again.",
  "request-id": "e308b7b7-081a-11e6-9ab9-117c7feffb09"
}

This is a full Swagger example of the custom error object serialization pattern. This can be imported directly into API Gateway for testing or as a starting point for your API.

{
  "swagger": "2.0",
  "info": {
    "version": "2016-04-21T23:52:49Z",
    "title": "Best practices for API error responses with API Gateway and Lambda"
  },
  "schemes": [
    "https"
  ],
  "paths": {
    "/lambda": {
      "get": {
        "consumes": [
          "application/json"
        ],
        "produces": [
          "application/json"
        ],
        "parameters": [
          {
            "name": "status",
            "in": "query",
            "required": true,
            "type": "string"
          }
        ],
        "responses": {
          "200": {
            "description": "200 response",
            "schema": {
              "$ref": "#/definitions/Empty"
            }
          },
          "400": {
            "description": "400 response",
            "schema": {
              "$ref": "#/definitions/Error"
            }
          },
          "403": {
            "description": "403 response",
            "schema": {
              "$ref": "#/definitions/Error"
            }
          },
          "404": {
            "description": "404 response",
            "schema": {
              "$ref": "#/definitions/Error"
            }
          },
          "500": {
            "description": "500 response",
            "schema": {
              "$ref": "#/definitions/Error"
            }
          }
        },
        "x-amazon-apigateway-integration": {
          "responses": {
            "default": {
              "statusCode": "200"
            },
            ".\*httpStatus\\\":404.\*": {
              "statusCode": "404",
              "responseTemplates": {
                "application/json": "#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))\n#set ($bodyObj = $util.parseJson($input.body))\n{\n  \"type\" : \"$errorMessageObj.errorType\",\n  \"message\" : \"$errorMessageObj.message\",\n  \"request-id\" : \"$errorMessageObj.requestId\"\n}"
              }
            },
            ".\*httpStatus\\\":403.\*": {
              "statusCode": "403",
              "responseTemplates": {
                "application/json": "#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))\n#set ($bodyObj = $util.parseJson($input.body))\n{\n  \"type\" : \"$errorMessageObj.errorType\",\n  \"message\" : \"$errorMessageObj.message\",\n  \"request-id\" : \"$errorMessageObj.requestId\"\n}"
              }
            },
            ".\*httpStatus\\\":400.\*": {
              "statusCode": "400",
              "responseTemplates": {
                "application/json": "#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))\n#set ($bodyObj = $util.parseJson($input.body))\n{\n  \"type\" : \"$errorMessageObj.errorType\",\n  \"message\" : \"$errorMessageObj.message\",\n  \"request-id\" : \"$errorMessageObj.requestId\"\n}"
              }
            },
            ".\*httpStatus\\\":500.\*": {
              "statusCode": "500",
              "responseTemplates": {
                "application/json": "#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))\n#set ($bodyObj = $util.parseJson($input.body))\n{\n  \"type\" : \"$errorMessageObj.errorType\",\n  \"message\" : \"$errorMessageObj.message\",\n  \"request-id\" : \"$errorMessageObj.requestId\"\n}"
              }
            }
          },
          "httpMethod": "POST",
          "requestTemplates": {
            "application/json": "{\"failureStatus\" : $input.params('status')\n}"
          },
          "uri": "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/[MY_FUNCTION_ARN]/invocations",
          "type": "aws"
        }
      }
    }
  },
  "definitions": {
    "Empty": {
      "type": "object"
    },
    "Error": {
      "type": "object",
      "properties": {
        "message": {
          "type": "string"
        },
        "type": {
          "type": "string"
        },
        "request-id": {
          "type": "string"
        }
      }
    }
  }
}

Conclusion

There are many ways to represent errors in your API. While API Gateway and Lambda provide the basic building blocks, it is helpful to follow some best practices when designing your API. This post highlights a few successful patterns that we have identified, but we look forward to seeing other patterns emerge from our serverless API users.