All posts by Stefano Buliani

Building a Multi-region Serverless Application with Amazon API Gateway and AWS Lambda

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/building-a-multi-region-serverless-application-with-amazon-api-gateway-and-aws-lambda/

This post was written by: Magnus Bjorkman – Solutions Architect

Many customers are looking to run their services at global scale, deploying their backend to multiple regions. In this post, we describe how to deploy a Serverless API into multiple regions and how to leverage Amazon Route 53 to route the traffic between regions. We use latency-based routing and health checks to achieve an active-active setup that can fail over between regions in case of an issue. We leverage the new regional API endpoint feature in Amazon API Gateway to make this a seamless process for the API client making the requests. This post does not cover the replication of your data, which is another aspect to consider when deploying applications across regions.

Solution overview

Currently, the default API endpoint type in API Gateway is the edge-optimized API endpoint, which enables clients to access an API through an Amazon CloudFront distribution. This typically improves connection time for geographically diverse clients. By default, a custom domain name is globally unique and the edge-optimized API endpoint would invoke a Lambda function in a single region in the case of Lambda integration. You can’t use this type of endpoint with a Route 53 active-active setup and fail-over.

The new regional API endpoint in API Gateway moves the API endpoint into the region and the custom domain name is unique per region. This makes it possible to run a full copy of an API in each region and then use Route 53 to use an active-active setup and failover. The following diagram shows how you do this:

Active/active multi region architecture

  • Deploy your Rest API stack, consisting of API Gateway and Lambda, in two regions, such as us-east-1 and us-west-2.
  • Choose the regional API endpoint type for your API.
  • Create a custom domain name and choose the regional API endpoint type for that one as well. In both regions, you are configuring the custom domain name to be the same, for example, helloworldapi.replacewithyourcompanyname.com
  • Use the host name of the custom domain names from each region, for example, xxxxxx.execute-api.us-east-1.amazonaws.com and xxxxxx.execute-api.us-west-2.amazonaws.com, to configure record sets in Route 53 for your client-facing domain name, for example, helloworldapi.replacewithyourcompanyname.com

The above solution provides an active-active setup for your API across the two regions, but you are not doing failover yet. For that to work, set up a health check in Route 53:

Route 53 Health Check

A Route 53 health check must have an endpoint to call to check the health of a service. You could simply ping your actual Rest API methods, but it is better to provide a dedicated method on your Rest API that does a deep ping: a Lambda function that checks the status of all of the API’s dependencies.

In the case of the Hello World API, you don’t have any other dependencies. In a real-world scenario, you could check on dependencies such as databases, other APIs, and external services. Route 53 health checks cannot use your custom domain name endpoint’s DNS address, so you call the API endpoints directly via their region-unique endpoint DNS addresses.

Walkthrough

The following sections describe how to set up this solution. You can find the complete solution at the blog-multi-region-serverless-service GitHub repo. Clone or download the repository locally to be able to do the setup as described.

Prerequisites

You need the following resources to set up the solution described in this post:

  • AWS CLI
  • An S3 bucket in each region in which to deploy the solution, which can be used by the AWS Serverless Application Model (SAM). You can use the following CloudFormation templates to create buckets in us-east-1 and us-west-2:
    • us-east-1:
    • us-west-2:
  • A hosted zone registered in Amazon Route 53. This is used for defining the domain name of your API endpoint, for example, helloworldapi.replacewithyourcompanyname.com. You can use a third-party domain name registrar and then configure the DNS in Amazon Route 53, or you can purchase a domain directly from Amazon Route 53.

Deploy API with health checks in two regions

Start by creating a small “Hello World” Lambda function that sends back a message in the region in which it has been deployed.


"""Return message."""
import logging

logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """Lambda handler for getting the hello world message."""

    # The fourth element of the invoked function ARN is the region.
    region = context.invoked_function_arn.split(':')[3]

    logger.info("message: Hello from " + region)

    return {
        "message": "Hello from " + region
    }

Also create a Lambda function for the health check. It returns a value based on an environment variable (either “ok” or “fail”) to allow for easy testing:


"""Return health."""
import logging
import os

logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """Lambda handler for getting the health."""

    logger.info("status: " + os.environ['STATUS'])

    return {
        "status": os.environ['STATUS']
    }

Deploy both of these using an AWS Serverless Application Model (SAM) template. SAM is a CloudFormation extension that is optimized for serverless, and provides a standard way to create a complete serverless application. You can find the full helloworld-sam.yaml template in the blog-multi-region-serverless-service GitHub repo.

A few things to highlight:

  • You are using inline Swagger to define your API so you can substitute the current region in the x-amazon-apigateway-integration section.
  • Most of the Swagger template covers CORS to allow you to test this from a browser.
  • You are also using substitution to populate the environment variable used by the “Hello World” method with the region into which it is being deployed.

The Swagger allows you to use the same SAM template in both regions.
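As a sketch of that substitution (the authoritative template is the helloworld-sam.yaml file in the GitHub repo; the logical ID HelloWorldFunction used here is illustrative), the integration URI in the inline Swagger can reference the deployment region through the AWS::Region pseudo parameter:

```yaml
# Illustrative fragment of an inline Swagger definition in a SAM template.
# HelloWorldFunction is a hypothetical logical ID for the Lambda function.
paths:
  /helloworld:
    get:
      x-amazon-apigateway-integration:
        type: aws_proxy
        httpMethod: POST
        uri:
          Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${HelloWorldFunction.Arn}/invocations
```

Because the region is resolved at deployment time, the same template produces the correct integration URI in both us-east-1 and us-west-2.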

You can only use SAM from the AWS CLI, so do the following from the command prompt. First, deploy the SAM template in us-east-1 with the following commands, replacing “<your bucket in us-east-1>” with a bucket in your account:


> cd helloworld-api
> aws cloudformation package --template-file helloworld-sam.yaml --output-template-file /tmp/cf-helloworld-sam.yaml --s3-bucket <your bucket in us-east-1> --region us-east-1
> aws cloudformation deploy --template-file /tmp/cf-helloworld-sam.yaml --stack-name multiregionhelloworld --capabilities CAPABILITY_IAM --region us-east-1

Second, do the same in us-west-2:


> aws cloudformation package --template-file helloworld-sam.yaml --output-template-file /tmp/cf-helloworld-sam.yaml --s3-bucket <your bucket in us-west-2> --region us-west-2
> aws cloudformation deploy --template-file /tmp/cf-helloworld-sam.yaml --stack-name multiregionhelloworld --capabilities CAPABILITY_IAM --region us-west-2

The API was created with the default endpoint type of Edge Optimized. Switch it to Regional. In the Amazon API Gateway console, select the API that you just created and choose the wheel-icon to edit it.

API Gateway edit API settings

In the edit screen, select the Regional endpoint type and save the API. Do the same in both regions.

Grab the URL for the API in the console by navigating to the method in the prod stage.

API Gateway endpoint link

You can now test this with curl:


> curl https://2wkt1cxxxx.execute-api.us-west-2.amazonaws.com/prod/helloworld
{"message": "Hello from us-west-2"}

Write down the domain name for the URL in each region (for example, 2wkt1cxxxx.execute-api.us-west-2.amazonaws.com), as you need that later when you deploy the Route 53 setup.

Create the custom domain name

Next, create an Amazon API Gateway custom domain name endpoint. As part of using this feature, you must have a hosted zone and domain available to use in Route 53 as well as an SSL certificate that you use with your specific domain name.

You can create the SSL certificate by using AWS Certificate Manager. In the ACM console, choose Get started (if you have no existing certificates) or Request a certificate. Fill out the form with the domain name to use for the custom domain name endpoint, which is the same across the two regions:

Amazon Certificate Manager request new certificate

Go through the remaining steps and validate the certificate for each region before moving on.

You are now ready to create the endpoints. In the Amazon API Gateway console, choose Custom Domain Names, Create Custom Domain Name.

API Gateway create custom domain name

A few things to highlight:

  • The domain name is the same as what you requested earlier through ACM.
  • The endpoint configuration should be regional.
  • Select the ACM Certificate that you created earlier.
  • You need to create a base path mapping that connects back to your earlier API Gateway endpoint. Set the base path to v1 so you can version your API, and then select the API and the prod stage.

Choose Save. You should see your newly created custom domain name:

API Gateway custom domain setup

Note the value for Target Domain Name as you need that for the next step. Do this for both regions.

Deploy Route 53 setup

Use the global Route 53 service to provide DNS lookup for the Rest API, distributing the traffic in an active-active setup based on latency. You can find the full CloudFormation template in the blog-multi-region-serverless-service GitHub repo.

The template sets up health checks, for example, for us-east-1:


HealthcheckRegion1:
  Type: "AWS::Route53::HealthCheck"
  Properties:
    HealthCheckConfig:
      Port: "443"
      Type: "HTTPS_STR_MATCH"
      SearchString: "ok"
      ResourcePath: "/prod/healthcheck"
      FullyQualifiedDomainName: !Ref Region1HealthEndpoint
      RequestInterval: "30"
      FailureThreshold: "2"

Use the health check when you set up the record set and the latency routing, for example, for us-east-1:


Region1EndpointRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    Region: us-east-1
    HealthCheckId: !Ref HealthcheckRegion1
    SetIdentifier: "endpoint-region1"
    HostedZoneId: !Ref HostedZoneId
    Name: !Ref MultiregionEndpoint
    Type: CNAME
    TTL: 60
    ResourceRecords:
      - !Ref Region1Endpoint

You can create the stack by using the following link, copying in the domain names from the previous section, your existing hosted zone name, and the main domain name that is created (for example, helloworldapi.replacewithyourcompanyname.com):

The following screenshot shows what the parameters might look like:
Serverless multi region Route 53 health check

Specifically, the domain names that you collected earlier map as follows:

  • The domain names from the API Gateway “prod” stage go into Region1HealthEndpoint and Region2HealthEndpoint.
  • The target domain names from the custom domain names go into Region1Endpoint and Region2Endpoint.

Using the Rest API from server-side applications

You are now ready to use your setup. First, demonstrate the use of the API from server-side clients by using curl from the command line:


> curl https://helloworldapi.replacewithyourcompanyname.com/v1/helloworld/
{"message": "Hello from us-east-1"}

Testing failover of Rest API in browser

Here’s how you can use this from the browser and test the failover. Find all of the files for this test in the browser-client folder of the blog-multi-region-serverless-service GitHub repo.

Use this HTML file:


<!DOCTYPE HTML>
<html>
<head>
    <meta charset="utf-8"/>
    <meta http-equiv="X-UA-Compatible" content="IE=edge"/>
    <meta name="viewport" content="width=device-width, initial-scale=1"/>
    <title>Multi-Region Client</title>
</head>
<body>
<div>
   <h1>Test Client</h1>

    <p id="client_result">

    </p>

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
    <script src="settings.js"></script>
    <script src="client.js"></script>
</div>
</body>
</html>

The html file uses this JavaScript file to repeatedly call the API and print the history of messages:


var messageHistory = "";

(function call_service() {

   $.ajax({
      url: helloworldMultiregionendpoint+'v1/helloworld/',
      dataType: "json",
      cache: false,
      success: function(data) {
         messageHistory+="<p>"+data['message']+"</p>";
         $('#client_result').html(messageHistory);
      },
      complete: function() {
         // Schedule the next request when the current one's complete
         setTimeout(call_service, 10000);
      },
      error: function(xhr, status, error) {
         $('#client_result').html('ERROR: '+status);
      }
   });

})();

Also, make sure to update the settings in settings.js so that they match the multi-region endpoint for the Hello World API: var helloworldMultiregionendpoint = "https://helloworldapi.replacewithyourcompanyname.com/";

You can now open the HTML file in the browser (you can do this directly from the file system) and you should see something like the following screenshot:

Serverless multi region browser test

You can test failover by changing the environment variable in your health check Lambda function. In the Lambda console, select your health check function and scroll down to the Environment variables section. For the STATUS key, modify the value to fail.

Lambda update environment variable

You should see the region switch in the test client:

Serverless multi region broker test switchover

During an emulated failure like this, the browser might take some additional time to switch over due to connection keep-alive functionality. If you are using a browser like Chrome, you can kill all the connections to see a more immediate fail-over: chrome://net-internals/#sockets

Summary

You have implemented a simple way to do multi-regional serverless applications that fail over seamlessly between regions, whether accessed from the browser or from other applications and services. You achieved this by using the capabilities of Amazon Route 53 to do latency-based routing and health checks for failover. You unlocked the use of these features in a serverless application by leveraging the new regional endpoint feature of Amazon API Gateway.

The setup was fully scripted using CloudFormation, the AWS Serverless Application Model (SAM), and the AWS CLI, and it can be integrated into deployment tools to push the code across the regions to make sure it is available in all the needed regions. For more information about cross-region deployments, see Building a Cross-Region/Cross-Account Code Deployment Solution on AWS on the AWS DevOps blog.

Using Enhanced Request Authorizers in Amazon API Gateway

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/using-enhanced-request-authorizers-in-amazon-api-gateway/

Recently, AWS introduced a new type of authorizer in Amazon API Gateway, enhanced request authorizers. Previously, custom authorizers received only the bearer token included in the request and the ARN of the API Gateway method being called. Enhanced request authorizers receive all of the headers, query string, and path parameters as well as the request context. This enables you to make more sophisticated authorization decisions based on parameters such as the client IP address, user agent, or a query string parameter alongside the client bearer token.

Enhanced request authorizer configuration

From the API Gateway console, you can declare a new enhanced request authorizer by selecting the Request option as the AWS Lambda event payload:

Create enhanced request authorizer


Just like normal custom authorizers, API Gateway can cache the policy returned by your Lambda function. With enhanced request authorizers, however, you can also specify the values that form the unique key of a policy in the cache. For example, if your authorization decision is based on both the bearer token and the IP address of the client, both values should be part of the unique key in the policy cache. The identity source parameter lets you specify these values as mapping expressions:

  • The bearer token appears in the Authorization header.
  • The client IP address is stored in the sourceIp parameter of the request context.

Configure identity sources


Using enhanced request authorizers with Swagger

You can also define enhanced request authorizers in your Swagger (OpenAPI) definitions. In the following example, you can see that all of the options configured in the API Gateway console are available as custom extensions in the API definition. For example, the identitySource field is a comma-separated list of mapping expressions.

securityDefinitions:
  IpAuthorizer:
    type: "apiKey"
    name: "IpAuthorizer"
    in: "header"
    x-amazon-apigateway-authtype: "custom"
    x-amazon-apigateway-authorizer:
      authorizerResultTtlInSeconds: 300
      identitySource: "method.request.header.Authorization, context.identity.sourceIp"
      authorizerUri: "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:XXXXXXXXXX:function:py-ip-authorizer/invocations"
      type: "request"

After you have declared your authorizer in the security definitions section, you can use it in your API methods:

---
swagger: "2.0"
info:
  title: "request-authorizer-demo"
basePath: "/dev"
paths:
  /hello:
    get:
      security:
      - IpAuthorizer: []
...

Enhanced request authorizer Lambda functions

Enhanced request authorizer Lambda functions receive an event object that is similar to proxy integrations. It contains all of the information about a request, excluding the body.

{
    "methodArn": "arn:aws:execute-api:us-east-1:XXXXXXXXXX:xxxxxx/dev/GET/hello",
    "resource": "/hello",
    "requestContext": {
        "resourceId": "xxxx",
        "apiId": "xxxxxxxxx",
        "resourcePath": "/hello",
        "httpMethod": "GET",
        "requestId": "9e04ff18-98a6-11e7-9311-ef19ba18fc8a",
        "path": "/dev/hello",
        "accountId": "XXXXXXXXXXX",
        "identity": {
            "apiKey": "",
            "sourceIp": "58.240.196.186"
        },
        "stage": "dev"
    },
    "queryStringParameters": {},
    "httpMethod": "GET",
    "pathParameters": {},
    "headers": {
        "cache-control": "no-cache",
        "x-amzn-ssl-client-hello": "AQACJAMDAAAAAAAAAAAAAAAAAAAAAAAAAAAA…",
        "Accept-Encoding": "gzip, deflate",
        "X-Forwarded-For": "54.240.196.186, 54.182.214.90",
        "Accept": "*/*",
        "User-Agent": "PostmanRuntime/6.2.5",
        "Authorization": "hello"
    },
    "stageVariables": {},
    "path": "/hello",
    "type": "REQUEST"
}

The following enhanced request authorizer snippet is written in Python and compares the source IP address against a list of valid IP addresses. The comments in the code explain what happens in each step.

...
VALID_IPS = ["58.240.195.186", "201.246.162.38"]

def lambda_handler(event, context):

    # Read the client’s bearer token.
    jwtToken = event["headers"]["Authorization"]
    
    # Read the source IP address for the request from
    # the API Gateway context object.
    clientIp = event["requestContext"]["identity"]["sourceIp"]
    
    # Verify that the client IP address is allowed.
    # If it’s not valid, raise an exception to make sure
    # that API Gateway returns a 401 status code.
    if clientIp not in VALID_IPS:
        raise Exception('Unauthorized')
    
    # Only allow valid users in. (The userId value is
    # extracted from the bearer token in code omitted
    # from this snippet.)
    if not validate_jwt(userId):
        raise Exception('Unauthorized')

    # Use the values from the event object to populate the 
    # required parameters in the policy object.
    policy = AuthPolicy(userId, event["requestContext"]["accountId"])
    policy.restApiId = event["requestContext"]["apiId"]
    policy.region = event["methodArn"].split(":")[3]
    policy.stage = event["requestContext"]["stage"]
    
    # Use the scopes from the bearer token to make a 
    # decision on which methods to allow in the API.
    policy.allowMethod(HttpVerb.GET, '/hello')

    # Finally, build the policy.
    authResponse = policy.build()

    return authResponse
...
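The `AuthPolicy` helper above comes from blueprint code elided in the snippet. Its `build()` output is the standard authorizer response: a `principalId` plus an IAM policy document allowing or denying `execute-api:Invoke`. A minimal, dependency-free sketch of that structure (the function and argument names here are illustrative, not part of the blueprint):

```python
def build_allow_policy(user_id, region, account_id, rest_api_id, stage, verb, resource):
    """Build a minimal API Gateway authorizer response allowing one method.

    All arguments are illustrative; a real authorizer derives them from the
    incoming event, as the snippet above does.
    """
    # Method ARN format: arn:aws:execute-api:region:account:api-id/stage/VERB/resource
    method_arn = "arn:aws:execute-api:{}:{}:{}/{}/{}{}".format(
        region, account_id, rest_api_id, stage, verb, resource
    )
    return {
        "principalId": user_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": "Allow",
                    "Resource": [method_arn],
                }
            ],
        },
    }

# Example with hypothetical identifiers: allow GET /hello on a dev stage.
policy = build_allow_policy(
    "user-123", "us-east-1", "123456789012", "abcdef1234", "dev", "GET", "/hello"
)
```

API Gateway caches this returned document (keyed by the identity sources discussed earlier) and evaluates it against the ARN of the method being invoked.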

Conclusion

API Gateway customers build complex APIs, and authorization decisions often go beyond the simple properties in a JWT token. For example, users may be allowed to call the “list cars” endpoint but only with a specific subset of filter parameters. With enhanced request authorizers, you have access to all request parameters. You can centralize all of your application’s access control decisions in a Lambda function, making it easier to manage your application security.

Real World AWS Scalability

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/real-world-aws-scalability/

This is a guest post from Linda Hedges, Principal SA, High Performance Computing.

—–

One question we often hear is: “How well will my application scale on AWS?” For HPC workloads that cross multiple nodes, the cluster network is at the heart of scalability concerns. AWS uses advanced Ethernet networking technology which, like all things AWS, is designed for scale, security, high availability, and low cost. This network is exceptional and continues to benefit from Amazon’s rapid pace of development. For real world applications, all but the most demanding customers find that their applications run very well on AWS! Many have speculated that highly-coupled workloads require a name-brand network fabric to achieve good performance. For most applications, this is simply not the case. As with all clusters, the devil is in the details and some applications benefit from cluster tuning. This blog discusses the scalability of a representative, real-world application and provides a few performance tips for achieving excellent application performance using STARCCM+ as an example. For more HPC specific information, please see our website.

TLG Aerospace, a Seattle-based aerospace engineering services company, runs most of their STARCCM+ Computational Fluid Dynamics (CFD) cases on AWS. A detailed case study describing TLG Aerospace’s experience and the results they achieved can be found here. This blog uses one of their CFD cases as an example to understand AWS scalability. By leveraging Amazon EC2 Spot Instances, which allow customers to purchase unused capacity at significantly reduced rates, TLG Aerospace consistently achieves an 80% cost savings compared to their previous cloud and on-premises HPC cluster options. TLG Aerospace experiences solid value, terrific scale-up, and effectively limitless case throughput – all with no queue wait!

HPC applications such as Computational Fluid Dynamics (CFD) depend heavily on the application’s ability to efficiently scale compute tasks in parallel across multiple compute resources. Parallel performance is often evaluated by determining an application’s scale-up. Scale-up is a function of the number of processors used and is defined as the time it takes to complete a run on one processor, divided by the time it takes to complete the same run on the number of processors used for the parallel run.

As an example, consider an application with a time to completion, or turn-around time, of 32 hours when run on one processor. If the same application runs in one hour when run on 32 processors, then the scale-up is 32 hours on one processor / 1 hour on 32 processors, or 32, for 32 processors. Scaling is considered excellent when the scale-up is close to or equal to the number of processors on which the application is run.

If the same application took eight hours to complete on 32 processors, it would have a scale-up of only four: 32 (time on one processor) / 8 (time to complete on 32 processors). A scale-up of four on 32 processors is considered to be poor.
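The arithmetic above can be captured in a couple of lines (a plain sketch of the definition, not tied to any particular benchmark):

```python
def scale_up(time_on_one, time_on_n):
    """Scale-up: serial run time divided by parallel run time."""
    return time_on_one / time_on_n

# The worked examples from the text: a 32-hour job on one processor.
excellent = scale_up(32, 1)   # finishes in 1 hour on 32 processors: scale-up 32
poor = scale_up(32, 8)        # finishes in 8 hours on 32 processors: scale-up 4
```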

In addition to characterizing the scale-up of an application, scalability can be further characterized as “strong” or “weak” scaling. Note that the term “weak”, as used here, does not mean inadequate or bad but is a technical term facilitating the description of the type of scaling which is sought. Strong scaling offers a traditional view of application scaling, where a problem size is fixed and spread over an increasing number of processors. As more processors are added to the calculation, good strong scaling means that the time to complete the calculation decreases proportionally with increasing processor count.

In comparison, weak scaling does not fix the problem size used in the evaluation, but purposely increases the problem size as the number of processors also increases. The ratio of the problem size to the number of processors on which the case is run is held constant. For a CFD calculation, problem size most often refers to the size of the grid for a similar configuration.

An application demonstrates good weak scaling when the time to complete the calculation remains constant as the ratio of compute effort to the number of processors is held constant. Weak scaling offers insight into how an application behaves with varying case size.

Figure 1: Strong Scaling Demonstrated for a 16M Cell STARCCM+ CFD Calculation

Scale-up as a function of increasing processor count is shown in Figure 1 for the STARCCM+ case data provided by TLG Aerospace. This is a demonstration of “strong” scalability. The blue line shows what ideal or perfect scalability looks like. The purple triangles show the actual scale-up for the case as a function of increasing processor count. Excellent scaling is seen to well over 400 processors for this modest-sized 16M cell case, as evidenced by the closeness of these two curves. This example was run on Amazon EC2 c3.8xlarge instances, each with Intel E5-2680 processors, providing either 16 cores or 32 hyper-threaded processors.

AWS customers can choose to run their applications on either threads or cores. AWS provides access to the underlying hardware of our servers. For an application like STARCCM+, excellent linear scaling can be seen when using either threads or cores though testing of a specific case and application is always recommended. For this example, threads were chosen as the processing basis. Running on threads offered a few percent performance improvement when compared to running the same case on cores. Note that the number of available cores is equal to half of the number of available threads.

The scalability of real-world problems is directly related to the ratio of the compute-effort per-core to the time required to exchange data across the network. The grid size of a CFD case provides a strong indication of how much computational effort is required for a solution. Thus, larger cases will scale to even greater processor counts than for the modest size case discussed here.

Figure 2: Scale-up and Efficiency as a Function of Cells per Processor

STARCCM+ has been shown to demonstrate exceptional “weak” scaling on AWS. Although weak scaling is not demonstrated here, it is reflected in Figure 2 by plotting the cells per processor on the horizontal axis. The purple line in Figure 2 shows scale-up as a function of grid cells per processor. The vertical axis for scale-up is on the left-hand side of the graph, as indicated by the purple arrow. The green line in Figure 2 shows efficiency as a function of grid cells per processor. The vertical axis for efficiency is on the right-hand side of the graph, as indicated by the green arrow. Efficiency is defined as the scale-up divided by the number of processors used in the calculation.

Weak scaling is evidenced by considering the number of grid cells per processor as a measure of compute effort. Holding the grid cells per processor constant while increasing total case size demonstrates weak scaling. Weak scaling is not shown here because only one CFD case is used. Fewer grid cells per processor means reduced computational effort per processor. Maintaining efficiency while reducing cells per processor demonstrates the excellent strong scalability of STARCCM+ on AWS.

Efficiency remains at about 100% between approximately 250,000 grid cells per thread (or processor) and 100,000 grid cells per thread. Efficiency starts to fall off at about 100,000 grid cells per thread. An efficiency of at least 80% is maintained until 25,000 grid cells per thread. Decreasing grid cells per processor leads to decreased efficiency because the total computational effort per processor is reduced. Note that the perceived ability to achieve more than 100% efficiency (here, at about 150,000 cells per thread) is common in scaling studies, is case specific, and often related to smaller effects such as timing variation and memory caching.
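Efficiency, as defined above, is just scale-up divided by processor count. A quick sanity check of the definition (the numbers here reuse the earlier worked example, not the Figure 2 data):

```python
def efficiency(time_on_one, time_on_n, n_processors):
    """Parallel efficiency: scale-up divided by the processor count."""
    return (time_on_one / time_on_n) / n_processors

# A 32-hour serial job finishing in 1 hour on 32 processors: 100% efficient.
ideal = efficiency(32, 1, 32)      # 1.0
# The same job finishing in 8 hours on 32 processors (scale-up of 4): 12.5%.
poor = efficiency(32, 8, 32)       # 0.125
```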

Figure 3: Cost per Run Based on Spot Pricing ($0.35 per hour for c3.8xlarge) as a Function of Turn-around Time

Plots of scale-up and efficiency offer an understanding of how a case or application scales. What really matters to most HPC users, though, is case turn-around time and cost. A plot of turn-around time versus CPU cost for this case is shown in Figure 3. As the number of threads is increased, the total turn-around time decreases, but the inefficiency also increases, which leads to increased cost. The cost shown is based on a typical Amazon Spot price for the c3.8xlarge and only includes the computational costs. Small costs will also be incurred for data storage.

Minimum cost and turn-around time were achieved with approximately 100,000 cells per thread. Many users will choose a cell count per thread to achieve the lowest possible cost. Others may choose a cell count per thread to achieve the fastest turn-around time. If a run is desired in one-third the time of the lowest price point, it can be achieved with approximately 25,000 cells per thread. (Note that many users run STARCCM+ with significantly fewer cells per thread than this.) While this increases the compute cost, other concerns, such as license costs or schedules, can be overriding factors. For this 16M cell case, the added inefficiency results in an increase in run price from $3 to $4 for computing. Many find the reduced turn-around time well worth the price of the additional instances.
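The cost trade-off in Figure 3 follows from simple arithmetic: at a fixed hourly price, lower efficiency means more instance-hours for the same work. A sketch of that calculation (the $0.35/hour Spot price comes from the Figure 3 caption; the thread counts and run times below are illustrative, not TLG Aerospace’s measurements):

```python
SPOT_PRICE_PER_HOUR = 0.35    # c3.8xlarge Spot price from the Figure 3 caption
THREADS_PER_INSTANCE = 32     # hyper-threaded processors per c3.8xlarge

def run_cost(n_threads, turnaround_hours):
    """Compute cost: instances needed, times run hours, times hourly price."""
    n_instances = n_threads / THREADS_PER_INSTANCE
    return n_instances * turnaround_hours * SPOT_PRICE_PER_HOUR

# Illustrative: a case taking 12 hours on 2 instances (64 threads)...
baseline = run_cost(64, 12)
# ...versus doubling to 4 instances but, with imperfect scaling,
# finishing in 7 hours rather than the ideal 6: faster, but pricier.
faster = run_cost(128, 7)
```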

As with any cluster, good performance requires attention to the details of the cluster set up. While AWS allows for the quick set up and take down of clusters, performance is affected by many of the specifics in that set up. This blog provides some examples.

On AWS, a placement group is a grouping of instances within a single Availability Zone that allows for low latency between the instances. Placement groups are recommended for all applications where low latency is a requirement, and a placement group was used to achieve the best performance from STARCCM+. More on placement groups can be found in our docs.

Amazon Linux is a version of Linux maintained by Amazon. The distribution evolved from Red Hat Linux (RHEL) and is designed to provide a stable, secure, and highly performant environment. Amazon Linux is optimized to run on AWS and offers excellent performance for running HPC applications. For the case presented here, the operating system used was Amazon Linux. Other Linux distributions are also performant. However, it is strongly recommended that for Linux HPC applications, a minimum of the version 3.10 Linux kernel be used to be sure of using the latest Xen libraries. See our Amazon Linux page to learn more.

Amazon Elastic Block Store (EBS) is a persistent block-level storage service often used for cluster storage on AWS. EBS provides reliable block-level storage volumes that can be attached to (and removed from) an Amazon EC2 instance. A standard EBS general purpose SSD (gp2) volume is all that is required to meet the needs of STARCCM+ and was used here. Other HPC applications may require faster I/O to prevent data writes from becoming a bottleneck to turn-around speed. For these applications, other storage options exist. A guide to Amazon storage is found here.

As mentioned previously, STARCCM+, like many other CFD solvers, runs well on both threads and cores. Hyper-Threading can improve the performance of some MPI applications depending on the application, the case, and the size of the workload allocated to each thread; it may also slow performance. The one-size-fits-all nature of static cluster compute environments means that most HPC clusters disable hyper-threading. To first order, computationally intensive workloads are believed to run best on cores, while those that are I/O bound may run best on threads. Again, a few percent increase in performance was observed for this case by running with threads. If there is no time to evaluate the effect of hyper-threading on case performance, then it is recommended that hyper-threading be disabled. When hyper-threading is disabled, it is important to bind each process to a designated CPU. This is called processor or CPU affinity, and it almost universally improves performance over unpinned cores for computationally intensive workloads.

Occasionally, an application will include frequent time measurement in their code; perhaps this is done for performance tuning. Under these circumstances, performance can be improved by setting the clock source to the TSC (Time Stamp Counter). This tuning was not required for this application but is mentioned here for completeness.

When evaluating an application, it is always recommended that a meaningful real-world case be used. A case that is too big or too small won’t reflect the performance and scalability achievable in everyday operation. The only way to know positively how an application will perform on AWS is to try it!

AWS offers solid strong scaling and exceptional weak scaling. Good performance can be achieved on AWS for most applications. In addition to low cost and quick turn-around time, important considerations for HPC also include throughput and availability. AWS offers effectively limitless throughput, security, cost savings, and high availability, making queues a “thing of the past”. A long queue wait makes for a very long case turn-around time, regardless of the scale.

Using API Gateway with VPC endpoints via AWS Lambda

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/using-api-gateway-with-vpc-endpoints-via-aws-lambda/

To isolate critical parts of their app’s architecture, customers often rely on Virtual Private Cloud (VPC) and private subnets. Today, Amazon API Gateway cannot directly integrate with endpoints that live within a VPC without internet access. However, it is possible to proxy calls to your VPC endpoints using AWS Lambda functions.

This post guides you through the setup necessary to configure API Gateway, Lambda, and your VPC to proxy requests from API Gateway to HTTP endpoints in your VPC private subnets. With this solution, you can use API Gateway for authentication, authorization, and throttling before a request reaches your HTTP endpoint.

For this example, we have written a very basic Express application that accepts GET and POST requests on its root resource (“/”). The application is deployed on an EC2 instance within a private subnet of a VPC. We use a Lambda function that connects to our private subnet to proxy requests from API Gateway to the Express HTTP endpoint. The CloudFormation template below deploys the API Gateway API and the AWS Lambda function, and sets the correct permissions on both resources. The template requires four parameters:

  • The IP address or DNS name of the instance running your express application (for example, 10.0.1.16)
  • The port used by the Express app (for example, 8080)
  • The security group of the EC2 instance (for example, sg-xx3xx6x0)
  • The subnet ID of your VPC’s private subnet (for example, subnet-669xx03x)

Click the link below to deploy the CloudFormation template; the rest of this blog post dives deeper into each component of the architecture.



The Express application

We have written a very simple web service using Express and Node.js. The service accepts GET and POST requests to its root resource and responds with a JSON object. You can use the sample code below to start the application on your instance. Before you create the application, make sure that you have installed Node.js on your instance.

Create a new folder on your web server called vpcproxy. In the new folder, create a new file called index.js and paste the code below in the file.

var express = require('express');
var bodyParser = require('body-parser');

var app = express();
app.use(bodyParser.json());

app.get('/', function(req, res) {
        if (req.query.error) {
                res.status(403).json({error: "Random error"}).end();
                return;
        }
        res.json({ message: 'Hello World!' });
});

app.post('/', function(req, res) {
        console.log("post");
        console.log(req.body);
        res.json(req.body).end();
});
app.listen(8080, function() {
        console.log("app started");
});

To install the required dependencies, from the vpcproxy folder, run the following command: npm install express body-parser

After the dependencies are installed, you can start the application by running: node index.js

API Gateway configuration

The API Gateway API declares all of the same methods that your Express application supports. Each method is configured to transform requests into a JSON structure that AWS Lambda can understand, and responses are generated using mapping templates from the Lambda output.

The first step is to transform a request into an event for Lambda. The mapping template below captures all of the request information and includes the configuration of the backend endpoint that the Lambda function should interact with. This template is applied to all requests for any endpoint.

#set($allParams = $input.params())
{
  "requestParams" : {
    "hostname" : "10.0.1.16",
    "port" : "8080",
    "path" : "$context.resourcePath",
    "method" : "$context.httpMethod"
  },
  "bodyJson" : $input.json('$'),
  "params" : {
    #foreach($type in $allParams.keySet())
      #set($params = $allParams.get($type))
      "$type" : {
        #foreach($paramName in $params.keySet())
          "$paramName" : "$util.escapeJavaScript($params.get($paramName))"
          #if($foreach.hasNext),#end
        #end
      }
      #if($foreach.hasNext),#end
    #end
  },
  "stage-variables" : {
    #foreach($key in $stageVariables.keySet())
      "$key" : "$util.escapeJavaScript($stageVariables.get($key))"
      #if($foreach.hasNext),#end
    #end
  },
  "context" : {
    "account-id" : "$context.identity.accountId",
    "api-id" : "$context.apiId",
    "api-key" : "$context.identity.apiKey",
    "authorizer-principal-id" : "$context.authorizer.principalId",
    "caller" : "$context.identity.caller",
    "cognito-authentication-provider" : "$context.identity.cognitoAuthenticationProvider",
    "cognito-authentication-type" : "$context.identity.cognitoAuthenticationType",
    "cognito-identity-id" : "$context.identity.cognitoIdentityId",
    "cognito-identity-pool-id" : "$context.identity.cognitoIdentityPoolId",
    "http-method" : "$context.httpMethod",
    "stage" : "$context.stage",
    "source-ip" : "$context.identity.sourceIp",
    "user" : "$context.identity.user",
    "user-agent" : "$context.identity.userAgent",
    "user-arn" : "$context.identity.userArn",
    "request-id" : "$context.requestId",
    "resource-id" : "$context.resourceId",
    "resource-path" : "$context.resourcePath"
  }
}

After the Lambda function has processed the request and response, API Gateway is configured to transform the output into an HTTP response. The output from the Lambda function is a JSON structure that contains the response status code, body, and headers:

{  
   "status":200,
   "bodyJson":{  
      "message":"Hello World!"
   },
   "headers":{  
      "x-powered-by":"Express",
      "content-type":"application/json; charset=utf-8",
      "content-length":"26",
      "etag":"W/\"1a-r2dz039gtg5rjLoq32eF4w\"",
      "date":"Wed, 25 May 2016 18:41:22 GMT",
      "connection":"keep-alive"
   }
}

These values are then mapped in API Gateway using header mapping expressions and mapping templates for the response body.

First, all known headers are mapped:

"responseParameters": {
    "method.response.header.etag": "integration.response.body.headers.etag",
    "method.response.header.x-powered-by": "integration.response.body.headers.x-powered-by",
    "method.response.header.date": "integration.response.body.headers.date",
    "method.response.header.content-length": "integration.response.body.headers.content-length"
}

Then the body is extracted from Lambda’s output JSON using a very simple body mapping template: $input.json('$.bodyJson')

Response codes other than 200 are handled using regular expressions to match the status code in API Gateway (for example \{\"status\"\:400.*), and the parseJson method of the $util object to extract the response body.

#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))
$errorMessageObj.bodyJson
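To see these pieces together outside of the service, the following plain-Node.js sketch simulates the flow: the Lambda function’s stringified output lands in the errorMessage property, the selection pattern matches on the status code, and the body is recovered by parsing the string. Here JSON.parse stands in for $util.parseJson, and the 403 “Random error” body comes from the Express application shown earlier.

```javascript
// Simulating API Gateway's error handling locally. JSON.parse stands in
// for $util.parseJson; the regex mirrors the selection pattern used in
// the integration response configuration.
const lambdaError = JSON.stringify({
  status: 403,
  bodyJson: JSON.stringify({ error: "Random error" })
});

// API Gateway wraps a failed Lambda invocation's output in errorMessage
const integrationResponse = { errorMessage: lambdaError };

// Selection pattern for 403 responses
const pattern = /\{"status":403.*/;
console.log(pattern.test(integrationResponse.errorMessage)); // true

// Equivalent of: $util.parseJson($input.path('$.errorMessage'))
const errorMessageObj = JSON.parse(integrationResponse.errorMessage);
console.log(errorMessageObj.bodyJson); // {"error":"Random error"}
```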

All of this configuration is included, in Swagger format, in the CloudFormation template of this tutorial. The Swagger definition is generated dynamically from the four template parameters using the Fn::Join function.

The AWS Lambda function

The proxy Lambda function is written in JavaScript. It captures all of the request details forwarded by API Gateway, creates a similar request using the standard Node.js http package, and forwards it to the private endpoint. Responses from the private endpoint are encapsulated in a JSON object, which API Gateway turns into an HTTP response. The private endpoint configuration is passed to the Lambda function from API Gateway in the event model. The Lambda function code is also included in the CloudFormation template.

var http = require('http');

exports.myHandler = function(event, context, callback) {
    // setup request options and parameters
    var options = {
      host: event.requestParams.hostname,
      port: event.requestParams.port,
      path: event.requestParams.path,
      method: event.requestParams.method
    };
    
    // if you have headers set them otherwise set the property to an empty map
    if (event.params && event.params.header && Object.keys(event.params.header).length > 0) {
        options.headers = event.params.header
    } else {
        options.headers = {};
    }
    
    // Force the user agent and the "forwarded for" headers because we want to
    // take them from the API Gateway context rather than letting Node.js set the Lambda ones
    options.headers["User-Agent"] = event.context["user-agent"];
    options.headers["X-Forwarded-For"] = event.context["source-ip"];
    // if there is no content type, default it to application/json
    // (Test invoke in the API Gateway console does not pass a value)
    if (!options.headers["Content-Type"]) {
        options.headers["Content-Type"] = "application/json";
    }
    // build the query string
    if (event.params && event.params.querystring && Object.keys(event.params.querystring).length > 0) {
        var queryString = generateQueryString(event.params.querystring);
        
        if (queryString !== "") {
            options.path += "?" + queryString;
        }
    }
    
    // Define the response handler that reads the backend response and generates
    // a JSON output for API Gateway. The JSON output is parsed by the mapping
    // templates. (A separate name avoids shadowing the handler's callback parameter.)
    var responseHandler = function(response) {
        var responseString = '';

        // Another chunk of data has been received, so append it to responseString
        response.on('data', function (chunk) {
            responseString += chunk;
        });

        // The whole response has been received
        response.on('end', function () {
            // Parse the response to JSON
            var jsonResponse = JSON.parse(responseString);

            var output = {
                status: response.statusCode,
                bodyJson: jsonResponse,
                headers: response.headers
            };

            // if the response was a 200 we can just pass the entire JSON back to
            // API Gateway for parsing. If the backend returned a non-200 status
            // then we return it as an error
            if (response.statusCode == 200) {
                context.succeed(output);
            } else {
                // set the output JSON as a string inside the body property
                output.bodyJson = responseString;
                // stringify the whole thing again so that we can read it with
                // the $util.parseJson method in the mapping templates
                context.fail(JSON.stringify(output));
            }
        });
    };

    var req = http.request(options, responseHandler);
    
    if (event.bodyJson && event.bodyJson !== "") {
        req.write(JSON.stringify(event.bodyJson));
    }
    
    req.on('error', function(e) {
        console.log('problem with request: ' + e.message);
        context.fail(JSON.stringify({
            status: 500,
            bodyJson: JSON.stringify({ message: "Internal server error" })
        }));
    });
    
    req.end();
}

function generateQueryString(params) {
    var str = [];
    for(var p in params) {
        if (params.hasOwnProperty(p)) {
            str.push(encodeURIComponent(p) + "=" + encodeURIComponent(params[p]));
        }
    }
    return str.join("&");
}
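As a quick illustration of the helper above (repeated here so the snippet is self-contained), the querystring map that API Gateway passes in the event is flattened into a URL-encoded string:

```javascript
// Self-contained copy of the generateQueryString helper from the
// Lambda function above, with a usage example.
function generateQueryString(params) {
    var str = [];
    for (var p in params) {
        if (params.hasOwnProperty(p)) {
            str.push(encodeURIComponent(p) + "=" + encodeURIComponent(params[p]));
        }
    }
    return str.join("&");
}

console.log(generateQueryString({ error: "true", "page size": "10" }));
// error=true&page%20size=10
```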

Conclusion

You can use Lambda functions to proxy HTTP requests from API Gateway to an HTTP endpoint within a VPC without Internet access. This allows you to keep your EC2 instances and applications completely isolated from the internet while still exposing them via API Gateway. By using API Gateway to front your existing endpoints, you can configure authentication and authorization rules as well as throttling rules to limit the traffic that your backend receives.

If you have any questions or suggestions, please comment below.

Amazon API Gateway mapping improvements

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/amazon-api-gateway-mapping-improvements/

Yesterday we announced the new Swagger import API. You may have also noticed a new first-time user experience in the API Gateway console that automatically creates a sample Pet Store API and guides you through API Gateway features. That is not all we’ve been doing:

Over the past few weeks, we’ve made mapping requests and responses easier. This post takes you through the new features we introduced and gives practical examples of how to use them.

Multiple 2xx responses

We heard from many of you that you want to return more than one 2xx response code from your API. You can now configure Amazon API Gateway to return multiple 2xx response codes, each with its own header and body mapping templates. For example, when creating resources, you can return 201 for “created” and 202 for “accepted”.

Context variables in parameter mapping

We have added the ability to reference context variables from the parameter mapping fields. For example, you can include the identity principal or the stage name from the context variable in a header to your HTTP backend. To send the principalId returned by a custom authorizer in an X-User-ID header to your HTTP backend, use this mapping expression:

context.authorizer.principalId

For more information, see the context variable in the Mapping Template Reference page of the documentation.

Access to raw request body

Mapping templates in API Gateway help you transform incoming requests and outgoing responses from your API’s backend. The $input variable in mapping templates enables you to read values from a JSON body and its properties. You can now also access the raw payload, whether it’s JSON, XML, or a string, using the $input.body property.

For example, if you have configured your API to receive raw data and pass it to Amazon Kinesis using an AWS service proxy integration, you can use the body property to read the incoming body and the $util variable to encode it for an Amazon Kinesis stream.

{
  "Data" : "$util.base64Encode($input.body)",
  "PartitionKey" : "key",
  "StreamName" : "Stream"
}
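$util.base64Encode performs standard Base64 encoding of the payload. When crafting test records locally, the Node.js equivalent looks like this (the sample body is made up for illustration):

```javascript
// Local stand-in for $util.base64Encode, useful when preparing test
// payloads for the Kinesis service proxy shown above.
const rawBody = '{"sensor":"thermostat-1","reading":72}'; // illustrative payload

const encoded = Buffer.from(rawBody).toString('base64');
console.log(encoded);

// Kinesis consumers decode the Data field back to the original bytes:
console.log(Buffer.from(encoded, 'base64').toString() === rawBody); // true
```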

JSON parse function

We have also added a parseJson() method to the $util object in mapping templates. The parseJson() method parses stringified JSON input into its object representation. You can manipulate this object representation in the mapping templates. For example, if you need to return an error from AWS Lambda, you can now return it like this:

exports.handler = function(event, context) {
    var myErrorObj = {
        errorType : "InternalFailure",
        errorCode : 9130,
        detailedMessage : "This is my error message",
        stackTrace : ["foo1", "foo2", "foo3"],
        data : {
            numbers : [1, 2, 3]
        }
    }
    
    context.fail(JSON.stringify(myErrorObj));
};

Then, you can use the parseJson() method in the mapping template to extract values from the error and return a meaningful message from your API, like this:

#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))
#set ($bodyObj = $util.parseJson($input.body))

{
    "type" : "$errorMessageObj.errorType",
    "code" : $errorMessageObj.errorCode,
    "message" : "$errorMessageObj.detailedMessage",
    "someData" : "$errorMessageObj.data.numbers[2]"
}

This will produce a response that looks like this:

{
    "type" : "InternalFailure",
    "code" : 9130,
    "message" : "This is my error message",
    "someData" : "3"
}
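You can verify this round trip locally. The sketch below mimics the template’s extraction in plain Node.js, with JSON.parse standing in for $util.parseJson (the template engine itself is not involved here):

```javascript
// What context.fail(JSON.stringify(myErrorObj)) hands to API Gateway:
const errorMessage = JSON.stringify({
  errorType: "InternalFailure",
  errorCode: 9130,
  detailedMessage: "This is my error message",
  stackTrace: ["foo1", "foo2", "foo3"],
  data: { numbers: [1, 2, 3] }
});

// Equivalent of: $util.parseJson($input.path('$.errorMessage'))
const errorMessageObj = JSON.parse(errorMessage);

// The same fields the mapping template selects:
const response = {
  type: errorMessageObj.errorType,
  code: errorMessageObj.errorCode,
  message: errorMessageObj.detailedMessage,
  someData: String(errorMessageObj.data.numbers[2]) // quoted in the template, so a string
};

console.log(JSON.stringify(response));
```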

Conclusion

We continuously release new features and improvements to Amazon API Gateway. Your feedback is extremely important and guides our priorities. Keep sending us feedback on the API Gateway forum and on social media.

Using Amazon API Gateway as a proxy for DynamoDB

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/using-amazon-api-gateway-as-a-proxy-for-dynamodb/

Andrew Baird, AWS Solutions Architect
Amazon API Gateway has a feature that enables customers to create their own API definitions directly in front of an AWS service API. This tutorial will walk you through an example of doing so with Amazon DynamoDB.
Why use API Gateway as a proxy for AWS APIs?
Many AWS services provide APIs that applications depend on directly for their functionality. Examples include:

Amazon DynamoDB – An API-accessible NoSQL database.
Amazon Kinesis – Real-time ingestion of streaming data via API.
Amazon CloudWatch – API-driven metrics collection and retrieval.

If AWS already exposes internet-accessible APIs, why would you want to use API Gateway as a proxy for them? Why not allow applications to just directly depend on the AWS service API itself?
Here are a few great reasons to do so:

You might want to enable your application to integrate with very specific functionality that an AWS service provides, without the need to manage access keys and secret keys that AWS APIs require.
There may be application-specific restrictions you’d like to place on the API calls being made to AWS services that you would not be able to enforce if clients integrated with the AWS APIs directly.
You may get additional value out of using a different HTTP method from the method that is used by the AWS service. For example, creating a GET request as a proxy in front of an AWS API that requires an HTTP POST so that the response will be cached.
You can accomplish the above things without having to introduce a server-side application component that you need to manage or that could introduce increased latency. Even a lightweight Lambda function that calls a single AWS service API is code that you do not need to create or maintain if you use API Gateway directly as an AWS service proxy.

Here, we will walk you through a hypothetical scenario that shows how to create an Amazon API Gateway AWS service proxy in front of Amazon DynamoDB.
The Scenario
You would like the ability to add a public Comments section to each page of your website. To achieve this, you’ll need to accept and store comments and you will need to retrieve all of the comments posted for a particular page so that the UI can display them.
We will show you how to implement this functionality by creating a single table in DynamoDB, and creating the two necessary APIs using the AWS service proxy feature of Amazon API Gateway.
Defining the APIs
The first step is to map out the APIs that you want to create. For both APIs, we’ve linked to the DynamoDB API documentation. Take note of how the API you define below differs in request/response details from the native DynamoDB APIs.
Post Comments
First, you need an API that accepts user comments and stores them in the DynamoDB table. Here’s the API definition you’ll use to implement this functionality:
Resource: /comments
HTTP Method: POST
HTTP Request Body:
{
    "pageId": "example-page-id",
    "userName": "ExampleUserName",
    "message": "This is an example comment to be added."
}
After you create it, this API becomes a proxy in front of the DynamoDB API PutItem.
Get Comments
Second, you need an API to retrieve all of the comments for a particular page. Use the following API definition:
Resource: /comments/{pageId}
HTTP Method: GET
The curly braces around {pageId} in the URI path definition indicate that pageId will be treated as a path variable within the URI.
This API will be a proxy in front of the DynamoDB API Query. Here, you will notice the benefit: your API uses the GET method, while the DynamoDB Query API requires an HTTP POST and does not include any cache headers in the response.
Creating the DynamoDB Table
First, navigate to the DynamoDB console and choose Create Table. Next, name the table Comments, with commentId as the Primary Key. Leave the rest of the default settings for this example, and choose Create.

After this table is populated with comments, you will want to retrieve them based on the page that they’ve been posted to. To do this, create a secondary index on an attribute called pageId. This secondary index enables you to query the table later for all comments posted to a particular page. When viewing your table, choose the Indexes tab and choose Create index.

When querying this table, you only want to retrieve the pieces of information that matter to the client: in this case, the pageId, the userName, and the message itself. Any other data you decide to store with each comment does not need to be retrieved from the table for the publicly accessible API. Type the following information into the form to capture this and choose Create index:

Creating the APIs
Now, using the AWS service proxy feature of Amazon API Gateway, we’ll demonstrate how to create each of the APIs you defined. Navigate to the API Gateway service console, and choose Create API. In API name, type CommentsApi and type a short description. Finally, choose Create API.

Now you’re ready to create the specific resources and methods for the new API.
Creating the Post Comments API
In the editor screen, choose Create Resource. To match the description of the Post Comments API above, provide the appropriate details and create the first API resource:

Now, with the resource created, set up what happens when the resource is called with the HTTP POST method. Choose Create Method and select POST from the drop down. Click the checkmark to save.
To map this API to the DynamoDB API needed, next to Integration type, choose Show Advanced and choose AWS Service Proxy.
Here, you’re presented with options that define which specific AWS service API will be executed when this API is called, and in which region. Fill out the information as shown, matching the DynamoDB table you created a moment ago. Before you proceed, create an AWS Identity and Access Management (IAM) role that has permission to call the DynamoDB API PutItem for the Comments table; this role must have a service trust relationship to API Gateway. For more information on IAM policies and roles, see the Overview of IAM Policies topic.
After inputting all of the information as shown, choose Save.

If you were to deploy this API right now, you would have a working service proxy API that only wraps the DynamoDB PutItem API. But, for the Post Comments API, you’d like the client to be able to use a more contextual JSON object structure. Also, you’d like to be sure that the DynamoDB API PutItem is called precisely the way you expect it to be called. This eliminates client-driven error responses and removes the possibility that the new API could be used to call another DynamoDB API or table that you do not intend to allow.
You accomplish this by creating a mapping template. This enables you to define the request structure that your API clients will use, and then transform those requests into the structure that the DynamoDB API PutItem requires.
From the Method Execution screen, choose Integration Request:

In the Integration Request screen expand the Mapping Templates section and choose Add mapping template. Under Content-Type, type application/json and then choose the check mark:

Next, choose the pencil icon next to Input passthrough and choose Mapping template from the dropdown. Now, you’ll be presented with a text box where you create the mapping template. For more information on creating mapping templates, see API Gateway Mapping Template Reference.
The mapping template will be as follows. We’ll walk through what’s important about it next:
{
    "TableName": "Comments",
    "Item": {
        "commentId": {
            "S": "$context.requestId"
        },
        "pageId": {
            "S": "$input.path('$.pageId')"
        },
        "userName": {
            "S": "$input.path('$.userName')"
        },
        "message": {
            "S": "$input.path('$.message')"
        }
    }
}
This mapping template creates the JSON structure required by the DynamoDB PutItem API. The entire mapping template is static. The three input variables are referenced from the request JSON using the $input variable and each comment is stamped with a unique identifier. This unique identifier is the commentId and is extracted directly from the API request’s $context variable. This $context variable is set by the API Gateway service itself. To review other parameters that are available to a mapping template, see API Gateway Mapping Template Reference. You may decide that including information like sourceIp or other headers could be valuable to you.
With this mapping template, no matter how your API is called, the only variance from the DynamoDB PutItem API call will be the values of pageId, userName, and message. Clients of your API will not be able to dictate which DynamoDB table is being targeted (because “Comments” is statically listed), and they will not have any control over the object structure that is specified for each item (each input variable is explicitly declared a string to the PutItem API).
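For a concrete picture of what the template emits, this plain-JavaScript sketch builds the same PutItem request. The request ID is a made-up placeholder for the value API Gateway supplies via $context.requestId; in the real API, the template engine produces this object.

```javascript
// Hand-built equivalent of the PutItem request the mapping template produces.
// "req-0001" is a placeholder for the API Gateway $context.requestId value.
function buildPutItemParams(requestId, body) {
  return {
    TableName: "Comments", // statically pinned; clients cannot retarget the table
    Item: {
      commentId: { S: requestId },
      pageId:    { S: body.pageId },
      userName:  { S: body.userName },
      message:   { S: body.message } // every attribute is explicitly a string ("S")
    }
  };
}

const params = buildPutItemParams("req-0001", {
  pageId: "breaking-news-story-01-18-2016",
  userName: "Just Saying Thank You",
  message: "I really enjoyed this story!!"
});
console.log(params.Item.pageId.S); // breaking-news-story-01-18-2016
```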
Back in the Method Execution pane click TEST.
Create an example Request Body that matches the API definition documented above and then choose Test. For example, your request body could be:
{
    "pageId": "breaking-news-story-01-18-2016",
    "userName": "Just Saying Thank You",
    "message": "I really enjoyed this story!!"
}
Navigate to the DynamoDB console and view the Comments table to show that the request really was successfully processed:

Great! Try including a few more sample items in the table to further test the Get Comments API.
If you deployed this API, you would be all set with a public API that has the ability to post public comments and store them in DynamoDB. For some use cases you may only want to collect data through a single API like this: for example, when collecting customer and visitor feedback, or for a public voting or polling system. But for this use case, we’ll demonstrate how to create another API to retrieve records from a DynamoDB table as well. Many of the details are similar to the process above.
Creating the Get Comments API
Return to the Resources view, choose the /comments resource you created earlier and choose Create Resource, like before.
This time, include a request path parameter to represent the pageId of the comments being retrieved. Input the following information and then choose Create Resource:

In Resources, choose your new /{pageId} resource and choose Create Method. The Get Comments API will be retrieving data from our DynamoDB table, so choose GET for the HTTP method.
In the method configuration screen choose Show advanced and then select AWS Service Proxy. Fill out the form to match the following. Make sure to use the appropriate AWS Region and IAM execution role; these should match what you previously created. Finally, choose Save.

Modify the Integration Request and create a new mapping template. This will transform the simple pageId path parameter on the GET request to the needed DynamoDB Query API, which requires an HTTP POST. Here is the mapping template:
{
    "TableName": "Comments",
    "IndexName": "pageId-index",
    "KeyConditionExpression": "pageId = :v1",
    "ExpressionAttributeValues": {
        ":v1": {
            "S": "$input.params('pageId')"
        }
    }
}
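The template above emits a standard DynamoDB Query request against the secondary index. A plain-JavaScript sketch of the same object (illustrative only; in the API, the template engine produces it from the pageId path parameter):

```javascript
// Equivalent of the Query request produced by the mapping template above.
function buildQueryParams(pageId) {
  return {
    TableName: "Comments",
    IndexName: "pageId-index", // the secondary index created earlier
    KeyConditionExpression: "pageId = :v1",
    ExpressionAttributeValues: { ":v1": { S: pageId } }
  };
}

console.log(JSON.stringify(buildQueryParams("breaking-news-story-01-18-2016")));
```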
Now test your mapping template. Navigate to the Method Execution pane and choose the Test icon on the left. Provide one of the pageId values that you’ve inserted into your Comments table and choose Test.

You should see a response like the following; it is directly passing through the raw DynamoDB response:

Now you’re close! All you need to do before you deploy your API is map the raw DynamoDB response to a JSON object structure similar to the one you defined for the Post Comments API.
This will work very similarly to the mapping template changes you already made. But you’ll configure this change on the Integration Response page of the console by editing the default mapping response’s mapping template.
Navigate to Integration Response and expand the 200 response code by choosing the arrow on the left. In the 200 response, expand the Mapping Templates section. In Content-Type choose application/json then choose the pencil icon next to Output Passthrough.

Now, create a mapping template that extracts the relevant pieces of the DynamoDB response and places them into a response structure that matches our use case:
#set($inputRoot = $input.path('$'))
{
    "comments": [
        #foreach($elem in $inputRoot.Items) {
            "commentId": "$elem.commentId.S",
            "userName": "$elem.userName.S",
            "message": "$elem.message.S"
        }#if($foreach.hasNext),#end
        #end
    ]
}
Now choose the check mark to save the mapping template, and choose Save to save this default integration response. Return to the Method Execution page and test your API again. You should now see a formatted response.
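The transformation this template performs is easy to picture in plain JavaScript: each DynamoDB item arrives with typed attribute values (the "S" wrapper), and the template unwraps the three public fields. A local illustration, using a made-up item:

```javascript
// Unwrapping a raw DynamoDB Query response into the public comment shape,
// mirroring the #foreach loop in the mapping template above.
function toComments(dynamoResponse) {
  return {
    comments: dynamoResponse.Items.map(function (item) {
      return {
        commentId: item.commentId.S,
        userName:  item.userName.S,
        message:   item.message.S
      };
    })
  };
}

const sample = {
  Items: [{
    commentId: { S: "req-0001" }, // placeholder ID for illustration
    pageId:    { S: "breaking-news-story-01-18-2016" },
    userName:  { S: "Just Saying Thank You" },
    message:   { S: "I really enjoyed this story!!" }
  }]
};
console.log(toComments(sample).comments[0].userName); // Just Saying Thank You
```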
Now you have two working APIs that are ready to deploy! See our documentation to learn about how to deploy API stages.
But, before you deploy your API, here are some additional things to consider:

Authentication: you may want to require that users authenticate before they can leave comments. Amazon API Gateway can enforce IAM authentication for the APIs you create. To learn more, see Amazon API Gateway Access Permissions.
DynamoDB capacity: you may want to provision an appropriate amount of capacity to your Comments table so that your costs and performance reflect your needs.
Commenting features: Depending on how robust you’d like commenting to be on your site, you might like to introduce changes to the APIs described here. Examples are attributes that track replies or timestamp attributes.

Conclusion
Now you’ve got a fully functioning public API to post and retrieve public comments for your website. This API communicates directly with the Amazon DynamoDB API without you having to manage a single application component yourself!

Using Amazon API Gateway with microservices deployed on Amazon ECS

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/using-amazon-api-gateway-with-microservices-deployed-on-amazon-ecs/

Rudy Krol, AWS Solutions Architect
One convenient way to run microservices is to deploy them as Docker containers. Docker containers are quick to provision, easily portable, and provide process isolation. Amazon EC2 Container Service (Amazon ECS) provides a highly scalable, high performance container management service. This service supports Docker containers and enables you to easily run microservices on a managed cluster of Amazon EC2 instances.
Microservices usually expose REST APIs for use in front ends, third-party applications, and other microservices. A best practice is to manage these APIs with an API gateway. This provides a unique entry point for all of your APIs and also eliminates the need to implement API-specific code for things like security, caching, throttling, and monitoring for each of your microservices. You can implement this pattern in a few minutes using Amazon API Gateway. Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
In this post, we’ll explain how to use Amazon API Gateway to expose APIs for microservices running on Amazon ECS by leveraging the HTTP proxy mode of Amazon API Gateway. Amazon API Gateway can make proxy calls to any publicly accessible endpoint; for example, an Elastic Load Balancing load balancer endpoint in front of a microservice that is deployed on Amazon ECS. The following diagram shows the high level architecture described in this article:

You will see how you can benefit from stage variables to dynamically set the endpoint value depending on the stage of the API deployment.
In the first part of this post, we will walk through the AWS Management Console to create the dev environment (ECS cluster, ELB load balancers, and API Gateway configuration). The second part explains how to automate the creation of a production environment with AWS CloudFormation and AWS CLI.
Creating a dev environment with the AWS Management Console
Let’s begin by provisioning a sample helloworld microservice using the Getting Started wizard.
Sign in to Amazon ECS console. If this is the first time you’re using the Amazon ECS console, you’ll see a welcome page. Otherwise, you’ll see the console home page and the Create Cluster button.
Step 1: Create a task definition

In the Amazon ECS console, do one of the following:

If Get Started Now is displayed, choose it.
If it is not displayed, go to the Getting Started wizard.

Optional: (depending on the AWS Region) Deselect the Store container images securely with Amazon ECR checkbox and choose Continue.
For Task definition name, type ecsconsole-helloworld.
For Container name, type helloworld.
Choose Advanced options and type the following text in the Command field: /bin/sh -c "echo '{ \"hello\" : \"world\" }' > /usr/local/apache2/htdocs/index.html && httpd-foreground"
Choose Update, and then choose Next step.

Step 2: Configure service

For Service name, type ecsconsole-service-helloworld.
For Desired number of tasks, type 2.
In the Elastic load balancing section, for Container name: host port, choose helloworld:80.
For Select IAM role for service, choose Create new role or use an existing ecsServiceRole if you already created the required role.
Choose Next Step.

Step 3: Configure cluster

For Cluster name, type dev.
For Number of instances, type 2.
For Select IAM role for service, choose Create new role or use an existing ecsInstanceRole if you already created the required role.
Choose Review and Launch and then choose Launch Instance & Run Service.

At this stage, after a few minutes of provisioning, the helloworld microservice will be running in the dev ECS cluster with an ELB load balancer in front of it. Make note of the DNS Name of the ELB load balancer for later use; you can find it in the Load Balancers section of the EC2 console.
Configuring API Gateway
Now, let’s configure API Gateway to expose the APIs of this microservice. Sign in to the API Gateway console. If this is your first time using the API Gateway console, you’ll see a welcome page. Otherwise, you’ll see the API Gateway console home page and the Create API button.
Step 1: Create an API

In the API Gateway console, do one of the following:

If Get Started Now is displayed, choose it.
If Create API is displayed, choose it.
If neither is displayed, in the secondary navigation bar, choose the API Gateway console home button, and then choose Create API.

For API name, type EcsDemoAPI.
Choose Create API.

Step 2: Create Resources

In the API Gateway console, choose the root resource (/), and then choose Create Resource.
For Resource Name, type HelloWorld.
For Resource Path, leave the default value of /helloworld.
Choose Create Resource.

Step 3: Create GET Methods

In the Resources pane, choose /helloworld, and then choose Create Method.
For the HTTP method, choose GET, and then save your choice.

Step 4: Specify Method Settings

In the Resources pane, in /helloworld, choose GET.
In the Setup pane, for Integration type, choose HTTP Proxy.
For HTTP method, choose GET.
For Endpoint URL, type http://${stageVariables.helloworldElb}
Choose Save.

Step 5: Deploy the API

In the Resources pane, choose Deploy API.
For Deployment stage, choose New Stage.
For Stage name, type dev.
Choose Deploy.
In the stage settings page, choose the Stage Variables tab.
Choose Add Stage Variable, type helloworldElb for Name, type the DNS Name of the ELB in the Value field and then save.
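Stage variables are simple key/value substitutions: at request time, API Gateway replaces each ${stageVariables.name} placeholder in the endpoint URL with the value configured for the deployed stage. A rough Python sketch of that substitution (the DNS name below is a made-up placeholder, not a real ELB):

```python
import re

def resolve_endpoint(template, stage_variables):
    """Replace each ${stageVariables.name} placeholder in the endpoint
    URL template with the value configured for the current stage."""
    return re.sub(
        r"\$\{stageVariables\.(\w+)\}",
        lambda m: stage_variables[m.group(1)],
        template,
    )

# dev stage: the helloworldElb variable holds the ELB's DNS name.
dev_vars = {"helloworldElb": "ecs-dev-1234567890.us-east-1.elb.amazonaws.com"}
print(resolve_endpoint("http://${stageVariables.helloworldElb}", dev_vars))
# prints "http://ecs-dev-1234567890.us-east-1.elb.amazonaws.com"
```

A prod deployment of the same API only needs a different value for the same variable, which is exactly what the automation in the next section sets up.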

Step 6: Test the API

In the Stage Editor pane, next to Invoke URL, copy the URL to the clipboard. It should look something like this: https://<api-id>.execute-api.<region>.amazonaws.com/dev
Paste this URL in the address box of a new browser tab.
Append /helloworld to the URL and validate. You should see the following JSON document: { "hello": "world" }

Automating prod environment creation
Now we’ll improve this setup by automating the creation of the prod environment. We use AWS CloudFormation to set up the prod ECS cluster, deploy the helloworld service, and create an ELB in front of the service. You can use the template with your preferred method:
Using AWS CLI
aws cloudformation create-stack --stack-name EcsHelloworldProd --template-url https://s3.amazonaws.com/rko-public-bucket/ecs_cluster.template --parameters ParameterKey=AsgMaxSize,ParameterValue=2 ParameterKey=CreateElasticLoadBalancer,ParameterValue=true ParameterKey=EcsInstanceType,ParameterValue=t2.micro
Using the AWS console: Launch the AWS CloudFormation stack with the Launch Stack button below and use these parameter values:

AsgMaxSize: 2
CreateElasticLoadBalancer: true
EcsInstanceType: t2.micro


Configuring API Gateway with AWS CLI
We’ll use the API Gateway configuration that we created earlier and simply add the prod stage.
Here are the commands to create the prod stage and configure the stage variable to point to the ELB load balancer:
# Retrieve the API ID
API_ID=$(aws apigateway get-rest-apis --output text --query "items[?name=='EcsDemoAPI'].{ID:id}")

# Retrieve the ELB DNS name from the CloudFormation stack outputs
ELB_DNS=$(aws cloudformation describe-stacks --stack-name EcsHelloworldProd --output text --query "Stacks[0].Outputs[?OutputKey=='EcsElbDnsName'].{DNS:OutputValue}")

# Create the prod stage and set the helloworldElb variable
aws apigateway create-deployment --rest-api-id $API_ID --stage-name prod --variables helloworldElb=$ELB_DNS
You can then test the API on the prod stage using this simple cURL command:
AWS_REGION=$(aws configure get region)
curl https://$API_ID.execute-api.$AWS_REGION.amazonaws.com/prod/helloworld
You should see { "hello" : "world" } as the result of the cURL request. If the result is an error message like {"message": "Internal server error"}, verify that you have healthy instances behind your ELB load balancer. It can take some time to pass the health checks, so you’ll have to wait for a minute before trying again.
From the stage settings page you also have the option to export the API configuration to a Swagger file, including the API Gateway extension. Exporting the API configuration as a Swagger file enables you to keep the definition in your source repository. You can then import it at any time, either by overwriting the existing API or by importing it as a brand new API. The API Gateway import tool helps you parse the Swagger definition and import it into the service.
Conclusion
In this post, we looked at how to use Amazon API Gateway to expose APIs for microservices deployed on Amazon ECS. The integration with the HTTP proxy mode pointing to ELB load balancers is a simple method to ensure the availability and scalability of your microservice architecture. With ELB load balancers, you don’t have to worry about how your containers are deployed on the cluster.
We also saw how stage variables help you connect your APIs on different ELB load balancers, depending on the stage where the API is deployed.

Introducing custom authorizers in Amazon API Gateway

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/introducing-custom-authorizers-in-amazon-api-gateway/

Today Amazon API Gateway is launching custom request authorizers. With custom request authorizers, developers can secure their APIs using bearer token authorization strategies, such as OAuth, implemented in an AWS Lambda function. For each incoming request, API Gateway verifies whether a custom authorizer is configured; if so, API Gateway calls the Lambda function with the authorization token. You can use Lambda to implement various authorization strategies (e.g., JWT verification, OAuth provider callout). Custom authorizers must return AWS Identity and Access Management (IAM) policies, which API Gateway uses to authorize the request. If the policy returned by the authorizer is valid, API Gateway caches the returned policy, associated with the incoming token, for up to 1 hour so that your Lambda function doesn't need to be invoked again.

Configuring custom authorizers
You can configure custom authorizers from the API Gateway console or using the APIs. In the console, we have added a new section called custom authorizers inside your API.

An API can have multiple custom authorizers and each method within your API can use a different authorizer. For example, the POST method for the /login resource can use a different authorizer than the GET method for the /pets resource.
To configure an authorizer you must specify a unique name and select a Lambda function to act as the authorizer. You also need to indicate which field of the incoming request contains your bearer token. API Gateway will pass the value of the field to your Lambda authorizer. For example, in most cases your bearer token will be in the Authorization header; you can select this field using the method.request.header.Authorization mapping expression. Optionally, you can specify a regular expression to validate the incoming token before your authorizer is triggered and you can also specify a TTL for the policy cache.

Once you have configured a custom authorizer, you can simply select it from the authorization dropdown in the method request page.

The authorizer function in AWS Lambda
API Gateway invokes the Lambda authorizer by passing in the Lambda event. The Lambda event includes the bearer token from the request and full ARN of the API method being invoked. The authorizer Lambda event looks like this:
{
"type":"TOKEN",
"authorizationToken":"<Incoming bearer token>",
"methodArn":"arn:aws:execute-api:<Region id>:<Account id>:<API id>/<Stage>/<Method>/<Resource path>"
}
Your Lambda function must return a valid IAM policy. API Gateway uses this policy to make authorization decisions for the token. For example, if you use JWT tokens, you can use the Lambda function to decode and verify the token and then generate a policy based on the scopes included in it. Later today we will publish authorizer Lambda blueprints for Node.js and Python that include a policy generator object. The sample function below uses AWS Key Management Service (AWS KMS) to decrypt the signing key for the token, the nJwt library for Node.js to validate the token, and the policy generator object included in the Lambda blueprint to generate and return a valid policy to Amazon API Gateway.
var nJwt = require('njwt');
var AWS = require('aws-sdk');
var signingKey = "CiCnRmG+t+ BASE 64 ENCODED ENCRYPTED SIGNING KEY Mk=";

exports.handler = function(event, context) {
    console.log('Client token: ' + event.authorizationToken);
    console.log('Method ARN: ' + event.methodArn);
    var kms = new AWS.KMS();

    var decryptionParams = {
        CiphertextBlob: new Buffer(signingKey, 'base64')
    };

    kms.decrypt(decryptionParams, function(err, data) {
        if (err) {
            console.log(err, err.stack);
            context.fail("Unable to load encryption key");
        } else {
            key = data.Plaintext;

            try {
                verifiedJwt = nJwt.verify(event.authorizationToken, key);
                console.log(verifiedJwt);

                // parse the ARN from the incoming event
                var apiOptions = {};
                var tmp = event.methodArn.split(':');
                var apiGatewayArnTmp = tmp[5].split('/');
                var awsAccountId = tmp[4];
                apiOptions.region = tmp[3];
                apiOptions.restApiId = apiGatewayArnTmp[0];
                apiOptions.stage = apiGatewayArnTmp[1];

                policy = new AuthPolicy(verifiedJwt.body.sub, awsAccountId, apiOptions);

                if (verifiedJwt.body.scope.indexOf("admins") > -1) {
                    policy.allowAllMethods();
                } else {
                    policy.allowMethod(AuthPolicy.HttpVerb.GET, "*");
                    policy.allowMethod(AuthPolicy.HttpVerb.POST, "/users/" + verifiedJwt.body.sub);
                }

                context.succeed(policy.build());

            } catch (ex) {
                console.log(ex, ex.stack);
                context.fail("Unauthorized");
            }
        }
    });
};
You can also generate a policy in your code instead of using the provided AuthPolicy object. Valid policies include the principal identifier associated with the token and a named IAM policy that can be cached and used to authorize future API calls with the same token. The principalId will be accessible in the mapping template.
{
"principalId": "xxxxxxx", // the principal user identification associated with the token send by the client
"policyDocument": { // example policy shown below, but this value is any valid policy
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"execute-api:Invoke"
],
"Resource": [
"arn:aws:execute-api:us-east-1:xxxxxxxxxxxx:xxxxxxxx:/test/*/mydemoresource/*"
]
}
]
}
}
To learn more about the possible options in a policy, see the public access permissions reference for API Gateway. All of the variables that are normally available in IAM policies are also available to custom authorizer policies. For example, you could restrict access using the ${aws:sourceIp} variable. To learn more, see the policy variables reference.
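For example, a statement along the following lines (the CIDR range is hypothetical, shown only to illustrate the shape of a condition) would allow invocations only from a given network; note that in a policy condition the key is written aws:SourceIp:

```json
{
  "Effect": "Allow",
  "Action": ["execute-api:Invoke"],
  "Resource": ["arn:aws:execute-api:us-east-1:xxxxxxxxxxxx:xxxxxxxx:/test/*/mydemoresource/*"],
  "Condition": {
    "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
  }
}
```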
Because policies are cached for a configured TTL, API Gateway only invokes your Lambda function the first time it sees a token; all of the calls that follow during the TTL period are authorized by API Gateway using the cached policy.
Conclusion
You can use custom authorizers in API Gateway to support any bearer token. This allows you to authorize access to your APIs using tokens from an OAuth flow or SAML assertions. Further, you can leverage all of the variables available to IAM policies without setting up your API to use IAM authorization.
Custom authorizers are available in the API Gateway console and APIs now, and authorizer Lambda blueprints will follow later today. Get in touch through the API Gateway forum if you have questions or feedback about custom authorizers.


Using API Gateway mapping templates to handle changes in your back-end APIs

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/using-api-gateway-mapping-templates-to-handle-changes-in-your-back-end-apis/

Maitreya Ranganath, AWS Solutions Architect
Changes to APIs are always risky, especially if changes are made in ways that are not backward compatible. In this blog post, we show you how to use Amazon API Gateway mapping templates to isolate your API consumers from API changes. This enables your API consumers to migrate to new API versions on their own schedule.
For an example scenario, we start with a very simple Store Front API with one resource for orders and one GET method. For this example, the API target is implemented in AWS Lambda to keep things simple – but you can of course imagine the back end being your own endpoint.
The structure of the API V1 is:
Method: GET
Path: /orders
Query Parameters:
start = timestamp
end = timestamp

Response:
[
  {
    "orderId" : string,
    "orderTs" : string,
    "orderAmount" : number
  }
]
The initial version (V1) of the API was implemented when there were few orders per day. The API was not paginated; if the number of orders matching the query is larger than 5, an error is returned. The API consumer must then submit a request with a smaller time range.
The API V1 is exposed through API Gateway and you have several consumers of this API in Production.
After you upgrade the back end, the API developers make a change to support pagination. This makes the API more scalable and allows the API consumers to handle large lists of orders by paging through them with a token. This is a good design change but it breaks backward compatibility. It introduces a challenge because you have a large base of API consumers using V1 and their code can’t handle the changed nesting structure of this response.
The structure of API V2 is:
Method: GET
Path: /orders
Query Parameters:
start = timestamp
end = timestamp
token = string (optional)

Response:
{
  "nextToken" : string,
  "orders" : [
    {
      "orderId" : string,
      "orderTs" : string,
      "orderAmount" : number
    }
  ]
}
Using mapping templates, you can isolate your API consumers from this change: your existing V1 API consumers will not be impacted when you publish V2 of the API in parallel. You want to let your consumers migrate to V2 on their own schedule.
We’ll show you how to do that in this blog post. Let’s get started.
Deploying V1 of the API
To deploy V1 of the API, create a simple Lambda function and expose that through API Gateway:

Sign in to the AWS Lambda console.
Choose Create a Lambda function.
In Step 1: Select blueprint, choose Skip; you’ll enter the details for the Lambda function manually.
In Step 2: Configure function, use the following values:

In Name, type getOrders.
In Description, type Returns orders for a time-range.
In Runtime, choose Node.js.
For Code entry type, choose Edit code inline. Copy and paste the code snippet below into the code input box.

MILISECONDS_DAY = 3600*1000*24;

exports.handler = function(event, context) {
    console.log('start =', event.start);
    console.log('end =', event.end);

    start = Date.parse(decodeURIComponent(event.start));
    end = Date.parse(decodeURIComponent(event.end));

    if(isNaN(start)) {
        context.fail("Invalid parameter 'start'");
    }
    if(isNaN(end)) {
        context.fail("Invalid parameter 'end'");
    }

    duration = end - start;

    if(duration > 5 * MILISECONDS_DAY) {
        context.fail("Too many results, try your request with a shorter duration");
    }

    orderList = [];
    count = 0;

    for(d = start; d < end; d += MILISECONDS_DAY) {
        order = {
            "orderId" : "order-" + count,
            "orderTs" : (new Date(d).toISOString()),
            "orderAmount" : Math.round(Math.random()*100.0)
        };
        count += 1;
        orderList.push(order);
    }

    console.log('Generated', count, 'orders');
    context.succeed(orderList);
};

In Handler, leave the default value of index.handler.
In Role, choose Basic execution role or choose an existing role if you’ve created one for Lambda before.
In Advanced settings, leave the default values and choose Next.


Finally, review the settings in the next page and choose Create function.
Your Lambda function is now created. You can test it by sending a test event. Enter the following for your test event:
{
"start": "2015-10-01T00:00:00Z",
"end": "2015-10-04T00:00:00Z"
}
Check the execution result and log output to see the results of your test.

Next, choose the API endpoints tab and then choose Add API endpoint. In Add API endpoint, use the following values:

In API endpoint type, choose API Gateway
In API name, type StoreFront
In Resource name, type /orders
In Method, choose GET
In Deployment stage, use the default value of prod
In Security, choose Open to allow the API to be publicly accessed
Choose Submit to create the API


The API is created and the API endpoint URL is displayed for the Lambda function.
Next, switch to the API Gateway console and verify that the new API appears on the list of APIs. Choose StoreFront to view its details.
To view the method execution details, in the Resources pane, choose GET. Choose Integration Request to edit the method properties.

On the Integration Request details page, expand the Mapping Templates section and choose Add mapping template. In Content-Type, type application/json and choose the check mark to accept.

Choose the edit icon to the right of Input passthrough. From the drop down, choose Mapping template and copy and paste the mapping template text below into the Template input box. Choose the check mark to create the template.
{
#set($queryMap = $input.params().querystring)

#foreach( $key in $queryMap.keySet())
"$key" : "$queryMap.get($key)"
#if($foreach.hasNext),#end
#end
}
This step is needed because the Lambda function requires its input as a JSON document. The mapping template takes the query string parameters from the GET request and creates a JSON input document from them. Mapping templates use Apache Velocity, expose a number of utility functions, and give you access to all of the incoming request’s data and context parameters. You can learn more from the mapping template reference page.
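To make the transformation concrete, a request such as GET /orders?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z would cause the template above to render an input document along these lines (the order of the keys may vary, since it follows the query string map):

```json
{
  "start" : "2015-10-01T00:00:00Z",
  "end" : "2015-10-04T00:00:00Z"
}
```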
Back to the GET method configuration page, in the left pane, choose the GET method and then open the Method Request settings. Expand the URL Query String Parameters section and choose Add query string. In Name, type start and choose the check mark to accept. Repeat the process to create a second parameter named end.
From the GET method configuration page, in the top left, choose Test to test your API. Type the following values for the query string parameters and then choose Test:

In start, type 2015-10-01T00:00:00Z
In end, type 2015-10-04T00:00:00Z

Verify that the response status is 200 and the response body contains a JSON response with 3 orders.
Now that your test is successful, you can deploy your changes to the production stage. In the Resources pane, choose Deploy API. In Deployment stage, choose prod. In Deployment description, type a description of the deployment, and then choose Deploy.
The prod Stage Editor page appears, displaying the Invoke URL. In the CloudWatch Settings section, choose Enable CloudWatch Logs so you can see logs and metrics from this stage. Keep in mind that CloudWatch logs are charged to your account separately from API Gateway.
You have now deployed an API that is backed by V1 of the Lambda function.
Testing V1 of the API
Now you’ll test V1 of the API with curl and confirm its behavior. First, copy the Invoke URL and add the query parameters ?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z and make a GET invocation using curl.
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z"

[
{
"orderId": "order-0",
"orderTs": "2015-10-01T00:00:00.000Z",
"orderAmount": 82
},
{
"orderId": "order-1",
"orderTs": "2015-10-02T00:00:00.000Z",
"orderAmount": 3
},
{
"orderId": "order-2",
"orderTs": "2015-10-03T00:00:00.000Z",
"orderAmount": 75
}
]
This should output a JSON response with 3 orders. Next, check what happens if you use a longer time-range by changing the end timestamp to 2015-10-15T00:00:00Z:
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-15T00:00:00Z"

{
"errorMessage": "Too many results, try your request with a shorter duration"
}
You see that the API returns an error indicating the time range is too long. This is correct V1 API behavior, so you are all set.
Updating the Lambda Function to V2
Next, you will update the Lambda function code to V2. This simulates the scenario of the back end of your API changing in a manner that is not backward compatible.
Switch to the Lambda console and choose the getOrders function. In the code input box, copy and paste the code snippet below. Be sure to replace all of the existing V1 code with V2 code.
MILISECONDS_DAY = 3600*1000*24;

exports.handler = function(event, context) {
    console.log('start =', event.start);
    console.log('end =', event.end);

    start = Date.parse(decodeURIComponent(event.start));
    end = Date.parse(decodeURIComponent(event.end));

    token = NaN;
    if(event.token) {
        s = new Buffer(event.token, 'base64').toString();
        token = Date.parse(s);
    }

    if(isNaN(start)) {
        context.fail("Invalid parameter 'start'");
    }
    if(isNaN(end)) {
        context.fail("Invalid parameter 'end'");
    }
    if(!isNaN(token)) {
        start = token;
    }

    duration = end - start;

    if(duration <= 0) {
        context.fail("Invalid parameters 'end' must be greater than 'start'");
    }

    orderList = [];
    count = 0;

    console.log('start=', start, ' end=', end);

    for(d = start; d < end && count < 5; d += MILISECONDS_DAY) {
        order = {
            "orderId" : "order-" + count,
            "orderTs" : (new Date(d).toISOString()),
            "orderAmount" : Math.round(Math.random()*100.0)
        };
        count += 1;
        orderList.push(order);
    }

    nextToken = null;
    if(d < end) {
        nextToken = new Buffer(new Date(d).toISOString()).toString('base64');
    }

    console.log('Generated', count, 'orders');

    result = {
        orders : orderList
    };

    if(nextToken) {
        result.nextToken = nextToken;
    }
    context.succeed(result);
};
Choose Save to save V2 of the code. Then choose Test. Note that the output structure is different in V2 and there is a second level of nesting in the JSON document. This represents the updated V2 output structure that is different from V1.
Next, repeat the curl tests from the previous section. First, do a request for a short time duration. Notice that the response structure is nested differently from V1 and this is a problem for our API consumers that expect V1 responses.
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z"

{
"orders": [
{
"orderId": "order-0",
"orderTs": "2015-10-01T00:00:00.000Z",
"orderAmount": 8
},
{
"orderId": "order-1",
"orderTs": "2015-10-02T00:00:00.000Z",
"orderAmount": 92
},
{
"orderId": "order-2",
"orderTs": "2015-10-03T00:00:00.000Z",
"orderAmount": 84
}
]
}
Now, repeat the request for a longer time range and you’ll see that instead of an error message, you now get the first page of information with 5 orders and a nextToken that will let you request the next page. This is the paginated behavior of V2 of the API.
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-15T00:00:00Z"

{
"orders": [
{
"orderId": "order-0",
"orderTs": "2015-10-01T00:00:00.000Z",
"orderAmount": 62
},
{
"orderId": "order-1",
"orderTs": "2015-10-02T00:00:00.000Z",
"orderAmount": 59
},
{
"orderId": "order-2",
"orderTs": "2015-10-03T00:00:00.000Z",
"orderAmount": 21
},
{
"orderId": "order-3",
"orderTs": "2015-10-04T00:00:00.000Z",
"orderAmount": 95
},
{
"orderId": "order-4",
"orderTs": "2015-10-05T00:00:00.000Z",
"orderAmount": 84
}
],
"nextToken": "MjAxNS0xMC0wNlQwMDowMDowMC4wMDBa"
}
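As an aside, the sample Lambda builds nextToken as the base64-encoded ISO timestamp of the first order on the next page, which you can confirm by decoding the token from the response above. This is illustrative only; clients should treat the token as opaque and simply pass it back unchanged in the next request.

```javascript
// Decode the opaque nextToken from the paginated V2 response above.
// Illustrative only: the encoding is an implementation detail of the
// sample Lambda, not part of the API contract.
var nextToken = 'MjAxNS0xMC0wNlQwMDowMDowMC4wMDBa';
var nextStart = new Buffer(nextToken, 'base64').toString();
console.log(nextStart); // 2015-10-06T00:00:00.000Z
```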
It is clear from these tests that V2 will break the current V1 consumer’s code. Next, we show how to isolate your V1 consumers from this change using API Gateway mapping templates.
Cloning the API
Because you want both V1 and V2 of the API to be available simultaneously to your API consumers, you first clone the API to create a V2 API. You then modify the V1 API to make it behave as your V1 consumers expect.
Go back to the API Gateway console, and choose Create API. Configure the new API with the following values:

In API name, type StoreFrontV2
In Clone from API, choose StoreFront
In Description, type a description
Choose Create API to clone the StoreFront API as StoreFrontV2

Open the StoreFrontV2 API and choose the GET method of the /orders resource. Next, choose Integration Request. Choose the edit icon next to the getOrders Lambda function name.
Keep the name as getOrders and choose the check mark to accept. In the pop up, choose OK to allow the StoreFrontV2 to invoke the Lambda function.
Once you have granted API Gateway permissions to access your Lambda function, choose Deploy API. In Deployment stage, choose New stage. In Stage name, type prod, and then choose Deploy. Now you have a new StoreFrontV2 API that invokes the same Lambda function. Confirm that the API has V2 behavior by testing it with curl. Use the Invoke URL for the StoreFrontV2 API instead of the previously used Invoke URL.
Update the V1 of the API
Now you will use mapping templates to update the original StoreFront API to preserve V1 behavior. This enables existing consumers to continue to consume the API without having to make any changes to their code.
Navigate to the API Gateway console, choose the StoreFront API and open the GET method of the /orders resource. On the Method Execution details page, choose Integration Response.
Expand the default response mapping (HTTP status 200), and expand the Mapping Templates section. Choose Add Mapping Template.
In Content-type, type application/json and choose the check mark to accept. Choose the edit icon next to Output passthrough to edit the mapping templates. Select Mapping template from the drop down and copy and paste the mapping template below into the Template input box.
#set($nextToken = $input.path('$.nextToken'))

#if($nextToken && $nextToken.length() != 0)
{
  "errorMessage" : "Too many results, try your request with a shorter duration"
}
#else
$input.json('$.orders[*]')
#end
Choose the check mark to accept and save. The mapping template transforms the V2 output from the Lambda function into the original V1 response. The mapping template also generates an error if the V2 response indicates that there are more results than can fit in one page. This emulates V1 behavior.
Finally, choose Save on the response mapping page. Then deploy your StoreFront API, choosing prod as the stage for your changes.
Verify V1 behavior
Now that you have updated the original API to emulate V1 behavior, you can verify that using curl again. You will essentially repeat the tests from the earlier section. First, confirm that you have the Invoke URL for the original StoreFront API. You can always find the Invoke URL by looking at the stage details for the API.
Try a test with a short time range.
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z"

[
{
"orderId": "order-0",
"orderTs": "2015-10-01T00:00:00.000Z",
"orderAmount": 50
},
{
"orderId": "order-1",
"orderTs": "2015-10-02T00:00:00.000Z",
"orderAmount": 16
},
{
"orderId": "order-2",
"orderTs": "2015-10-03T00:00:00.000Z",
"orderAmount": 14
}
]
Try a test with a longer time range and note that the V1 behavior of returning an error is recovered.
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-15T00:00:00Z"

{
"errorMessage": "Too many results, try your request with a shorter duration"
}
Congratulations, you have successfully used Amazon API Gateway mapping templates to expose both V1 and V2 versions of your API allowing your API consumers to migrate to V2 on their own schedule.
Be sure to delete the two APIs and the AWS Lambda function that you created for this walkthrough to avoid being charged for their use.

Using API Gateway mapping templates to handle changes in your back-end APIs

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/using-api-gateway-mapping-templates-to-handle-changes-in-your-back-end-apis/

Maitreya Ranganath Maitreya Ranganath, AWS Solutions Architect
Changes to APIs are always risky, especially if changes are made in ways that are not backward compatible. In this blog post, we show you how to use Amazon API Gateway mapping templates to isolate your API consumers from API changes. This enables your API consumers to migrate to new API versions on their own schedule.
For an example scenario, we start with a very simple Store Front API with one resource for orders and one GET method. For this example, the API target is implemented in AWS Lambda to keep things simple – but you can of course imagine the back end being your own endpoint.
The structure of the API V1 is:
Method: GET
Path: /orders
Query Parameters:
start = timestamp
end = timestamp

Response:
[
{
“orderId” : string,
“orderTs” : string,
“orderAmount” : number
}
]
The initial version (V1) of the API was implemented when there were few orders per day. The API was not paginated; if the number of orders that match the query is larger than 5, an error returns. The API consumer must then submit a request with a smaller time range.
The API V1 is exposed through API Gateway and you have several consumers of this API in Production.
After you upgrade the back end, the API developers make a change to support pagination. This makes the API more scalable and allows the API consumers to handle large lists of orders by paging through them with a token. This is a good design change but it breaks backward compatibility. It introduces a challenge because you have a large base of API consumers using V1 and their code can’t handle the changed nesting structure of this response.
The structure of API V2 is:
Method: GET
Path: /orders
Query Parameters:
start = timestamp
end = timestamp
token = string (optional)

Response:
{
“nextToken” : string,
“orders” : [
{
“orderId” : string,
“orderTs” : string
“orderAmount” : number
}
]
}
Using mapping templates, you can isolate your API consumers from this change: your existing V1 API consumers will not be impacted when you publish V2 of the API in parallel. You want to let your consumers migrate to V2 on their own schedule.
We’ll show you how to do that in this blog post. Let’s get started.
Deploying V1 of the API
To deploy V1 of the API, create a simple Lambda function and expose that through API Gateway:

Sign in to the AWS Lambda console.
Choose Create a Lambda function.
In Step 1: Select blueprint, choose Skip; you’ll enter the details for the Lambda function manually.
In Step 2: Configure function, use the following values:

In Name, type getOrders.
In Description, type Returns orders for a time-range.
In Runtime, choose Node.js.
For Code entry type, choose Edit code inline. Copy and paste the code snippet below into the code input box.

MILISECONDS_DAY = 3600*1000*24;

exports.handler = function(event, context) {
console.log(‘start =’, event.start);
console.log(‘end =’, event.end);

start = Date.parse(decodeURIComponent(event.start));
end = Date.parse(decodeURIComponent(event.end));

if(isNaN(start)) {
context.fail("Invalid parameter ‘start’");
}
if(isNaN(end)) {
context.fail("Invalid parameter ‘end’");
}

duration = end – start;

if(duration 5 * MILISECONDS_DAY) {
context.fail("Too many results, try your request with a shorter duration");
}

orderList = [];
count = 0;

for(d = start; d < end; d += MILISECONDS_DAY) {
order = {
"orderId" : "order-" + count,
"orderTs" : (new Date(d).toISOString()),
"orderAmount" : Math.round(Math.random()*100.0)
};
count += 1;
orderList.push(order);
}

console.log(‘Generated’, count, ‘orders’);
context.succeed(orderList);
};

In Handler, leave the default value of index.handler.
In Role, choose Basic execution role or choose an existing role if you’ve created one for Lambda before.
In Advanced settings, leave the default values and choose Next.


Finally, review the settings in the next page and choose Create function.
Your Lambda function is now created. You can test it by sending a test event. Enter the following for your test event:
{
"start": "2015-10-01T00:00:00Z",
"end": "2015-10-04T00:00:00Z"
}
Check the execution result and log output to see the results of your test.

Next, choose the API endpoints tab and then choose Add API endpoint. In Add API endpoint, use the following values:

In API endpoint type, choose API Gateway
In API name, type StoreFront
In Resource name, type /orders
In Method, choose GET
In Deployment stage, use the default value of prod
In Security, choose Open to allow the API to be publicly accessed
Choose Submit to create the API


The API is created and the API endpoint URL is displayed for the Lambda function.
Next, switch to the API Gateway console and verify that the new API appears on the list of APIs. Choose StoreFront to view its details.
To view the method execution details, in the Resources pane, choose GET. Choose Integration Request to edit the method properties.

On the Integration Request details page, expand the Mapping Templates section and choose Add mapping template. In Content-Type, type application/json and choose the check mark to accept.

Choose the edit icon to the right of Input passthrough. From the drop down, choose Mapping template and copy and paste the mapping template text below into the Template input box. Choose the check mark to create the template.
{
#set($queryMap = $input.params().querystring)

#foreach( $key in $queryMap.keySet())
"$key" : "$queryMap.get($key)"
#if($foreach.hasNext),#end
#end
}
This step is needed because the Lambda function requires its input as a JSON document. The mapping template takes the query string parameters from the GET request and creates a JSON input document from them. Mapping templates use Apache Velocity, expose a number of utility functions, and give you access to all of the incoming request's data and context parameters. You can learn more from the mapping template reference page.
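To make the transformation concrete, here is a small Node.js sketch of the JSON document the request mapping template produces for a sample query string. This is an illustration only: in production, API Gateway evaluates the Velocity template itself; this local approximation just shows the resulting shape.

```javascript
// Illustration only: approximates what the request mapping template emits.
// API Gateway runs the real transformation in its Velocity engine.
const queryMap = { start: '2015-10-01T00:00:00Z', end: '2015-10-04T00:00:00Z' };

// Equivalent of the #foreach loop over $queryMap.keySet():
const pairs = Object.keys(queryMap).map(key => `"${key}" : "${queryMap[key]}"`);
const eventJson = '{\n' + pairs.join(',\n') + '\n}';

console.log(eventJson);
// The Lambda function then reads these values as event.start and event.end.
```

Because the template simply quotes every query string value, the Lambda function receives the timestamps as strings and is responsible for parsing them, which is exactly what the `Date.parse` calls in the handler do.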
Back on the GET method configuration page, in the left pane, choose the GET method and then open the Method Request settings. Expand the URL Query String Parameters section and choose Add query string. In Name, type start and choose the check mark to accept. Repeat the process to create a second parameter named end.
From the GET method configuration page, in the top left, choose Test to test your API. Type the following values for the query string parameters and then choose Test:

In start, type 2015-10-01T00:00:00Z
In end, type 2015-10-04T00:00:00Z

Verify that the response status is 200 and the response body contains a JSON response with 3 orders.
Now that your test is successful, you can deploy your changes to the production stage. In the Resources pane, choose Deploy API. In Deployment stage, choose prod. In Deployment description, type a description of the deployment, and then choose Deploy.
The prod Stage Editor page appears, displaying the Invoke URL. In the CloudWatch Settings section, choose Enable CloudWatch Logs so you can see logs and metrics from this stage. Keep in mind that CloudWatch logs are charged to your account separately from API Gateway.
You have now deployed an API that is backed by V1 of the Lambda function.
Testing V1 of the API
Now you’ll test V1 of the API with curl and confirm its behavior. First, copy the Invoke URL and add the query parameters ?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z and make a GET invocation using curl.
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z"

[
{
"orderId": "order-0",
"orderTs": "2015-10-01T00:00:00.000Z",
"orderAmount": 82
},
{
"orderId": "order-1",
"orderTs": "2015-10-02T00:00:00.000Z",
"orderAmount": 3
},
{
"orderId": "order-2",
"orderTs": "2015-10-03T00:00:00.000Z",
"orderAmount": 75
}
]
This should output a JSON response with 3 orders. Next, check what happens if you use a longer time-range by changing the end timestamp to 2015-10-15T00:00:00Z:
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-15T00:00:00Z"

{
"errorMessage": "Too many results, try your request with a shorter duration"
}
You see that the API returns an error indicating the time range is too long. This is correct V1 API behavior, so you are all set.
Updating the Lambda Function to V2
Next, you will update the Lambda function code to V2. This simulates the scenario of the back end of your API changing in a manner that is not backward compatible.
Switch to the Lambda console and choose the getOrders function. In the code input box, copy and paste the code snippet below. Be sure to replace all of the existing V1 code with V2 code.
MILISECONDS_DAY = 3600*1000*24;

exports.handler = function(event, context) {
    console.log('start =', event.start);
    console.log('end =', event.end);

    start = Date.parse(decodeURIComponent(event.start));
    end = Date.parse(decodeURIComponent(event.end));

    token = NaN;
    if(event.token) {
        s = new Buffer(event.token, 'base64').toString();
        token = Date.parse(s);
    }

    if(isNaN(start)) {
        context.fail("Invalid parameter 'start'");
    }
    if(isNaN(end)) {
        context.fail("Invalid parameter 'end'");
    }
    if(!isNaN(token)) {
        start = token;
    }

    duration = end - start;

    if(duration <= 0) {
        context.fail("Invalid parameters 'end' must be greater than 'start'");
    }

    orderList = [];
    count = 0;

    console.log('start=', start, ' end=', end);

    for(d = start; d < end && count < 5; d += MILISECONDS_DAY) {
        order = {
            "orderId" : "order-" + count,
            "orderTs" : (new Date(d).toISOString()),
            "orderAmount" : Math.round(Math.random()*100.0)
        };
        count += 1;
        orderList.push(order);
    }

    nextToken = null;
    if(d < end) {
        nextToken = new Buffer(new Date(d).toISOString()).toString('base64');
    }

    console.log('Generated', count, 'orders');

    result = {
        orders : orderList
    };

    if(nextToken) {
        result.nextToken = nextToken;
    }
    context.succeed(result);
};
Choose Save to save V2 of the code. Then choose Test. Note that the output structure is different in V2 and there is a second level of nesting in the JSON document. This represents the updated V2 output structure that is different from V1.
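The pagination token deserves a closer look: the handler base64-encodes the ISO timestamp of the first day it did not return, and on the next request decodes it and resumes from there. A short Node.js sketch of that round-trip (using `Buffer.from`, the modern replacement for the deprecated `new Buffer()` in the snippet above):

```javascript
// Sketch of the V2 pagination token round-trip. The handler encodes the
// ISO timestamp of the first unreturned day; the next request decodes it
// and uses it as the new start.
const pageBreak = new Date('2015-10-06T00:00:00.000Z');

// Encoding, as done when building the response:
const nextToken = Buffer.from(pageBreak.toISOString()).toString('base64');
console.log(nextToken); // MjAxNS0xMC0wNlQwMDowMDowMC4wMDBa

// Decoding, as done when event.token is present on the next call:
const resumedStart = Date.parse(Buffer.from(nextToken, 'base64').toString());
console.log(new Date(resumedStart).toISOString()); // 2015-10-06T00:00:00.000Z
```

An opaque base64 token, rather than a raw timestamp, lets you change the pagination scheme later without breaking clients that treat the token as a black box.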
Next, repeat the curl tests from the previous section. First, make a request for a short time duration. Notice that the response structure is nested differently from V1, which is a problem for API consumers who expect V1 responses.
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z"

{
"orders": [
{
"orderId": "order-0",
"orderTs": "2015-10-01T00:00:00.000Z",
"orderAmount": 8
},
{
"orderId": "order-1",
"orderTs": "2015-10-02T00:00:00.000Z",
"orderAmount": 92
},
{
"orderId": "order-2",
"orderTs": "2015-10-03T00:00:00.000Z",
"orderAmount": 84
}
]
}
Now, repeat the request for a longer time range and you’ll see that instead of an error message, you now get the first page of information with 5 orders and a nextToken that will let you request the next page. This is the paginated behavior of V2 of the API.
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-15T00:00:00Z"

{
"orders": [
{
"orderId": "order-0",
"orderTs": "2015-10-01T00:00:00.000Z",
"orderAmount": 62
},
{
"orderId": "order-1",
"orderTs": "2015-10-02T00:00:00.000Z",
"orderAmount": 59
},
{
"orderId": "order-2",
"orderTs": "2015-10-03T00:00:00.000Z",
"orderAmount": 21
},
{
"orderId": "order-3",
"orderTs": "2015-10-04T00:00:00.000Z",
"orderAmount": 95
},
{
"orderId": "order-4",
"orderTs": "2015-10-05T00:00:00.000Z",
"orderAmount": 84
}
],
"nextToken": "MjAxNS0xMC0wNlQwMDowMDowMC4wMDBa"
}
It is clear from these tests that V2 will break the current V1 consumer’s code. Next, we show how to isolate your V1 consumers from this change using API Gateway mapping templates.
Cloning the API
Because you want both V1 and V2 of the API to be available simultaneously to your API consumers, you first clone the API to create a V2 API. You then modify the V1 API to make it behave as your V1 consumers expect.
Go back to the API Gateway console, and choose Create API. Configure the new API with the following values:

In API name, type StoreFrontV2
In Clone from API, choose StoreFront
In Description, type a description
Choose Create API to clone the StoreFront API as StoreFrontV2

Open the StoreFrontV2 API and choose the GET method of the /orders resource. Next, choose Integration Request. Choose the edit icon next to the getOrders Lambda function name.
Keep the name as getOrders and choose the check mark to accept. In the pop up, choose OK to allow the StoreFrontV2 to invoke the Lambda function.
Once you have granted API Gateway permissions to access your Lambda function, choose Deploy API. In Deployment stage, choose New stage. In Stage name, type prod, and then choose Deploy. Now you have a new StoreFrontV2 API that invokes the same Lambda function. Confirm that the API has V2 behavior by testing it with curl. Use the Invoke URL for the StoreFrontV2 API instead of the previously used Invoke URL.
Update the V1 of the API
Now you will use mapping templates to update the original StoreFront API to preserve V1 behavior. This enables existing consumers to continue to consume the API without having to make any changes to their code.
Navigate to the API Gateway console, choose the StoreFront API and open the GET method of the /orders resource. On the Method Execution details page, choose Integration Response.
Expand the default response mapping (HTTP status 200), and expand the Mapping Templates section. Choose Add Mapping Template.
In Content-type, type application/json and choose the check mark to accept. Choose the edit icon next to Output passthrough to edit the mapping templates. Select Mapping template from the drop down and copy and paste the mapping template below into the Template input box.
#set($nextToken = $input.path('$.nextToken'))

#if($nextToken && $nextToken.length() != 0)
{
"errorMessage" : "Too many results, try your request with a shorter duration"
}
#else
$input.json('$.orders[*]')
#end
Choose the check mark to accept and save. The mapping template transforms the V2 output from the Lambda function into the original V1 response. The mapping template also generates an error if the V2 response indicates that there are more results than can fit in one page. This emulates V1 behavior.
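The effect of the template can be sketched in plain JavaScript (illustration only; in production this logic runs inside API Gateway's Velocity engine, not in your code):

```javascript
// Illustration of the response mapping template's logic in plain JavaScript.
function toV1(v2Result) {
  // A non-empty nextToken means more than one page of results,
  // which V1 reported as an error:
  if (v2Result.nextToken && v2Result.nextToken.length !== 0) {
    return { errorMessage: 'Too many results, try your request with a shorter duration' };
  }
  // Otherwise, unwrap the nested V2 document into the flat V1 array,
  // as $input.json('$.orders[*]') does in the template:
  return v2Result.orders;
}

console.log(toV1({ orders: [{ orderId: 'order-0' }] }));
console.log(toV1({ orders: [], nextToken: 'MjAxNQ==' }));
```

Because the unwrapping happens entirely in the integration response, the Lambda function stays a single V2 code base while the StoreFront API continues to present the V1 contract.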
Finally, choose Save on the response mapping page. Deploy your StoreFront API and choose prod as the stage to deploy your changes to.
Verify V1 behavior
Now that you have updated the original API to emulate V1 behavior, you can verify that using curl again. You will essentially repeat the tests from the earlier section. First, confirm that you have the Invoke URL for the original StoreFront API. You can always find the Invoke URL by looking at the stage details for the API.
Try a test with a short time range.
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z"

[
{
"orderId": "order-0",
"orderTs": "2015-10-01T00:00:00.000Z",
"orderAmount": 50
},
{
"orderId": "order-1",
"orderTs": "2015-10-02T00:00:00.000Z",
"orderAmount": 16
},
{
"orderId": "order-2",
"orderTs": "2015-10-03T00:00:00.000Z",
"orderAmount": 14
}
]
Try a test with a longer time range and note that the V1 behavior of returning an error is recovered.
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-15T00:00:00Z"

{
"errorMessage": "Too many results, try your request with a shorter duration"
}
Congratulations, you have successfully used Amazon API Gateway mapping templates to expose both V1 and V2 versions of your API allowing your API consumers to migrate to V2 on their own schedule.
Be sure to delete the two APIs and the AWS Lambda function that you created for this walkthrough to avoid being charged for their use.