Tag Archives: Mapping Templates

Introducing AWS AppSync – Build data-driven apps with real-time and off-line capabilities

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/introducing-amazon-appsync/

In this day and age, it is almost impossible to do without our mobile devices and the applications that help make our lives easier. As our dependency on our mobile phones grows, the mobile application market has exploded with millions of apps vying for our attention. For mobile developers, this means that we must ensure that we build applications that provide the quality, real-time experiences that app users desire.  Therefore, it has become essential that mobile applications are developed to include features such as multi-user data synchronization, offline network support, and data discovery, just to name a few.  According to several articles I read recently about mobile development trends in publications like InfoQ, DZone, and the mobile development blog AlleviateTech, one of the key elements in delivering the aforementioned capabilities is cloud-driven mobile applications.  It seems that this is especially true as it relates to mobile data synchronization and data storage.

That being the case, it is a perfect time for me to announce a new service for building innovative mobile applications that are driven by data-intensive services in the cloud: AWS AppSync. AWS AppSync is a fully managed serverless GraphQL service for real-time data queries, synchronization, communications, and offline programming features. For those not familiar, let me briefly share some information about the open GraphQL specification. GraphQL is a responsive data query language and server-side runtime for querying data sources that allows for real-time data retrieval and dynamic query execution. You can use GraphQL to build a responsive API for use when building client applications. GraphQL works at the application layer and provides a type system for defining schemas. These schemas serve as specifications to define how operations should be performed on the data and how the data should be structured when retrieved. Additionally, GraphQL has a declarative coding model which is supported by many client libraries and frameworks including React, React Native, iOS, and Android.
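
To make that concrete, here is a small illustrative schema written in GraphQL’s schema definition language; the Event type and its fields are placeholders loosely modeled on the console’s sample schema, and @aws_subscribe is the AppSync directive that ties a subscription to mutations:

type Event {
  id: ID!
  name: String
  where: String
  when: String
}

type Query {
  getEvent(id: ID!): Event
}

type Mutation {
  createEvent(name: String!, when: String, where: String): Event
}

type Subscription {
  subscribeToNewEvent: Event
    @aws_subscribe(mutations: ["createEvent"])
}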

Now the power of the GraphQL open standard query language is being brought to you in a rich managed service with AWS AppSync.  With AppSync, developers can simplify the retrieval and manipulation of data across multiple data sources, allowing them to quickly prototype, build, and create robust, collaborative, multi-user applications. AppSync keeps data updated when devices are connected, but enables developers to build solutions that work offline by caching data locally and synchronizing local data when connections become available.

Let’s discuss some key concepts of AWS AppSync and how the service works.

AppSync Concepts

  • AWS AppSync Client: service client that defines operations, wraps authorization details of requests, and manages offline logic.
  • Data Source: the data storage system or a trigger housing data
  • Identity: a set of credentials with permissions and identification context provided with requests to GraphQL proxy
  • GraphQL Proxy: the GraphQL engine component for processing and mapping requests, handling conflict resolution, and managing Fine Grained Access Control
  • Operation: one of three GraphQL operations supported in AppSync
    • Query: a read-only fetch call to the data
    • Mutation: a write of the data followed by a fetch.
    • Subscription: long-lived connections that receive data in response to events.
  • Action: a notification to connected subscribers from a GraphQL subscription.
  • Resolver: a function, using request and response mapping templates, that converts and executes a request payload against a data source (see the example after this list)
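
To make the resolver concept concrete, here is an illustrative request mapping template for a hypothetical DynamoDB-backed resolver; GetItem and the $util/$ctx helpers are standard AppSync template syntax, while the id argument and key name are made up for this sketch:

{
    "version": "2017-02-28",
    "operation": "GetItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.args.id)
    }
}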

How It Works

A schema is created to define the types and capabilities of the desired GraphQL API and is tied to a Resolver function.  The schema can be created to mirror existing data sources, or AWS AppSync can create tables automatically based on the schema definition. Developers can also use GraphQL features for data discovery without having knowledge of the backend data sources. After a schema definition is established, an AWS AppSync client can be configured with an operation request, like a Query operation. The client submits the operation request to the GraphQL Proxy along with an identity context and credentials. The GraphQL Proxy passes this request to the Resolver, which maps and executes the request payload against pre-configured AWS data services like an Amazon DynamoDB table, an AWS Lambda function, or a search capability using Amazon Elasticsearch Service. The Resolver executes calls to one or all of these services within a single network call, minimizing CPU cycles and bandwidth needs, and returns the response to the client. Additionally, the client application can change data requirements in code on demand, and the AppSync GraphQL API will dynamically map requests for data accordingly, allowing prototyping and faster development.

In order to take a quick peek at the service, I’ll go to the AWS AppSync console. I’ll click the Create API button to get started.

 

When the Create new API screen opens, I’ll give my new API a name, TarasTestApp, and since I am just exploring the new service, I will select the Sample schema option.  You may notice from the informational dialog box on the screen that in using the sample schema, AWS AppSync will automatically create the DynamoDB tables and the IAM roles for me. It will also deploy the TarasTestApp API on my behalf.  After a review of the sample schema provided by the console, I’ll click the Create button to create my test API.

After the TarasTestApp API has been created and the associated AWS resources provisioned on my behalf, I can make updates to the schema, data source, or connect my data source(s) to a resolver. I can also integrate my GraphQL API into an iOS, Android, Web, or React Native application by cloning the sample repo from GitHub and downloading the accompanying GraphQL schema.  These application samples are great to help get you started and they are pre-configured to function in offline scenarios.

If I select the Schema menu option on the console, I can update and view the TarasTestApp GraphQL API schema.


Additionally, if I select the Data Sources menu option in the console, I can see the existing data sources.  Within this screen, I can update, delete, or add data sources if I so desire.

Next, I will select the Query menu option which takes me to the console tool for writing and testing queries. Since I chose the sample schema and the AWS AppSync service did most of the heavy lifting for me, I’ll try a query against my new GraphQL API.

I’ll use a mutation to add data for the event type in my schema. Since this is a mutation and it first writes data and then does a read of the data, I want the query to return values for name and where.
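
Such a mutation looks something like this sketch; the createEvent field and its arguments follow the console’s sample schema, so the names may differ in your own API:

mutation AddEvent {
  createEvent(
    name: "re:Invent Keynote"
    when: "November 28, 2017"
    where: "Las Vegas, NV"
  ) {
    id
    name
    where
  }
}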

If I go to the DynamoDB table created for the event type in the schema, I will see that the values from my query have been successfully written into the table. Now that was a pretty simple task to write and retrieve data based on a GraphQL API schema from a data source, don’t you think?


 Summary

AWS AppSync is currently in Public Preview and you can sign up today. It supports development for iOS, Android, and JavaScript applications. You can take advantage of this managed GraphQL service by going to the AWS AppSync console, or you can learn more by reading the tutorial in the AWS documentation for the service or checking out our AWS AppSync Developer Guide.

Tara

 

Authorizing Access Through a Proxy Resource to Amazon API Gateway and AWS Lambda Using Amazon Cognito User Pools

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/authorizing-access-through-a-proxy-resource-to-amazon-api-gateway-and-aws-lambda-using-amazon-cognito-user-pools/


Ed Lima, Solutions Architect

Want to create your own user directory that can scale to hundreds of millions of users? Amazon Cognito user pools are fully managed so that you don’t have to worry about the heavy lifting associated with building, securing, and scaling authentication to your apps.

The AWS Mobile blog post Integrating Amazon Cognito User Pools with API Gateway back in May explained how to integrate user pools with Amazon API Gateway using an AWS Lambda custom authorizer. Since then, we’ve released a new feature where you can directly configure a Cognito user pool authorizer to authenticate your API calls; more recently, we released a new proxy resource feature. In this post, I show how to use these great new features together to secure access to an API backed by a Lambda proxy resource.

Walkthrough

In this post, I assume that you have some basic knowledge about the services involved. If not, feel free to review our documentation and tutorials.

Start by creating a user pool called “myApiUsers”, and enable verifications with optional MFA access for extra security:

cognitouserpoolsauth_1.png

Be mindful that if you are using a similar solution for production workloads, you will need to request an SMS spending threshold limit increase from Amazon SNS in order to send SMS messages to users for phone number verification or for MFA. For the purposes of this article, since we are only testing our API authentication with a single user, the default limit will suffice.

Now, create an app in your user pool, making sure to clear Generate client secret:

cognitouserpoolsauth_2.png

Using the client ID of your newly created app, add a user, “jdoe”, with the AWS CLI. The user needs a valid email address and phone number to receive MFA codes:

aws cognito-idp sign-up \
--client-id 12ioh8c17q3stmndpXXXXXXXX \
--username jdoe \
--password '<YourPassword>' \
--region us-east-1 \
--user-attributes '[{"Name":"given_name","Value":"John"},{"Name":"family_name","Value":"Doe"},{"Name":"email","Value":"[email protected]"},{"Name":"gender","Value":"Male"},{"Name":"phone_number","Value":"+61XXXXXXXXXX"}]'  

In the Cognito User Pools console, under Users, select the new user and choose Confirm User and Enable MFA:

cognitouserpoolsauth_3.png

Your Cognito user is now ready and available to connect.

Next, create a Node.js Lambda function called LambdaForSimpleProxy with a basic execution role. Here’s the code:

'use strict';
console.log('Loading CUP2APIGW2Lambda Function');

exports.handler = function(event, context) {
    var responseCode = 200;
    console.log("request: " + JSON.stringify(event));
    
    var responseBody = {
        message: "Hello, " + event.requestContext.authorizer.claims.given_name + " " + event.requestContext.authorizer.claims.family_name +"!" + " You are authenticated to your API using Cognito user pools!",
        method: "This is an authorized "+ event.httpMethod + " to Lambda from your API using a proxy resource.",
        body: event.body
    };

    //Response including CORS required header
    var response = {
        statusCode: responseCode,
        headers: {
            "Access-Control-Allow-Origin" : "*"
        },
        body: JSON.stringify(responseBody)
    };

    console.log("response: " + JSON.stringify(response))
    context.succeed(response);
};

For the last piece of the back-end puzzle, create a new API called CUP2Lambda from the Amazon API Gateway console. Under Authorizers, choose Create, Cognito User Pool Authorizer with the following settings:

cognitouserpoolsauth_4.png

Create an ANY method under the root of the API as follows:

cognitouserpoolsauth_5.png

After that, choose Save, OK to give API Gateway permissions to invoke the Lambda function. It’s time to configure the authorization settings for your ANY method. Under Method Request, enter the Cognito user pool as the authorization for your API:

cognitouserpoolsauth_6.png

Finally, choose Actions, Enable CORS. This creates an OPTIONS method in your API:

cognitouserpoolsauth_7.png

Now it’s time to deploy the API to a stage (such as prod) and generate a JavaScript SDK from the SDK Generation tab. You can use other methods to connect to your API; however, in this article I’ll show how to use the API Gateway SDK. Because we are using an ANY method, the SDK does not have calls for specific methods other than the OPTIONS method created by Enable CORS, so you have to add a couple of extra functions to the apigClient.js file so that your SDK can perform GET and POST operations to your API:


    apigClient.rootGet = function (params, body, additionalParams) {
        if(additionalParams === undefined) { additionalParams = {}; }
        
        apiGateway.core.utils.assertParametersDefined(params, [], ['body']);       

        var rootGetRequest = {
            verb: 'get'.toUpperCase(),
            path: pathComponent + uritemplate('/').expand(apiGateway.core.utils.parseParametersToObject(params, [])),
            headers: apiGateway.core.utils.parseParametersToObject(params, []),
            queryParams: apiGateway.core.utils.parseParametersToObject(params, []),
            body: body
        };
        

        return apiGatewayClient.makeRequest(rootGetRequest, authType, additionalParams, config.apiKey);
    };

    apigClient.rootPost = function (params, body, additionalParams) {
        if(additionalParams === undefined) { additionalParams = {}; }
     
        apiGateway.core.utils.assertParametersDefined(params, ['body'], ['body']);
       
        var rootPostRequest = {
            verb: 'post'.toUpperCase(),
            path: pathComponent + uritemplate('/').expand(apiGateway.core.utils.parseParametersToObject(params, [])),
            headers: apiGateway.core.utils.parseParametersToObject(params, []),
            queryParams: apiGateway.core.utils.parseParametersToObject(params, []),
            body: body
        };
        
        return apiGatewayClient.makeRequest(rootPostRequest, authType, additionalParams, config.apiKey);

    };

You can now use a little front end web page to authenticate users and test authorized calls to your API. In order for it to work, you need to add some external libraries and dependencies, including the API Gateway SDK you just generated. You can find more details in the Cognito and API Gateway SDK documentation guides.

With the dependencies in place, you can use the following JavaScript code to authenticate your Cognito user pool user and connect to your API in order to perform authorized calls (replace your own user pool Id and client ID details accordingly):

<script type="text/javascript">
 //Configure the AWS client with the Cognito role and a blank identity pool to get initial credentials

  AWS.config.update({
    region: 'us-east-1',
    credentials: new AWS.CognitoIdentityCredentials({
      IdentityPoolId: ''
    })
  });

  AWSCognito.config.region = 'us-east-1';
  AWSCognito.config.update({accessKeyId: 'null', secretAccessKey: 'null'});
  var token = "";
 
  //Authenticate user with MFA

  document.getElementById("buttonAuth").addEventListener("click", function(){  
    var authenticationData = {
      Username : document.getElementById('username').value,
      Password : document.getElementById('password').value,
      };

    var showGetPut = document.getElementById('afterLogin');
    var hideLogin = document.getElementById('login');

    var authenticationDetails = new AWSCognito.CognitoIdentityServiceProvider.AuthenticationDetails(authenticationData);

   // Replace with your user pool details

    var poolData = { 
        UserPoolId : 'us-east-1_XXXXXXXXX', 
        ClientId : '12ioh8c17q3stmndpXXXXXXXX', 
        Paranoia : 7
    };

    var userPool = new AWSCognito.CognitoIdentityServiceProvider.CognitoUserPool(poolData);

    var userData = {
        Username : document.getElementById('username').value,
        Pool : userPool
    };

    var cognitoUser = new AWSCognito.CognitoIdentityServiceProvider.CognitoUser(userData);
    cognitoUser.authenticateUser(authenticationDetails, {
      onSuccess: function (result) {
        token = result.getIdToken().getJwtToken(); // CUP Authorizer = ID Token
        console.log('ID Token: ' + result.getIdToken().getJwtToken()); // Show ID Token in the console
        var cognitoGetUser = userPool.getCurrentUser();
        if (cognitoGetUser != null) {
          cognitoGetUser.getSession(function(err, result) {
            if (result) {
              console.log ("User Successfuly Authenticated!");  
            }
          });
        }

        //Hide Login form after successful authentication
        showGetPut.style.display = 'block';
        hideLogin.style.display = 'none';
      },
    onFailure: function(err) {
        alert(err);
    },
    mfaRequired: function(codeDeliveryDetails) {
            var verificationCode = prompt('Please input a verification code.' ,'');
            cognitoUser.sendMFACode(verificationCode, this);
        }
    });
  });

//Send a GET request to the API

document.getElementById("buttonGet").addEventListener("click", function(){
  var apigClient = apigClientFactory.newClient();
  var additionalParams = {
      headers: {
        Authorization: token
      }
    };

  apigClient.rootGet({},{},additionalParams)
      .then(function(response) {
        console.log(JSON.stringify(response));
        document.getElementById("output").innerHTML = ('<pre align="left"><code>Response: '+JSON.stringify(response.data, null, 2)+'</code></pre>');
      }).catch(function (response) {
        document.getElementById('output').innerHTML = ('<pre align="left"><code>Error: '+JSON.stringify(response, null, 2)+'</code></pre>');
        console.log(response);
    });
});

//Send a POST request to the API

document.getElementById("buttonPost").addEventListener("click", function(){
  var apigClient = apigClientFactory.newClient();
  var additionalParams = {
      headers: {
        Authorization: token
      }
    };
    
 var body = {
        "message": "Sample POST payload"
  };

  apigClient.rootPost({},body,additionalParams)
      .then(function(response) {
        console.log(JSON.stringify(response));
        document.getElementById("output").innerHTML = ('<pre align="left"><code>Response: '+JSON.stringify(response.data, null, 2)+'</code></pre>');
      }).catch(function (response) {
        document.getElementById('output').innerHTML = ('<pre align="left"><code>Error: '+JSON.stringify(response, null, 2)+'</code></pre>');
        console.log(response);
    });
});
</script>

As far as the front end is concerned you can use some simple HTML code to test, such as the following snippet:

<body>
<div id="container" class="container">
    <br/>
    <img src="http://awsmedia.s3.amazonaws.com/AWS_Logo_PoweredBy_127px.png">
    <h1>Cognito User Pools and API Gateway</h1>
    <form name="myform">
        <ul>
          <li class="fields">
            <div id="login">
            <label>User Name: </label>
            <input id="username" size="60" class="req" type="text"/>
            <label>Password: </label>
            <input id="password" size="60" class="req" type="password"/>
            <button class="btn" type="button" id='buttonAuth' title="Log in with your username and password">Log In</button>
            <br />
            </div>
            <div id="afterLogin" style="display:none;"> 
            <br />
            <button class="btn" type="button" id='buttonPost'>POST</button>
            <button class="btn" type="button" id='buttonGet' >GET</button>
            <br />
          </li>
        </ul>
      </form>
  <br/>
    <div id="output"></div>
  <br/>         
  </div>        
  <br/>
  </div>
</body>

After adding some extra CSS styling of your choice (for example, adding "list-style: none" to remove the list bullet points), the front end is ready. You can test it by using a local web server on your computer or a static website on Amazon S3.
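
If you choose the S3 option, a minimal sketch with the AWS CLI looks like the following; the bucket name is a placeholder, and you would run this from the folder containing your page:

aws s3 mb s3://my-cognito-api-test
aws s3 website s3://my-cognito-api-test --index-document index.html
aws s3 sync . s3://my-cognito-api-test --acl public-read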

Enter the user name and password details for John Doe and choose Log In:

cognitouserpoolsauth_8.png

An MFA code is then sent to the user and can be validated accordingly:

cognitouserpoolsauth_9.png

After authentication, you can see the ID token generated by Cognito for further access testing:

cognitouserpoolsauth_10.png

If you go back to the API Gateway console and test your Cognito user pool authorizer with the same token, you get the authenticated user claims accordingly:

cognitouserpoolsauth_11.png

In your front end, you can now perform authenticated GET calls to your API by choosing GET.

cognitouserpoolsauth_12.png

Or you can perform authenticated POST calls to your API by choosing POST.

cognitouserpoolsauth_13.png

The calls reach your Lambda proxy and return a valid response accordingly. You can also test from the command line using cURL, by sending the user pool ID token that you retrieved from the developer console earlier, in the “Authorization” header:

cognitouserpoolsauth_14.png

It’s possible to improve this solution by integrating an Amazon DynamoDB table, for instance. You could detect the method request on event.httpMethod in the Lambda function and issue a GetItem call to a table for a GET request or a PutItem call to a table for a POST request. There are lots of possibilities for this kind of proxy resource integration.
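
As a hedged sketch of that idea (not a definitive implementation), the handler below branches on event.httpMethod and calls DynamoDB through the DocumentClient; the table name myApiData and the id key are hypothetical:

'use strict';
var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();

// Helper to build the proxy integration response shape
function respond(context, statusCode, body) {
    context.succeed({
        statusCode: statusCode,
        headers: { "Access-Control-Allow-Origin": "*" },
        body: JSON.stringify(body)
    });
}

exports.handler = function(event, context) {
    if (event.httpMethod === 'GET') {
        // Read a single item, keyed by an "id" query string parameter
        docClient.get({
            TableName: 'myApiData',
            Key: { id: (event.queryStringParameters || {}).id }
        }, function(err, data) {
            if (err) return respond(context, 500, err);
            respond(context, 200, data.Item);
        });
    } else if (event.httpMethod === 'POST') {
        // Store the POST payload as an item
        docClient.put({
            TableName: 'myApiData',
            Item: JSON.parse(event.body)
        }, function(err) {
            if (err) return respond(context, 500, err);
            respond(context, 200, { message: "Item saved" });
        });
    } else {
        respond(context, 405, { message: "Method not allowed" });
    }
};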

Summary

The Cognito user pools integration with API Gateway provides a new way to secure your API workloads, and the new proxy resource for Lambda allows you to perform any business logic or transformations to your API calls from Lambda itself instead of using body mapping templates. These new features provide very powerful options to secure and handle your API logic.

I hope this post helps with your API workloads. If you have questions or suggestions, please comment below.

Binary Support for API Integrations with Amazon API Gateway

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/binary-support-for-api-integrations-with-amazon-api-gateway/

A year ago, the Microservices without the Servers post showed how Lambda can be used for creating image thumbnails. This required the client to Base64 encode the binary image file before calling the image conversion API as well as Base64 decode the response before it could be rendered. With the recent Amazon API Gateway launch of Binary Support for API Integrations, you can now specify media types that you want API Gateway to treat as binary and send and receive binary data through API endpoints hosted on Amazon API Gateway.

After this feature is configured, you can specify if you would like API Gateway to either pass the Integration Request and Response bodies through, convert them to text (Base64 encoding), or convert them to binary (Base64 decoding). These options are available for HTTP, AWS Service, and HTTP Proxy integrations. In the case of Lambda Function and Lambda Function Proxy Integrations, which currently only support JSON, the request body is always converted to JSON.

In this post, I show how you can use the new binary support in API Gateway together with Lambda to create a thumbnail service, which you can use to include a binary image file in a POST request and get a thumbnail version of that same image back.

Walkthrough

To get started, log in to the AWS Management Console to set up a Lambda integration, using the image-processing-service blueprint.

Create the Lambda function

In the Lambda console, choose Create a Lambda Function.

In the blueprint filter step, for Select runtime, type in ‘image’ and then choose image-processing-service.

Do not set up a trigger. Choose Next.

In the Configure function step, specify the function name, such as ‘thumbnail’.

In the Lambda function handler and role step, for Role, choose Create new role from template(s), and specify the role name (e.g., ‘myMicroserviceRole’). Finally, choose Next. For more details, see AWS Lambda Permissions Model.

Review your Lambda function configuration and choose Create Function.

You have now successfully created the Lambda function that will create a thumbnail.

Create an API and POST method

In this section, you set up an API Gateway thumbnail API to expose a publicly accessible RESTful endpoint.

In the API Gateway console, choose Create API.

For API name, enter ‘Thumbnail’, add a description, and choose Create API.

In the created API, choose Resources, Actions, and Create Method.

To create the method, choose POST and select the checkmark.

To set up the POST method, for Integration type, select Lambda Function, select the appropriate Lambda region, and enter ‘thumbnail’ for Lambda Function. Choose Save.

In the Add Permission to Lambda Function dialog box, choose OK to enable API Gateway to invoke the ‘thumbnail’ Lambda function.

Set up the integration

Now, you are ready to set up the integration. In the main page, open Integration Request.

On the Integration Request page, expand Body Mapping Templates.

For Request body passthrough, choose When there are no templates defined (recommended). For Content-Type, enter "image/png".

Choose Add mapping template and add the following template. The thumbnail Lambda function requires that you pass an operation to execute, in this case "thumbnail", and the image payload "base64Image" you are passing in, which is "$input.body". Review the following JSON and choose Save.
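
A template along these lines matches that contract:

{
    "operation": "thumbnail",
    "base64Image": "$input.body"
}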

Specify which media types need to be handled as binary. Choose [API name], Binary Support.

Choose Edit, specify the media type (such as "image/png") to be handled as binary, and then choose Save.

Deployment

Now that the API is configured, you need to deploy it. On the thumbnail Resources page, choose Actions, Deploy API.

For Deployment stage, select [New Stage], specify a stage name, and then choose Deploy.

A stage has been created for you; you receive the Invoke URL value to be used for your thumbnail API.

Testing

Now, you are ready to test the newly created API. Download your favorite .png image (such as apigateway.png), and issue the following curl command. Update the .png image file name and the Invoke URL value accordingly.

$ curl --request POST -H "Accept: image/png" -H "Content-Type: image/png" --data-binary "@apigateway.png" https://XXXXX.execute-api.us-east-1.amazonaws.com/prod > apigateway-thumb.png

You should now be able to open the created images in your favorite image viewer to confirm that resizing has occurred.

Summary

This is just one example of how you can leverage the new Binary Support in API Gateway. For more examples, see the API Gateway Payload Encodings topic in the Amazon API Gateway Developer Guide.

If you have questions or suggestions, please comment below.

Powering Mobile Backend Services with AWS Lambda and Amazon API Gateway

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/powering-mobile-backend-services-with-aws-lambda-and-amazon-api-gateway/

Daniel Austin
Solutions Architect

Asif Khan
Solutions Architect

Have you ever wanted to create a mobile REST API quickly and easily to make database calls and manipulate data sources? The Node.js and Amazon DynamoDB tutorial shows how to perform CRUD operations (Create, Read, Update, and Delete) easily on DynamoDB tables using Node.js.

In this post, I extend that concept by adding a REST API that calls an AWS Lambda function from Amazon API Gateway. The API allows you to perform the same operations on DynamoDB, from any HTTP-enabled device, such as a browser or mobile phone. The client device doesn’t need to load any libraries, and with serverless architecture and API Gateway, you don’t need to maintain any servers at all!

Walkthrough

In this post, I show you how to write a Lambda function so that it can handle all of the API calls in a single code function, and then add a RESTful API on top of it. API Gateway tells the Lambda function which API resource was called.

The problem to solve: how to use API Gateway, AWS Lambda, and DynamoDB to simplify DynamoDB access? Our approach involves using a single Lambda function to provide a CRUD façade on DynamoDB. This required solving two additional problems:

  1. Sending the data from API Gateway about which API method was called, along with POST information about the DynamoDB operations. This is solved by using a generic mapping template for each API call, sending all HTTP data to the Lambda function in a standard JSON format, including the path of the API call, i.e., ‘/movies/add-movie’.

  2. Providing a generic means in Node.js to use multiple function calls and properly use callbacks to send the function results back to the API, again in a standard JSON format. This required writing a generic callback mechanism (a very simple one) that is invoked by each function and gathers the data for the response (see the sketch after this list).
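
Here is a hedged sketch of that dispatch-plus-callback pattern, not the sample app’s actual code; the Movies table and its attributes are hypothetical:

'use strict';
var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();

exports.handler = function(event, context) {
    // The shared callback: every DynamoDB call funnels its result here
    var done = function(err, data) {
        if (err) context.fail(JSON.stringify({ error: err.message }));
        else context.succeed(data);
    };

    // Dispatch on the path supplied by the mapping template
    switch (event.resourcePath) {
        case '/movies':
            docClient.scan({ TableName: 'Movies' }, done);
            break;
        case '/movies/add-movie':
            docClient.put({ TableName: 'Movies', Item: event.body }, done);
            break;
        default:
            done(new Error('Unknown resource: ' + event.resourcePath));
    }
};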

This is a very cool and easy way to implement basic DynamoDB functions to HTTP(S) calls from API Gateway. It works from any browser or mobile device that understands HTTP.

Mobile developers can write backend code in Java, Node.js, or Python and deploy on Lambda.

In this post, I continue the demonstration with a sample mobile movies database backend, written in Node.js, using DynamoDB. The API is hosted on API Gateway.

Optionally, you can use AWS Mobile Hub to develop and test the mobile client app.

The steps to deploy a mobile backend in Lambda are:

  1. Set up IAM users and roles to allow access to Lambda and DynamoDB.
  2. Download the sample application and edit it to include your configuration.
  3. Create a table in DynamoDB using the console or the AWS CLI.
  4. Create a new Lambda function and upload the sample app.
  5. Create endpoints in API Gateway.
  6. Test the API and Lambda function.

Set up IAM roles to allow access to Lambda and DynamoDB

To set up your API, you need an IAM user and role that has permissions to access DynamoDB and Lambda.

In the IAM console, choose Roles, Create role. Choose AWS Lambda from the list of service roles, then choose AmazonDynamoDBFullAccess and attach another policy, AWSLambdaFullAccess. You need to add this role to an IAM user: you can create a new user for this role, or use an existing one.

Download the sample application and edit to include your configuration

Now download the sample application and edit its configuration file.

The archive can be downloaded from GitHub: https://github.com/awslabs/aws-serverless-crud-sample

git clone https://github.com/awslabs/aws-serverless-crud-sample

After you download the archive, unzip it to an easily found location and look for the file app_config.json. This file contains setup information for your Lambda function. Edit the file to include your access key ID and secret access key. If you created a new user in step 1, use those credentials. You also need to add your AWS region to the file; this is the region where you will create the DynamoDB table.

Create a table in DynamoDB using the console or the AWS CLI.

To create a table in DynamoDB, use the instructions in the Node.js and DynamoDB tutorial, in the Amazon DynamoDB Getting Started Guide. Next, run the file createMoviesTable.js from the code you downloaded in the previous step. You could also use the AWS CLI with this input:

aws dynamodb create-table --cli-input-json file://movies-table.json --region us-west-2 --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5

The file movies-table.json is in the code archive you downloaded earlier. If you use the CLI, the user must have sufficient permissions.

IMPORTANT: The table must be created before completing the rest of the walkthrough.

Create a new Lambda function and upload the sample app.

It can be a little tricky creating the archive for the Lambda function. Make sure you are not zipping the folder, but its contents. This is important; it should look like the following:

This creates a file called "Archive.zip", which is the file to be uploaded to Lambda.

In the Lambda console, choose Create a Lambda function and skip the Blueprints and Configure triggers sections. In the Configure function section, for Name, enter ‘movies-db’. For Runtime, choose ‘Node.js 4.3’. For Code entry type, choose ‘Upload a zip file’. Choose Upload and select the archive file that you created in the previous step. For Handler, choose ‘movies-dynamodb.handler’, which is the name of the JavaScript function inside the archive that will be called via Lambda from API Gateway. For Role, choose Choose an existing role and select the role that you created in the first step.

You can leave the other options unchanged and then review and create your Lambda function. You can test the function using the following bit of JSON (this mimics the data that will be sent to the Lambda function from API Gateway):

{
  "method": "POST",
  "body" : { "title": "Godzilla vs. Dynamo", "year": "2016", "info": "New from Acme Films, starring John Smith."},
  "headers": {
      },
  "queryParams": {
      },
  "pathParams": {
      },
  "resourcePath": "/add-movie"
}

Create endpoints in API Gateway

Now, you create five API methods in API Gateway. First, navigate to the API Gateway console and choose Create API, New API. Give the API a name, such as ‘MoviesDP-API’.

In API Gateway, you first create resources and then the methods associated with those resources. The steps for each API call are basically the same:

  1. Create the resource (/movies or /movies/add-movie).
  2. Create a method for the resource: GET for /movies, POST for all others.
  3. Choose Integration request, Lambda and select the movies-db Lambda function created earlier. All the API calls use the same Lambda function.
  4. Under Integration request, choose Body mapping templates, create a new template with type application/json, and copy the template file shown below into the form input.
  5. Choose Save.

Use these steps for the following API resources and methods:

  • /movies – lists all movies in the DynamoDB table
  • /movies/add-movie – add an item to DynamoDB
  • /movies/delete-movie – deletes a movie from DynamoDB
  • /movies/findbytitleandyear – finds a movie with a specific title and year
  • /movies/update-movie – modifies an existing movie item in DynamoDB

This body mapping template is a standard JSON template for passing information from API Gateway to Lambda. It provides the calling Lambda function with all of the HTTP input data – including any path variables, query strings, and most importantly for this purpose, the resourcePath variable, which contains the information from API Gateway about which function was called.

Here’s the template:

{
  "method": "$context.httpMethod",
  "body" : $input.json('$'),
  "headers": {
    #foreach($param in $input.params().header.keySet())
    "$param": "$util.escapeJavaScript($input.params().header.get($param))" #if($foreach.hasNext),#end
    #end
  },
  "queryParams": {
    #foreach($param in $input.params().querystring.keySet())
    "$param": "$util.escapeJavaScript($input.params().querystring.get($param))" #if($foreach.hasNext),#end
    #end
  },
  "pathParams": {
    #foreach($param in $input.params().path.keySet())
    "$param": "$util.escapeJavaScript($input.params().path.get($param))" #if($foreach.hasNext),#end
    #end
  },
  "resourcePath": "$context.resourcePath"
}

Notice the last line where the API Gateway variable $context.resourcePath is sent to the Lambda function as the JSON value of a field called (appropriately enough) resourcePath. This value is used by the Lambda function to perform the required action on the DynamoDB table.

(I originally found this template online, and modified it to add variables like resourcePath. Thanks to Kenn Brodhagen!)

As you create each API method, copy the requestTemplate.vel file from the original code archive and paste it into the Body mapping template field, using application/json as the type. Do this for each API call (using the same file). The file is the same one shown above, but it’s easier to copy the file than cut and paste from a web page!

After the five API calls are created, you are almost done. You need to test your API, and then deploy it before it can be used from the public web.

Test your API and Lambda function

To test the API, add a movie to the DB. The API Gateway calls expect a JSON payload that is sent via POST to your Lambda function. Here’s an example; it’s the same one used above to test the Lambda function:

{ "title": "Love and Friendship", "year": "2016", "info": "New from Amazon Studios, starring Kate Beckinsale."}

To add this movie to your DB, test the /movies/add-movie method, as shown here:

Check the logs on the test page of your Lambda function and in API Gateway.

[picture10.png]

Conclusion

In this post, I demonstrated a quick way to get started on AWS Lambda for your mobile backend. You uploaded a movie database app which performed CRUD operations on a movies DB table in DynamoDB. The API was hosted on API Gateway for scale and manageability. With the above deployment, you can focus on building your API and mobile clients with serverless architecture. You can reuse the mapping template in future API Gateway projects.

Interested in reading more? See Access Resources in a VPC from Your Lambda Functions and AWS Mobile Hub – Build, Test, and Monitor Mobile Applications.

If you have questions or suggestions, please comment below.

Yemeksepeti: Our Shift to Serverless Architecture

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/yemeksepeti-our-shift-to-serverless-architecture/

AWS Community Hero Onur Salk wrote the guest post below in order to tell you how he helped his employer to move to a serverless architecture.


Jeff;


I’m Onur Salk, AWS Community Hero, AWS Certified Solutions Architect – Professional, and organizer of the AWS user group in Turkey. As a Hero, I like to share my AWS experience and knowledge with the community on my personal blog and through meetups with the community. Today I want to share the story behind Yemeksepeti and our shift to serverless architecture.

The Story Behind Yemeksepeti
Yemeksepeti is the biggest online ordering company in Turkey. It lets users place food orders from affiliated network restaurants without charging any extra fees. At Yemeksepeti, we needed to set up a globally distributed service that is scalable, high-performing, and cost-effective. Our belief is that by designing a serverless architecture, we won’t have to worry about managing our servers and can remove a lot of operational burdens from our team. This means we can focus on running our code at scale.

At Yemeksepeti.com, we developed a real-time discount system called Joker about four years ago. The purpose of this system is to suggest discounts to customers that they normally cannot find for restaurants. The original Joker platform was developed in .NET and then integrated with the website and mobile devices using its REST API. We were asked to open the platform’s API to our sister companies operating in 34 countries, so that they can also provide real-time Joker discounts to their customers.

Initially, we thought we would share our code and let them integrate their applications. However, most of the other countries were using different technology stacks (programming languages, databases, and so on). Although using our code might accelerate their development at first, they would have to maintain an unfamiliar system. We needed to find an integration method that was easier to implement and cheaper to maintain.

Our Requirements
This was a global project, and these were our five focus areas:

  • Ease of management
  • High availability
  • Scalability
  • Use in several regions
  • Cost advantage

We evaluated these focus areas against several different processing models and came up with the following matrix:

IaaS: We could spin up some EC2 instances running IIS on top of Microsoft Windows Server, connected to an RDS DB instance.

  • Ease of management: No. We need to take care of our servers.
  • High availability: Yes. We can distribute our servers to different AZs.
  • Scalability: Yes. We can use Auto Scaling.
  • Use in several regions: Yes. We can use AMIs and copy them between regions.
  • Cost advantage: Partially. There will be license fees and costs for running EC2 instances.

PaaS: We could use AWS Elastic Beanstalk.

  • Ease of management: Partially. We need to take care of our servers.
  • High availability: Yes. We can distribute our servers to different AZs.
  • Scalability: Yes. We can use Auto Scaling.
  • Use in several regions: Yes. We can use environment configurations, AMIs, etc.
  • Cost advantage: Partially. There will be license fees and costs for running EC2 instances.

FaaS: We could use AWS Lambda.

  • Ease of management: Yes. AWS takes care of the services.
  • High availability: Yes. It is already highly available.
  • Scalability: Yes. It performs at any scale.
  • Use in several regions: Yes. We can export/import/upload our configurations easily.
  • Cost advantage: Yes. There are no licenses and we pay only for what we use.

We decided to use FaaS (Functions as a Service). We started our project in the Europe (Ireland) region using the following services:

Architecture
Our architecture looks like this:

Amazon VPC: We use Amazon VPC to launch our resources in our private network.

Amazon API Gateway: During the development phase, we started to develop the service in the Europe (Ireland) region. At that time, AWS Lambda was not available in Europe (Frankfurt). We created two APIs: one for web integration and the other for the admin interface. We used custom authorizers with JSON Web Tokens (JWT) to enable token-based authorization for our APIs. We used mapping templates to pass our variables to our Lambda functions.

In the development phase, there was only a test stage for each API.

During the production phase, AWS Lambda became available in Frankfurt. We decided to move the service there to benefit from low latency access from Turkey. We used the API Gateway Export API feature to export our configuration in Swagger format, and then imported it into Frankfurt. (Before the import, we changed the region definitions in the exported file to eu-central-1.) After that, we created a production stage and used stage variables to parameterize our database definitions of the Amazon RDS instances (like host, username, and so on). We also wanted to use our custom domain name. After we bought an SSL certificate for our domain, we created a custom domain name in the Amazon API Gateway console and created an alias for our CloudFront distribution name (Amazon API Gateway uses Amazon CloudFront in the background). Finally, we created an IAM role to enable Amazon CloudWatch logging for API calls, latency, and more.

Metrics for Get_Joker_offer resource:

AWS Lambda: During the development phase, we used Python to develop our service and created 65 functions for integrating our API methods and scheduled tasks using CloudWatch Events Lambda triggers. Lambda VPC integration became available during the production phase, so we uploaded our functions to the Frankfurt region and integrated them with VPC.

Invocation count of Get_joker_offer Lambda function (The peaks correspond to lunch and dinner times (when people are hungry)):

Amazon RDS: During the development phase, we chose to use Amazon RDS for PostgreSQL. We created a single-AZ RDS instance to test our service. During the production phase, we needed to move our database because we migrated our APIs and functions to Frankfurt. We created a snapshot of our instance and using the Copy snapshot feature of RDS, we successfully moved our database. We launched two instances in our VPC: a multi-AZ instance for production and a single-AZ instance for test purposes. In our API stage variables, we defined the endpoint names of our RDS instances to map the staging to the appropriate instance. We also enabled automated backups for both instances.

Amazon S3: The Joker platform has an admin panel that’s used for managing and reporting Joker offers. To host this administration interface, which is basically a Single Page Application (SPA) with AngularJS, we used the static website hosting feature of Amazon S3. All of the logic and functionality is provided by methods running on Lambda, so we didn’t need a server for the admin interface:

Amazon CloudWatch: We use the service to monitor the usage of our important assets and to get alerts if something goes wrong. We created a custom dashboard to monitor the CPU of our production database, connection count, critical API latencies and function counts and durations.

In our Python code, we log the durations of each inner method in CloudWatch to track performance and find any bottlenecks:

Here’s our CloudWatch dashboard:

Amazon Elasticsearch Service: During the development phase, CloudWatch Logs streaming to Amazon ES became available in the Ireland region. Using this feature, we created a Kibana dashboard to monitor some other metrics from the logs we generate from our code. As soon as Amazon ES is available in the Frankfurt region, we will use it again.

Initial Results
The Joker system is in production now, as a pilot for a small region of a country. As you can see from the following chart, the growth of the number of orders is promising. By leveraging serverless architecture, we didn’t have to install and manage an operating system and dependencies. Using Amazon API Gateway, AWS Lambda, Amazon S3, and Amazon RDS, our architecture runs in a highly available environment. We don’t need to learn and manage any master-slave replication features or third-party tools. As our service gets more requests, AWS Lambda adds more Lambda instances, so it runs at any scale. We were able to copy our service to another region using the features of AWS services, as we did before going into production. Finally, we don’t run any servers, so we benefit from the cost advantage of serverless architecture.

Here is a representation of the number of orders placed through Joker:

What’s Next
We hope this service will spread to all 34 of the sister companies within Delivery Hero Holding. As the service is rolled out globally, we will deploy to other AWS regions. We plan to choose the region nearest to the company. To optimize our costs, we will purchase reserved instances for our RDS instances. Also, as we monitor our inner methods’ durations, we can refactor and optimize our code so that we can decrease our Lambda functions’ execution times.

We believe the future of the cloud is FaaS. We would like to experiment more as other features, services,  and functions become available.

As an AWS Community Hero, I look forward to sharing the Yemeksepeti story with the AWS user group in Turkey. I’d like to help people explore and leverage serverless architecture.

— Onur Salk

 

Redirection in a Serverless API with AWS Lambda and Amazon API Gateway

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/redirection-in-a-serverless-api-with-aws-lambda-and-amazon-api-gateway/

Ronald Widha @ronaldwidha
Partner Solutions Architect

Redirection is neither a success nor an error response. You return redirection when the requested resource resides either temporarily or permanently under a different URI. The client needs to issue subsequent calls to the new location in order to retrieve the requested resource. Even though you typically see 302 and 301 redirects when requesting text/html, these response types also apply to REST JSON endpoints.

Many of you are already familiar with how to return success and error responses. Amazon API Gateway and AWS Lambda help development teams to build scalable web endpoints very quickly. In a previous post (Error Handling Patterns in Amazon API Gateway and AWS Lambda), we discussed several error handling patterns to implement an explicit contract for the types of error responses that an API can produce and how they translate into HTTP status codes. However, so far this blog has not discussed how to handle redirection.

This post shows the recommended patterns for handling redirection in your serverless API built on API Gateway and Lambda.

Routing Lambda errors to API Gateway HTTP 30x responses

In HTTP, there are several types of redirection codes. You return these status codes as part of an HTTP response to the client.

  • 300 (Multiple types available): The requested resource is available in multiple representations, each with its own specific location (not commonly used).
  • 301 (Moved permanently): The requested resource has been moved to a new permanent URI.
  • 302 (Found): The requested resource is temporarily available under a different URI.

For more information about HTTP server status codes, see RFC2616 section 10.5 on the W3C website.

In this specific scenario, you would like your API Gateway method to return the following HTTP response. Note that there are two key parameters to return:

  1. The status code, i.e., 301 or 302
  2. The new URI of the resource

HTTP/1.1 302 Found
Content-Type: text/html
Content-Length: 503
Connection: keep-alive
Date: Fri, 08 Jul 2016 16:16:44 GMT
Location: http://www.amazon.com/

In API Gateway, AWS recommends that you model the various HTTP response types that your API method may produce, and define a mapping from the return value of your Lambda function to these HTTP responses. To do that, you need to do two things:

  1. API Gateway can only map responses to different method responses on error. So even though redirection isn’t strictly a failure, you still throw an exception from Lambda to communicate back to API Gateway when it needs to issue a 30x response.
  2. Unlike HTTP 200 Success or any of the HTTP Error status codes, a redirection requires two values: the status code and the location. API Gateway does not support VTL for JSON serialization in the header mappings; thus, in this case, you need to take advantage of how the Lambda runtime handles exceptions to pass these values in two separate fields back to API Gateway.

Node.JS: Using the exception name to store the redirection URI

API Gateway > Lambda high level diagram

The mapping from a Lambda function error to an API Gateway method response is defined by an integration response, which defines a selection pattern used to match the Lambda function errorMessage and routes it to an associated method response. In this example, you use the prefix-based error handling pattern:

  • Integration Response, Lambda Error Regex: ^HandlerDemo.ResponseFound.*
  • Method Response, HTTP Status: 302

Because you don’t have access to VTL in the API Gateway header mappings, retrieve your redirection URI from errorType.

  • Integration Response, Header Mappings: Location: integration.response.body.errorType

In the Lambda function Node.JS runtime, errorType can be assigned any value. In this case, use it to store the redirection URI. In handler.js, you have the following:

// Returns 302 or 301
var err = new Error("HandlerDemo.ResponseFound Redirection: Resource found elsewhere");
err.name = "http://a-different-uri";
context.done(err, {});

The same technique applies to 301 permanent redirects, except that you change the API Gateway regex above to detect HandlerDemo.MovedPermanently.

Node.JS: Handling other response types

In order to keep it consistent, you can leverage the same pattern for other error codes. You can leverage errorType for any end user–visible messages.

//404 error
var err = new Error("HandlerDemo.ResponseNotFound: Resource not found.");
err.name = "Sorry, we can't find what you're looking for. Are you sure the address is correct?";
context.done(err, {});

Just as an example, upon failure, you return a “Fail Whale” HTML webpage to the user.

  • Integration Response, Lambda Error Regex: ^HandlerDemo.ResponseNotFound.*
  • Method Response, HTTP Status: 404
  • Integration Response, Content-Type (Body Mapping Templates): text/html
  • Integration Response, Template:

<html>
   <img src="fail-whale.gif" />
   $input.path('$.errorType')
</html>

HTTP 200 Success can be handled as you normally would in your handler.js:

//200 OK
context.done(null, response);

Java: Using the Java inner exception to store the redirection URI

API Gateway > Lambda high level diagram

Because Java is a strongly typed language, the errorType Lambda function return value contains the Java exception type, which you cannot override (nor should you). Thus, you use a different property to retrieve the redirection URI:

  • Method Response, HTTP Status: 302
  • Integration Response, Header Mappings: Location: integration.response.body.cause.errorMessage

The cause.errorMessage parameter is accessible in the Lambda function Java runtime as an inner exception.

throw new Exception(new ResponseFound("http://www.amazon.com"));

ResponseFound is a class that extends Throwable.

package HandlerDemo;

public class ResponseFound extends Throwable {
  public ResponseFound(String uri) { super(uri); }
}

The full response from Lambda received by API Gateway is the following:

{
  "errorMessage": "HandlerDemo.ResponseFound: http://www.amazon.com",
  "errorType": "java.lang.Exception",
  "stackTrace": [ ... ],
  "cause": {
    "errorMessage": "http://www.amazon.com",
    "errorType": "HandlerDemo.ResponseFound",
    "stackTrace": [ ... ]
  }
}

Because Lambda serializes the inner exception type to the outer errorMessage, you can still use the same Lambda error regex in API Gateway to detect a redirect from errorMessage.

  • Integration Response, Lambda Error Regex: ^HandlerDemo.ResponseFound.*

Conclusion

In this post, I showed how to emit different HTTP responses, including a redirect response, on Amazon API Gateway and AWS Lambda for Node.JS and Java runtimes. I encourage developers to wrap these patterns into a helper class where it handles the inner working of returning a success, error, or redirection response.

One thing you may notice missing is an example of handling redirection in Python. At the time of writing, the Chalice community is considering support for this use case as part of the framework.

If you have questions or suggestions, please comment below.

Error Handling Patterns in Amazon API Gateway and AWS Lambda

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/error-handling-patterns-in-amazon-api-gateway-and-aws-lambda/


Ryan Green @ryangtweets
Software Development Engineer, API Gateway

A common API design practice is to define an explicit contract for the types of error responses that the API can produce. This allows API consumers to implement a robust error-handling mechanism which may include user feedback or automatic retries, improving the usability and reliability of applications consuming your API.

In addition, well-defined input and output contracts, including error outcomes, allows strongly-typed SDK client generation which further improves the application developer experience. Similarly, your API backend should be prepared to handle the various types of errors that may occur and/or surface them to the client via the API response.

This post discusses some recommended patterns and tips for handling error outcomes in your serverless API built on Amazon API Gateway and AWS Lambda.

HTTP status codes

In HTTP, error status codes are generally divided between client (4xx) and server (5xx) errors. It’s up to your API to determine which errors are appropriate for your application. The table shows some common patterns of basic API errors.

  • Data Validation: 400 (Bad Request). The client sends some invalid data in the request, for example, missing or incorrect content in the payload or parameters. Could also represent a generic client error.
  • Authentication/Authorization: 401 (Unauthorized) or 403 (Forbidden). The client is not authenticated (401) or is not authorized to access the requested resource (403).
  • Invalid Resource: 404 (Not Found). The client is attempting to access a resource that doesn’t exist.
  • Throttling: 429 (Too Many Requests). The client is sending more than the allowed number of requests per unit time.
  • Dependency Issues: 502 (Bad Gateway) or 504 (Gateway Timeout). A dependent service is throwing errors (502) or timing out (504).
  • Unhandled Errors: 500 (Internal Server Error) or 503 (Service Unavailable). The service failed in an unexpected way (500), or is failing but is expected to recover (503).

For more information about HTTP server status codes, see RFC2616 section 10.5 on the W3C website.

Routing Lambda function errors to API Gateway HTTP responses

In API Gateway, AWS recommends that you model the various types of HTTP responses that your API method may produce, and define a mapping from the various error outcomes in your backend Lambda implementation to these HTTP responses.

In Lambda, function error messages are always surfaced in the “errorMessage” field in the response. Here’s how it’s populated in the various runtimes:

Node.js (4.3):

exports.handler = function(event, context, callback) {
    callback(new Error("the sky is falling!"));
};

Java:

public class LambdaFunctionHandler implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String input, Context context) {
        throw new RuntimeException("the sky is falling!");
    }
}

Python:

def lambda_handler(event, context):
    raise Exception('the sky is falling!')

Each results in the following Lambda response body:

{
  "errorMessage" : "the sky is falling!",
  …
}

The routing of Lambda function errors to HTTP responses in API Gateway is achieved by pattern matching against this “errorMessage” field in the Lambda response. This allows various function errors to be routed to API responses with an appropriate HTTP status code and response body.

The Lambda function must exit with an error in order for the response pattern to be evaluated – it is not possible to “fake” an error response by simply returning an “errorMessage” field in a successful Lambda response.
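
For example (a sketch to illustrate the point), the following function completes successfully, so API Gateway routes it to the default integration response even though the payload happens to contain an "errorMessage" field:

exports.handler = (event, context, callback) => {
    // A successful invocation: API Gateway does not evaluate error
    // selection patterns, regardless of the shape of the response body.
    callback(null, { "errorMessage": "the sky is falling!" });
};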

Note: Lambda functions failing due to a service error, i.e. before the Lambda function code is executed, are not subject to the API Gateway routing mechanism. These types of errors include internal server errors, Lambda function or account throttling, or failure of Lambda to parse the request body. Generally, these types of errors are returned by API Gateway as a 500 response. AWS recommends using CloudWatch Logs to troubleshoot these types of errors.

API Gateway method response and integration response

In API Gateway, the various HTTP responses supported by your method are represented by method responses. These define an HTTP status code as well as a model schema for the expected shape of the payload for the response.

Model schemas are not required on method responses but they enable support for strongly-typed SDK generation. For example, the generated SDKs can unmarshall your API error responses into appropriate exception types which are thrown from the SDK client.

The mapping from a Lambda function error to an API Gateway method response is defined by an integration response. An integration response defines a selection pattern used to match the Lambda function “errorMessage” and routes it to an associated method response.

Note: API Gateway uses Java pattern-style regexes for response mapping. For more information, see Pattern in the Oracle documentation.

Example:

Lambda function (Node.js 4.3):

exports.handler = (event, context, callback) => {
    callback("the sky is falling!");
};

Lambda response body:

{
  "errorMessage": "the sky is falling!"
}

API Gateway integration response:

Selection pattern : “the sky is falling!”

Method response : 500

API Gateway response:

Status: 500

Response body:

{
  "errorMessage": "the sky is falling!"
}

In this example, API Gateway returns the Lambda response body verbatim, a.k.a. “passthrough”. It is possible to define mapping templates on the integration response to transform the Lambda response body into a different form for the API Gateway method response. This is useful when you want to format or filter the response seen by the API client.

When a Lambda function completes successfully, or if none of the integration response patterns match the error message, API Gateway responds with the default integration response (typically, HTTP status 200). For this reason, it is imperative that you design your integration response patterns so that they capture every possible error outcome from your Lambda function. Because the evaluation order is undefined, it is inadvisable to define a “catch-all” (i.e., “.*”) error pattern, which may be evaluated before the default response.

Common patterns for error handling in API Gateway and Lambda

There are many ways to structure your serverless API to handle error outcomes. The following sections identify two successful patterns to consider when designing your API.

Simple prefix-based

This common pattern uses a prefix in the Lambda error message string to route error types.

You would define a static set of prefixes, and create integration responses to capture each and route them to the appropriate method response. An example mapping might look like the following:

Prefix | Method response status
[BadRequest] | 400
[Forbidden] | 403
[NotFound] | 404
[InternalServerError] | 500

Example:

Lambda function (Node.js):

exports.handler = (event, context, callback) => {
    callback("[BadRequest] Validation error: Missing field 'name'");
};

Lambda output:

{
  "errorMessage": "[BadRequest] Validation error: Missing field 'name'"
}

API Gateway integration response:

Selection pattern: “^\[BadRequest\].*” (note that the square brackets must be escaped, because API Gateway uses Java-style regexes)

Method response: 400

API Gateway response:

Status: 400

Response body:

{
  "errorMessage": "[BadRequest] Validation error: Missing field 'name'"
}

If you don’t want to expose the error prefix to API consumers, you can perform string processing within a mapping template and strip the prefix from the errorMessage field.
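
For example, a response mapping template along these lines could strip the prefix. This is a sketch; it relies on the Java String methods that VTL exposes on string values:

#set ($rawMessage = $input.path('$.errorMessage'))
{
  "errorMessage" : "$rawMessage.replaceFirst('^\[\w+\] ', '')"
}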

Custom error object serialization

Lambda functions can return a custom error object serialized as a JSON string, and fields in this object can be used to route to the appropriate API Gateway method response.

This pattern uses a custom error object with an “httpStatus” field and defines an explicit 1-to-1 mapping from the value of this field to the method response.

An API Gateway mapping template is defined to deserialize the custom error object and build a custom response based on the fields in the Lambda error.

Lambda function (Node.js 4.3):

exports.handler = (event, context, callback) => {
    var myErrorObj = {
        errorType : "InternalServerError",
        httpStatus : 500,
        requestId : context.awsRequestId,
        message : "An unknown error has occurred. Please try again."
    }
    callback(JSON.stringify(myErrorObj));
};

Lambda function (Java):

public class LambdaFunctionHandler implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String input, Context context) {

        Map<String, Object> errorPayload = new HashMap<>();
        errorPayload.put("errorType", "BadRequest");
        errorPayload.put("httpStatus", 400);
        errorPayload.put("requestId", context.getAwsRequestId());
        errorPayload.put("message", "An unknown error has occurred. Please try again.");

        // writeValueAsString throws a checked JsonProcessingException,
        // so it must be handled here.
        String message;
        try {
            message = new ObjectMapper().writeValueAsString(errorPayload);
        } catch (JsonProcessingException e) {
            message = "An unknown error has occurred. Please try again.";
        }

        throw new RuntimeException(message);
    }
}

Note: this example uses Jackson ObjectMapper for JSON serialization. For more information, see ObjectMapper on the FasterXML website.

Lambda output:

{
  "errorMessage": "{\"errorType\":\"InternalServerError\",\"httpStatus\":500,\"requestId\":\"40cd9bf6-0819-11e6-98f3-415848322efb\",\"message\":\"An unknown error has occurred. Please try again.\"}"
}

Integration response:

Selection pattern: “.*httpStatus\":500.*”

Method response: 500

Mapping template:

#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))
{
  "type" : "$errorMessageObj.errorType",
  "message" : "$errorMessageObj.message",
  "request-id" : "$errorMessageObj.requestId"
}

Note: This template makes use of the $util.parseJson() function to parse elements from the custom Lambda error object. For more information, see Accessing the $util Variable.

API Gateway response:

Status: 500

Response body:

{
  "type": "InternalServerError",
  "message": " An unknown error has occurred. Please try again.",
  "request-id": "e308b7b7-081a-11e6-9ab9-117c7feffb09"
}

This is a full Swagger example of the custom error object serialization pattern. This can be imported directly into API Gateway for testing or as a starting point for your API.

{
  "swagger": "2.0",
  "info": {
    "version": "2016-04-21T23:52:49Z",
    "title": "Best practices for API error responses with API Gateway and Lambda"
  },
  "schemes": [
    "https"
  ],
  "paths": {
    "/lambda": {
      "get": {
        "consumes": [
          "application/json"
        ],
        "produces": [
          "application/json"
        ],
        "parameters": [
          {
            "name": "status",
            "in": "query",
            "required": true,
            "type": "string"
          }
        ],
        "responses": {
          "200": {
            "description": "200 response",
            "schema": {
              "$ref": "#/definitions/Empty"
            }
          },
          "400": {
            "description": "400 response",
            "schema": {
              "$ref": "#/definitions/Error"
            }
          },
          "403": {
            "description": "403 response",
            "schema": {
              "$ref": "#/definitions/Error"
            }
          },
          "404": {
            "description": "404 response",
            "schema": {
              "$ref": "#/definitions/Error"
            }
          },
          "500": {
            "description": "500 response",
            "schema": {
              "$ref": "#/definitions/Error"
            }
          }
        },
        "x-amazon-apigateway-integration": {
          "responses": {
            "default": {
              "statusCode": "200"
            },
            ".\*httpStatus\\\":404.\*": {
              "statusCode": "404",
              "responseTemplates": {
                "application/json": "#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))\n#set ($bodyObj = $util.parseJson($input.body))\n{\n  \"type\" : \"$errorMessageObj.errorType\",\n  \"message\" : \"$errorMessageObj.message\",\n  \"request-id\" : \"$errorMessageObj.requestId\"\n}"
              }
            },
            ".\*httpStatus\\\":403.\*": {
              "statusCode": "403",
              "responseTemplates": {
                "application/json": "#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))\n#set ($bodyObj = $util.parseJson($input.body))\n{\n  \"type\" : \"$errorMessageObj.errorType\",\n  \"message\" : \"$errorMessageObj.message\",\n  \"request-id\" : \"$errorMessageObj.requestId\"\n}"
              }
            },
            ".\*httpStatus\\\":400.\*": {
              "statusCode": "400",
              "responseTemplates": {
                "application/json": "#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))\n#set ($bodyObj = $util.parseJson($input.body))\n{\n  \"type\" : \"$errorMessageObj.errorType\",\n  \"message\" : \"$errorMessageObj.message\",\n  \"request-id\" : \"$errorMessageObj.requestId\"\n}"
              }
            },
            ".\*httpStatus\\\":500.\*": {
              "statusCode": "500",
              "responseTemplates": {
                "application/json": "#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))\n#set ($bodyObj = $util.parseJson($input.body))\n{\n  \"type\" : \"$errorMessageObj.errorType\",\n  \"message\" : \"$errorMessageObj.message\",\n  \"request-id\" : \"$errorMessageObj.requestId\"\n}"
              }
            }
          },
          "httpMethod": "POST",
          "requestTemplates": {
            "application/json": "{\"failureStatus\" : $input.params('status')\n}"
          },
          "uri": "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/[MY_FUNCTION_ARN]/invocations",
          "type": "aws"
        }
      }
    }
  },
  "definitions": {
    "Empty": {
      "type": "object"
    },
    "Error": {
      "type": "object",
      "properties": {
        "message": {
          "type": "string"
        },
        "type": {
          "type": "string"
        },
        "request-id": {
          "type": "string"
        }
      }
    }
  }
}

Conclusion

There are many ways to represent errors in your API. While API Gateway and Lambda provide the basic building blocks, it is helpful to follow some best practices when designing your API. This post highlights a few successful patterns that we have identified but we look forward to seeing other patterns emerge from our serverless API users.

Using API Gateway with VPC endpoints via AWS Lambda

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/using-api-gateway-with-vpc-endpoints-via-aws-lambda/

To isolate critical parts of their app’s architecture, customers often rely on Virtual Private Cloud (VPC) and private subnets. Today, Amazon API Gateway cannot directly integrate with endpoints that live within a VPC without internet access. However, it is possible to proxy calls to your VPC endpoints using AWS Lambda functions.

This post guides you through the setup necessary to configure API Gateway, Lambda, and your VPC to proxy requests from API Gateway to HTTP endpoints in your VPC private subnets. With this solution, you can use API Gateway for authentication, authorization, and throttling before a request reaches your HTTP endpoint.

For this example, we have written a very basic Express application that accepts GET and POST requests on its root resource (“/”). The application is deployed on an EC2 instance within a private subnet of a VPC. We use a Lambda function that connects to our private subnet to proxy requests from API Gateway to the Express HTTP endpoint. The CloudFormation template below deploys the API Gateway API and the AWS Lambda functions, and sets the correct permissions on both resources. The template requires four parameters:

  • The IP address or DNS name of the instance running your express application (for example, 10.0.1.16)
  • The port used by the Express app (for example, 8080)
  • The security group of the EC2 instance (for example, sg-xx3xx6x0)
  • The subnet ID of your VPC’s private subnet (for example, subnet-669xx03x)

Click the link below to deploy the CloudFormation template. The rest of this blog post dives deeper into each component of the architecture.



The Express application

We have written a very simple web service using Express and Node.js. The service accepts GET and POST requests to its root resource and responds with a JSON object. You can use the sample code below to start the application on your instance. Before you create the application, make sure that you have installed Node.js on your instance.

Create a new folder on your web server called vpcproxy. In the new folder, create a new file called index.js and paste the code below in the file.

var express = require('express');
var bodyParser = require('body-parser');

var app = express();
app.use(bodyParser.json());

app.get('/', function(req, res) {
        if (req.query.error) {
                res.status(403).json({error: "Random error"}).end();
                return;
        }
        res.json({ message: 'Hello World!' });
});

app.post('/', function(req, res) {
        console.log("post");
        console.log(req.body);
        res.json(req.body).end();
});
app.listen(8080, function() {
        console.log("app started");
});

To install the required dependencies, from the vpcproxy folder, run the following command: npm install express body-parser

After the dependencies are installed, you can start the application by running: node index.js
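
To confirm that the service is working, you can send it a couple of test requests from the instance, for example:

curl http://localhost:8080/
curl -H "Content-Type: application/json" -d '{"userName":"test","message":"hello"}' http://localhost:8080/

The first call should return the “Hello World!” message, and the second should echo the posted JSON back to you.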

API Gateway configuration

The API Gateway API declares all of the same methods that your Express application supports. Each method is configured to transform requests into a JSON structure that AWS Lambda can understand, and responses are generated using mapping templates from the Lambda output.

The first step is to transform a request into an event for Lambda. The mapping template below captures all of the request information and includes the configuration of the backend endpoint that the Lambda function should interact with. This template is applied to all requests for any endpoint.

#set($allParams = $input.params())
{
  "requestParams" : {
    "hostname" : "10.0.1.16",
    "port" : "8080",
    "path" : "$context.resourcePath",
    "method" : "$context.httpMethod"
  },
  "bodyJson" : $input.json('$'),
  "params" : {
    #foreach($type in $allParams.keySet())
      #set($params = $allParams.get($type))
      "$type" : {
        #foreach($paramName in $params.keySet())
          "$paramName" : "$util.escapeJavaScript($params.get($paramName))"
          #if($foreach.hasNext),#end
        #end
      }
      #if($foreach.hasNext),#end
    #end
  },
  "stage-variables" : {
    #foreach($key in $stageVariables.keySet())
      "$key" : "$util.escapeJavaScript($stageVariables.get($key))"
      #if($foreach.hasNext),#end
    #end
  },
  "context" : {
    "account-id" : "$context.identity.accountId",
    "api-id" : "$context.apiId",
    "api-key" : "$context.identity.apiKey",
    "authorizer-principal-id" : "$context.authorizer.principalId",
    "caller" : "$context.identity.caller",
    "cognito-authentication-provider" : "$context.identity.cognitoAuthenticationProvider",
    "cognito-authentication-type" : "$context.identity.cognitoAuthenticationType",
    "cognito-identity-id" : "$context.identity.cognitoIdentityId",
    "cognito-identity-pool-id" : "$context.identity.cognitoIdentityPoolId",
    "http-method" : "$context.httpMethod",
    "stage" : "$context.stage",
    "source-ip" : "$context.identity.sourceIp",
    "user" : "$context.identity.user",
    "user-agent" : "$context.identity.userAgent",
    "user-arn" : "$context.identity.userArn",
    "request-id" : "$context.requestId",
    "resource-id" : "$context.resourceId",
    "resource-path" : "$context.resourcePath"
  }
}

After the Lambda function has processed the request and response, API Gateway is configured to transform the output into an HTTP response. The output from the Lambda function is a JSON structure that contains the response status code, body, and headers:

{  
   "status":200,
   "bodyJson":{  
      "message":"Hello World!"
   },
   "headers":{  
      "x-powered-by":"Express",
      "content-type":"application/json; charset=utf-8",
      "content-length":"26",
      "etag":"W/\"1a-r2dz039gtg5rjLoq32eF4w\"",
      "date":"Wed, 25 May 2016 18:41:22 GMT",
      "connection":"keep-alive"
   }
}

These values are then mapped in API Gateway using header mapping expressions and mapping templates for the response body.

First, all known headers are mapped:

"responseParameters": {
    "method.response.header.etag": "integration.response.body.headers.etag",
    "method.response.header.x-powered-by": "integration.response.body.headers.x-powered-by",
    "method.response.header.date": "integration.response.body.headers.date",
    "method.response.header.content-length": "integration.response.body.headers.content-length"
}

Then the body is extracted from Lambda’s output JSON using a very simple body mapping template: $input.json('$.bodyJson')

Response codes other than 200 are handled using regular expressions to match the status code in API Gateway (for example, \{\"status\"\:400.*), and the parseJson method of the $util object to extract the response body.

#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))
$errorMessageObj.bodyJson

All of this configuration is included in Swagger format in the CloudFormation template for this tutorial. The Swagger definition is generated dynamically, based on the four parameters that the template requires, using the Fn::Join intrinsic function.

The AWS Lambda function

The proxy Lambda function is written in JavaScript. It captures all of the request details forwarded by API Gateway, creates a similar request using the standard Node.js http package, and forwards it to the private endpoint. Responses from the private endpoint are encapsulated in a JSON object, which API Gateway turns into an HTTP response. The private endpoint configuration is passed to the Lambda function from API Gateway in the event model. The Lambda function code is also included in the CloudFormation template.

var http = require('http');

exports.myHandler = function(event, context, callback) {
    // setup request options and parameters
    var options = {
      host: event.requestParams.hostname,
      port: event.requestParams.port,
      path: event.requestParams.path,
      method: event.requestParams.method
    };
    
    // If you have headers, set them; otherwise set the property to an empty map
    if (event.params && event.params.header && Object.keys(event.params.header).length > 0) {
        options.headers = event.params.header
    } else {
        options.headers = {};
    }
    
    // Force the User-Agent and X-Forwarded-For headers because we want to
    // take them from the API Gateway context rather than letting Node.js set the Lambda ones
    options.headers["User-Agent"] = event.context["user-agent"];
    options.headers["X-Forwarded-For"] = event.context["source-ip"];
    // If no Content-Type header is set, force it to application/json.
    // Test invoke in the API Gateway console does not pass a value
    if (!options.headers["Content-Type"]) {
        options.headers["Content-Type"] = "application/json";
    }
    // build the query string
    if (event.params && event.params.querystring && Object.keys(event.params.querystring).length > 0) {
        var queryString = generateQueryString(event.params.querystring);
        
        if (queryString !== "") {
            options.path += "?" + queryString;
        }
    }
    
    // Define the response handler that reads the backend response and generates
    // a JSON output for API Gateway. The JSON output is parsed by the mapping templates
    var responseHandler = function(response) {
        var responseString = '';
    
        // Another chunk of data has been received, so append it to responseString
        response.on('data', function (chunk) {
            responseString += chunk;
        });
      
        // The whole response has been received
        response.on('end', function () {
            // Parse response to json
            var jsonResponse = JSON.parse(responseString);
    
            var output = {
                status: response.statusCode,
                bodyJson: jsonResponse,
                headers: response.headers
            };
            
            // if the response was a 200 we can just pass the entire JSON back to
            // API Gateway for parsing. If the backend returned a non 200 status 
            // then we return it as an error
            if (response.statusCode == 200) {
                context.succeed(output);
            } else {
                // set the output JSON as a string inside the body property
                output.bodyJson = responseString;
                // stringify the whole thing again so that we can read it with 
                // the $util.parseJson method in the mapping templates
                context.fail(JSON.stringify(output));
            }
        });
    }
    
    var req = http.request(options, responseHandler);
    
    if (event.bodyJson && event.bodyJson !== "") {
        req.write(JSON.stringify(event.bodyJson));
    }
    
    req.on('error', function(e) {
        console.log('problem with request: ' + e.message);
        context.fail(JSON.stringify({
            status: 500,
            bodyJson: JSON.stringify({ message: "Internal server error" })
        }));
    });
    
    req.end();
}

function generateQueryString(params) {
    var str = [];
    for(var p in params) {
        if (params.hasOwnProperty(p)) {
            str.push(encodeURIComponent(p) + "=" + encodeURIComponent(params[p]));
        }
    }
    return str.join("&");
}

Conclusion

You can use Lambda functions to proxy HTTP requests from API Gateway to an HTTP endpoint within a VPC without Internet access. This allows you to keep your EC2 instances and applications completely isolated from the internet while still exposing them via API Gateway. By using API Gateway to front your existing endpoints, you can configure authentication and authorization rules as well as throttling rules to limit the traffic that your backend receives.

If you have any questions or suggestions, please comment below.

Surviving the Zombie Apocalypse with Serverless Microservices

Post Syndicated from Aaron Kao original https://aws.amazon.com/blogs/compute/surviving-the-zombie-apocalypse-with-serverless-microservices/

Run Apps without the Bite!

by: Kyle Somers – Associate Solutions Architect

Let’s face it, managing servers is a pain! Capacity management and scaling is even worse. Now imagine dedicating your time to SysOps during a zombie apocalypse — barricading the door from flesh eaters with one arm while patching an OS with the other.

This sounds like something straight out of a nightmare. Lucky for you, this doesn’t have to be the case. Over at AWS, we’re making it easier than ever to build and power apps at scale with powerful managed services, so you can focus on your core business – like surviving – while we handle the infrastructure management that helps you do so.

Join the AWS Lambda Signal Corps!

At AWS re:Invent in 2015, we piloted a workshop where participants worked in groups to build a serverless chat application for zombie apocalypse survivors, using Amazon S3, Amazon DynamoDB, Amazon API Gateway, and AWS Lambda. Participants learned about microservices design patterns and best practices. They then extended the functionality of the serverless chat application with various add-on functionalities – such as mobile SMS integration, and zombie motion detection – using additional services like Amazon SNS and Amazon Elasticsearch Service.

Given the widespread interest in serverless architectures and AWS Lambda among our customers, we’ve recognized the excitement around this subject. Therefore, we are happy to announce that we’ll be taking this event on the road in the U.S. and abroad to recruit new developers for the AWS Lambda Signal Corps!

 

Help us save humanity! Learn More and Register Here!

 

Washington, DC | March 10 – Mission Accomplished!

San Francisco, CA @ AWS Loft | March 24 – Mission Accomplished!

New York City, NY @ AWS Loft | April 13 – Mission Accomplished!

London, England @ AWS Loft | April 25

Austin, TX | April 26

Atlanta, GA | May 4

Santa Monica, CA | June 7

Berlin, Germany | July 19

San Francisco, CA @ AWS Loft | August 16

New York City, NY @ AWS Loft | August 18

 

If you’re unable to join us at one of these workshops, that’s OK! In this post, I’ll show you how our survivor chat application incorporates some important microservices design patterns and how you can power your apps in the same way using a serverless architecture.


 

What Are Serverless Architectures?

At AWS, we know that infrastructure management can be challenging. We also understand that customers prefer to focus on delivering value to their business and customers. There’s a lot of undifferentiated heavy lifting involved in building and running applications, such as installing software, managing servers, coordinating patch schedules, and scaling to meet demand. Serverless architectures allow you to build and run applications and services without having to manage infrastructure. Your application still runs on servers, but all the server management is done for you by AWS. Serverless architectures can make it easier to build, manage, and scale applications in the cloud by eliminating much of the heavy lifting involved with server management.

Key Benefits of Serverless Architectures

  • No Servers to Manage: There are no servers for you to provision and manage. All the server management is done for you by AWS.
  • Increased Productivity: You can now fully focus your attention on building new features and apps because you are freed from the complexities of server management, allowing you to iterate faster and reduce your development time.
  • Continuous Scaling: Your applications and services automatically scale up and down based on the size of the workload.

What Should I Expect to Learn at a Zombie Microservices Workshop?

The workshop content we developed is designed to demonstrate best practices for serverless architectures using AWS. In this post we’ll discuss the following topics:

  • Which services are useful when designing a serverless application on AWS (see below!)
  • Design considerations for messaging, data transformation, and business or app-tier logic when building serverless microservices.
  • Best practices demonstrated in the design of our zombie survivor chat application.
  • Next steps for you to get started building your own serverless microservices!

Several AWS services were used to design our zombie survivor chat application. Each of these services is managed and highly scalable. Let’s take a quick look at which ones we incorporated in the architecture:

  • AWS Lambda allows you to run your code without provisioning or managing servers. Just upload your code (currently Node.js, Python, or Java) and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app. Lambda is used to power many use cases, such as application back ends, scheduled administrative tasks, and even big data workloads via integration with other AWS services such as Amazon S3, DynamoDB, Redshift, and Kinesis.
  • Amazon Simple Storage Service (Amazon S3) is our object storage service, which provides developers and IT teams with secure, durable, and scalable storage in the cloud. S3 is used to support a wide variety of use cases and is easy to use with a simple interface for storing and retrieving any amount of data. In the case of our survivor chat application, it can even be used to host static websites with CORS and DNS support.
  • Amazon API Gateway makes it easy to build RESTful APIs for your applications. API Gateway is scalable and simple to set up, allowing you to build integrations with back-end applications, including code running on AWS Lambda, while the service handles the scaling of your API requests.
  • Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.

Overview of the Zombie Survivor Chat App

The survivor chat application represents a completely serverless architecture that delivers to workshop participants a baseline chat application (written using AngularJS) upon which additional functionality can be added. To deliver this baseline chat application, participants are given an AWS CloudFormation template that spins up the environment in their account. The following diagram represents a high-level architecture of the components that are launched automatically:

High-Level Architecture of Survivor Serverless Chat App

  • Amazon S3 bucket is created to store the static web app contents of the chat application.
  • AWS Lambda functions are created to serve as the back-end business logic tier for processing reads/writes of chat messages.
  • API endpoints are created using API Gateway and mapped to Lambda functions. The API Gateway POST method points to a WriteMessages Lambda function. The GET method points to a GetMessages Lambda function.
  • A DynamoDB messages table is provisioned to act as our data store for the messages from the chat application.

Serverless Survivor Chat App Hosted on Amazon S3

With the CloudFormation stack launched and the components built out, the end result is a fully functioning chat app hosted in S3, using API Gateway and Lambda to process requests, and DynamoDB as the persistence for our chat messages.

With this baseline app, participants join in teams to build out additional functionality, including the following:

  • Integration of SMS/MMS via Twilio. Send messages to chat from SMS.
  • Motion sensor detection of nearby zombies with Amazon SNS and Intel® Edison and Grove IoT Starter Kit. AWS provides a shared motion sensor for the workshop, and you consume its messages from SNS.
  • Help-me panic button with IoT.
  • Integration with Slack for messaging from another platform.
  • Typing indicator to see which survivors are typing.
  • Serverless analytics of chat messages using Amazon Elasticsearch Service (Amazon ES).
  • Any other functionality participants can think of!

As a part of the workshop, AWS provides guidance for most of these tasks. With these add-ons completed, the architecture of the chat system begins to look quite a bit more sophisticated, as shown below:

Architecture of Survivor Chat with Additional Add-on Functionality

Architectural Tenets of the Serverless Survivor Chat

For the most part, the design patterns you’d see in a traditional, server-based environment also apply in a serverless environment. No surprises there. With that said, it never hurts to revisit best practices while learning new ones. So let’s review some key patterns we incorporated in our serverless application.

Decoupling Is Paramount

In the survivor chat application, Lambda functions serve as our tier for business logic. Since users interact with Lambda at the function level, it serves you well to split up logic into separate functions as much as possible, so you can scale the logic tier independently from the sources and destinations it serves.

As you’ll see in the architecture diagram in the above section, the application has separate Lambda functions for the chat service, the search service, the indicator service, etc. Decoupling is also incorporated through the use of API Gateway, which exposes our back-end logic via a unified RESTful interface. This model allows us to design our back-end logic with potentially different programming languages, systems, or communications channels, while keeping the requesting endpoints unaware of the implementation. Use this pattern and you won’t cry for help when you need to scale, update, add, or remove pieces of your environment.

Separate Your Data Stores

Treat each data store as an isolated application component of the service it supports. One common pitfall when following microservices architectures is to forget about the data layer. By keeping the data stores specific to the service they support, you can better manage the resources needed at the data layer specifically for that service. This is the true value in microservices.

In the survivor chat application, this practice is illustrated with the Activity and Messages DynamoDB tables. The activity indicator service has its own data store (Activity table) while the chat service has its own (Messages). These tables can scale independently along with their respective services. This scenario also represents a good example of statelessness. The implementation of the typing indicator add-on uses DynamoDB via the Activity table to track state information about which users are typing. Remember, many of the benefits of microservices are lost if the components are still all glued together at the data layer in the end, creating a messy common denominator for scaling.

Leverage Data Transformations up the Stack

When designing a service, data transformation and compatibility are big components. How will you handle inputs from many different clients, users, and systems for your service? Will you run different flavors of your environment to correspond with different incoming request standards? Absolutely not!

With API Gateway, data transformation becomes significantly easier through built-in models and mapping templates. With these features you can build data transformation and mapping logic into the API layer for requests and responses. This results in less work for you since API Gateway is a managed service. In the case of our survivor chat app, AWS Lambda requires JSON while Twilio likes XML for the SMS integration. This type of transformation can be offloaded to API Gateway, leaving you with a cleaner business tier and one less thing to design around!

Use API Gateway as your interface and Lambda as your common backend implementation. API Gateway uses Apache Velocity Template Language (VTL) and JSONPath for transformation logic. Of course, there is a trade-off to be considered, as a lot of transformation logic could be handled in your business-logic tier (Lambda). But, why manage that yourself in application code when you can transparently handle it in a fully managed service through API Gateway? Here are a few things to keep in mind when handling transformations using API Gateway and Lambda:

  • Transform first; then call your common back-end logic.
  • Use API Gateway VTL transformations first when possible.
  • Use Lambda to preprocess data in ways that VTL can’t (see the sketch after this list).
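
Here is a minimal sketch of that last point, using hypothetical names, where a Lambda function validates and normalizes input that a VTL template could not easily handle before the common back-end logic runs:

'use strict';

// Hypothetical example: validate and reshape the incoming payload
// before handing it to the shared business logic.
function preprocess(event) {
    if (!event.message || typeof event.message !== 'string') {
        throw new Error('[BadRequest] message is required');
    }
    return { message: event.message.trim() };
}

exports.handler = (event, context, callback) => {
    var input;
    try {
        input = preprocess(event);
    } catch (err) {
        return callback(err);
    }
    // ... the common back-end logic would run here ...
    callback(null, { ok: true, received: input.message });
};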

Using API Gateway VTL for Input/Output Data Transformations

 

Security Through Service Isolation and Least Privilege

As a general recommendation when designing your services, always utilize least privilege and isolate components of your application to provide control over access. In the survivor chat application, a permissions-based model is used via AWS Identity and Access Management (IAM). IAM is integrated in every service on the AWS platform and provides the capability for services and applications to assume roles with strict permission sets to perform their least-privileged access needs. Along with access controls, you should implement audit and access logging to provide the best visibility into your microservices. This is made easy with Amazon CloudWatch Logs and AWS CloudTrail. CloudTrail enables audit capability of API calls made on the platform while CloudWatch Logs enables you to ship custom log data to AWS. Although our implementation of Amazon Elasticsearch in the survivor chat is used for analyzing chat messages, you can easily ship your log data to it and perform analytics on your application. You can incorporate security best practices in the following ways with the survivor chat application:

  • Each Lambda function should have an IAM role to access only the resources it needs. For example, the GetMessages function can read from the Messages table while the WriteMessages function can write to it. But they cannot access the Activities table that is used to track who is typing for the indicator service. (See the example policy after this list.)
  • Each API Gateway endpoint must have IAM permissions to execute the Lambda function(s) it is tied to. This model ensures that Lambda is only executed from the principal that is allowed to execute it, in this case the API Gateway method that triggers the back-end function.
  • DynamoDB requires read/write permissions via IAM, which limits anonymous database activity.
  • Use AWS CloudTrail to audit API activity on the platform and among the various services. This provides traceability, especially to see who is invoking your Lambda functions.
  • Design Lambda functions to publish meaningful outputs, as these are logged to CloudWatch Logs on your behalf.
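
As noted in the first item, a least-privilege IAM policy for the GetMessages function might look like the following sketch (the Region, account ID, and table name are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Messages"
    }
  ]
}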

FYI, in our application, we allow anonymous access to the chat API Gateway endpoints. We want to encourage all survivors to plug into the service without prior registration and start communicating. We’ve assumed zombies aren’t intelligent enough to hack into our communication channels. Until the apocalypse, though, stay true to API keys and authorization with signatures, which API Gateway supports!

Don’t Abandon Dev/Test

When developing with microservices, you can still leverage separate development and test environments as a part of the deployment lifecycle. AWS provides several features to help you continue building apps along the same trajectory as before, including these:

  • Lambda function versioning and aliases: Use these features to version your functions based on the stages of deployment, such as development, testing, staging, and pre-production, or make changes to an existing Lambda function in production without downtime (see the CLI sketch after this list).
  • Lambda service blueprints: Lambda comes with dozens of blueprints to get you started with prewritten code that you can use as a skeleton, or a fully functioning solution, to complete your serverless back end. These include blueprints with hooks into Slack, S3, DynamoDB, and more.
  • API Gateway deployment stages: Similar to Lambda versioning, this feature lets you configure separate API stages, along with unique stage variables and deployment versions within each stage. This allows you to test your API with the same or different back ends while it progresses through changes that you make at the API layer.
  • Mock Integrations with API Gateway: Configure dummy responses that developers can use to test their code while the true implementation of your API is being developed. Mock integrations make it faster to iterate through the API portion of a development lifecycle by streamlining pieces that used to be very sequential/waterfall.
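
To illustrate the first item, publishing an immutable version of a function and pointing a stage alias at it could be done with the AWS CLI along these lines (the function name is hypothetical):

aws lambda publish-version --function-name SurvivorChatService
aws lambda create-alias --function-name SurvivorChatService --name staging --function-version 1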

Using Mock Integrations with API Gateway

Stay Tuned for Updates!

Now that you’ve got the necessary best practices to design your microservices, do you have what it takes to fight against the zombie horde? The serverless options we explored are ready for you to get started with, and the survivors are counting on you!

Be sure to keep an eye on the AWS GitHub repo. Although I didn’t cover each component of the survivor chat app in this post, we’ll be deploying this workshop and code soon for you to launch on your own! Keep an eye out for Zombie Workshops coming to your city, or nominate your city for a workshop here.

For more information on how you can get started with serverless architectures on AWS, refer to the following resources:

Whitepaper – AWS Serverless Multi-Tier Architectures

Reference Architectures and Sample Code

*Special thanks to my colleagues Ben Snively, Curtis Bray, Dean Bryen, Warren Santner, and Aaron Kao at AWS. They were instrumental to our team developing the content referenced in this post.

Amazon API Gateway mapping improvements

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/amazon-api-gateway-mapping-improvements/

Yesterday we announced the new Swagger import API. You may have also noticed a new first-time user experience in the API Gateway console that automatically creates a sample Pet Store API and guides you through API Gateway features. That is not all we’ve been doing:

Over the past few weeks, we’ve made mapping requests and responses easier. This post takes you through the new features we introduced and gives practical examples of how to use them.

Multiple 2xx responses

We heard from many of you that you want to return more than one 2xx response code from your API. You can now configure Amazon API Gateway to return multiple 2xx response codes, each with its own header and body mapping templates. For example, when creating resources, you can return 201 for “created” and 202 for “accepted”.

Context variables in parameter mapping

We have added the ability to reference context variables from the parameter mapping fields. For example, you can include the identity principal or the stage name from the context variable in a header to your HTTP backend. To send the principalId returned by a custom authorizer in an X-User-ID header to your HTTP backend, use this mapping expression:

context.authorizer.principalId
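
In a Swagger definition, for example, this could be wired into the integration’s request parameter mapping as follows (the header name here is illustrative):

"requestParameters": {
  "integration.request.header.X-User-ID": "context.authorizer.principalId"
}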

For more information, see the context variable in the Mapping Template Reference page of the documentation.

Access to raw request body

Mapping templates in API Gateway help you transform incoming requests and outgoing responses from your API’s back end. The $input variable in mapping templates enables you to read values from a JSON body and its properties. You can now also return the raw payload, whether it’s JSON, XML, or a string using the $input.body property.

For example, if you have configured your API to receive raw data and pass it to Amazon Kinesis using an AWS service proxy integration, you can use the body property to read the incoming body and the $util variable to encode it for an Amazon Kinesis stream.

{
  "Data" : "$util.base64Encode($input.body)",
  "PartitionKey" : "key",
  "StreamName" : "Stream"
}

JSON parse function

We have also added a parseJson() method to the $util object in mapping templates. The parseJson() method parses stringified JSON input into its object representation. You can manipulate this object representation in the mapping templates. For example, if you need to return an error from AWS Lambda, you can now return it like this:

exports.handler = function(event, context) {
    var myErrorObj = {
        errorType : "InternalFailure",
        errorCode : 9130,
        detailedMessage : "This is my error message",
        stackTrace : ["foo1", "foo2", "foo3"],
        data : {
            numbers : [1, 2, 3]
        }
    }
    
    context.fail(JSON.stringify(myErrorObj));
};

Then, you can use the parseJson() method in the mapping template to extract values from the error and return a meaningful message from your API, like this:

#set ($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))
#set ($bodyObj = $util.parseJson($input.body))

{
  "type" : "$errorMessageObj.errorType",
  "code" : $errorMessageObj.errorCode,
  "message" : "$errorMessageObj.detailedMessage",
  "someData" : "$errorMessageObj.data.numbers[2]"
}

This will produce a response that looks like this:

{
  "type" : "InternalFailure",
  "code" : 9130,
  "message" : "This is my error message",
  "someData" : "3"
}

Conclusion

We continuously release new features and improvements to Amazon API Gateway. Your feedback is extremely important and guides our priorities. Keep sending us feedback on the API Gateway forum and on social media.

Using Amazon API Gateway as a proxy for DynamoDB

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/using-amazon-api-gateway-as-a-proxy-for-dynamodb/

Andrew Baird Andrew Baird, AWS Solutions Architect
Amazon API Gateway has a feature that enables customers to create their own API definitions directly in front of an AWS service API. This tutorial will walk you through an example of doing so with Amazon DynamoDB.
Why use API Gateway as a proxy for AWS APIs?
Many AWS services provide APIs that applications depend on directly for their functionality. Examples include:

Amazon DynamoDB – An API-accessible NoSQL database.
Amazon Kinesis – Real-time ingestion of streaming data via API.
Amazon CloudWatch – API-driven metrics collection and retrieval.

If AWS already exposes internet-accessible APIs, why would you want to use API Gateway as a proxy for them? Why not allow applications to just directly depend on the AWS service API itself?
Here are a few great reasons to do so:

You might want to enable your application to integrate with very specific functionality that an AWS service provides, without the need to manage access keys and secret keys that AWS APIs require.
There may be application-specific restrictions you’d like to place on the API calls being made to AWS services that you would not be able to enforce if clients integrated with the AWS APIs directly.
You may get additional value out of using a different HTTP method from the one used by the AWS service. For example, you might create a GET method as a proxy in front of an AWS API that requires an HTTP POST, so that the response can be cached.
You can accomplish the above things without having to introduce a server-side application component that you need to manage or that could introduce increased latency. Even a lightweight Lambda function that calls a single AWS service API is code that you do not need to create or maintain if you use API Gateway directly as an AWS service proxy.

Here, we will walk you through a hypothetical scenario that shows how to create an Amazon API Gateway AWS service proxy in front of Amazon DynamoDB.
The Scenario
You would like the ability to add a public Comments section to each page of your website. To achieve this, you’ll need to accept and store comments and you will need to retrieve all of the comments posted for a particular page so that the UI can display them.
We will show you how to implement this functionality by creating a single table in DynamoDB, and creating the two necessary APIs using the AWS service proxy feature of Amazon API Gateway.
Defining the APIs
The first step is to map out the APIs that you want to create. For both APIs, we’ve linked to the DynamoDB API documentation. Take note of how the API you define below differs in request/response details from the native DynamoDB APIs.
Post Comments
First, you need an API that accepts user comments and stores them in the DynamoDB table. Here’s the API definition you’ll use to implement this functionality:
Resource: /comments
HTTP Method: POST
HTTP Request Body:
{
  "pageId": "example-page-id",
  "userName": "ExampleUserName",
  "message": "This is an example comment to be added."
}
After you create it, this API becomes a proxy in front of the DynamoDB API PutItem.
Get Comments
Second, you need an API to retrieve all of the comments for a particular page. Use the following API definition:
Resource: /comments/{pageId}
HTTP Method: GET
The curly braces around {pageId} in the URI path definition indicate that pageId will be treated as a path variable within the URI.
This API will be a proxy in front of the DynamoDB API Query. Here, you will notice the benefit: your API uses the GET method, while the DynamoDB Query API requires an HTTP POST and does not include any cache headers in the response.
Creating the DynamoDB Table
First, navigate to the DynamoDB console and select Create Table. Next, name the table Comments, with commentId as the Primary Key. Leave the rest of the default settings for this example, and choose Create.

After this table is populated with comments, you will want to retrieve them based on the page that they’ve been posted to. To do this, create a secondary index on an attribute called pageId. This secondary index enables you to query the table later for all comments posted to a particular page. When viewing your table, choose the Indexes tab and choose Create index.

When querying this table, you only want to retrieve the pieces of information that matter to the client: in this case, these are the pageId, the userName, and the message itself. Any other data you decide to store with each comment does not need to be retrieved from the table for the publicly accessible API. Type the following information into the form to capture this and choose Create index:
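If you prefer to script this setup rather than use the console, the table and index could also be created with the AWS CLI, along these lines (a sketch; the provisioned capacity values are illustrative):
aws dynamodb create-table \
  --table-name Comments \
  --attribute-definitions AttributeName=commentId,AttributeType=S AttributeName=pageId,AttributeType=S \
  --key-schema AttributeName=commentId,KeyType=HASH \
  --global-secondary-indexes '[{"IndexName": "pageId-index", "KeySchema": [{"AttributeName": "pageId", "KeyType": "HASH"}], "Projection": {"ProjectionType": "INCLUDE", "NonKeyAttributes": ["userName", "message"]}, "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5}}]' \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5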

Creating the APIs
Now, using the AWS service proxy feature of Amazon API Gateway, we’ll demonstrate how to create each of the APIs you defined. Navigate to the API Gateway service console, and choose Create API. In API name, type CommentsApi and type a short description. Finally, choose Create API.

Now you’re ready to create the specific resources and methods for the new API.
Creating the Post Comments API
In the editor screen, choose Create Resource. To match the description of the Post Comments API above, provide the appropriate details and create the first API resource:

Now, with the resource created, set up what happens when the resource is called with the HTTP POST method. Choose Create Method and select POST from the drop down. Click the checkmark to save.
To map this API to the DynamoDB API needed, next to Integration type, choose Show Advanced and choose AWS Service Proxy.
Here, you’re presented with options that define which specific AWS service API will be executed when this API is called, and in which region. Fill out the information as shown, matching the DynamoDB table you created a moment ago. Before you proceed, create an AWS Identity and Access Management (IAM) role that has permission to call the DynamoDB API PutItem for the Comments table; this role must have a service trust relationship to API Gateway. For more information on IAM policies and roles, see the Overview of IAM Policies topic.
After inputting all of the information as shown, choose Save.

If you were to deploy this API right now, you would have a working service proxy API that only wraps the DynamoDB PutItem API. But, for the Post Comments API, you’d like the client to be able to use a more contextual JSON object structure. Also, you’d like to be sure that the DynamoDB API PutItem is called precisely the way you expect it to be called. This eliminates client-driven error responses and removes the possibility that the new API could be used to call another DynamoDB API or table that you do not intend to allow.
You accomplish this by creating a mapping template. This enables you to define the request structure that your API clients will use, and then transform those requests into the structure that the DynamoDB API PutItem requires.
From the Method Execution screen, choose Integration Request:

In the Integration Request screen expand the Mapping Templates section and choose Add mapping template. Under Content-Type, type application/json and then choose the check mark:

Next, choose the pencil icon next to Input passthrough and choose Mapping template from the dropdown. Now, you’ll be presented with a text box where you create the mapping template. For more information on creating mapping templates, see API Gateway Mapping Template Reference.
The mapping template will be as follows. We’ll walk through what’s important about it next:
{
  "TableName": "Comments",
  "Item": {
    "commentId": {
      "S": "$context.requestId"
    },
    "pageId": {
      "S": "$input.path('$.pageId')"
    },
    "userName": {
      "S": "$input.path('$.userName')"
    },
    "message": {
      "S": "$input.path('$.message')"
    }
  }
}
This mapping template creates the JSON structure required by the DynamoDB PutItem API. The entire mapping template is static. The three input variables are referenced from the request JSON using the $input variable and each comment is stamped with a unique identifier. This unique identifier is the commentId and is extracted directly from the API request’s $context variable. This $context variable is set by the API Gateway service itself. To review other parameters that are available to a mapping template, see API Gateway Mapping Template Reference. You may decide that including information like sourceIp or other headers could be valuable to you.
With this mapping template, no matter how your API is called, the only variance from the DynamoDB PutItem API call will be the values of pageId, userName, and message. Clients of your API will not be able to dictate which DynamoDB table is being targeted (because “Comments” is statically listed), and they will not have any control over the object structure that is specified for each item (each input variable is explicitly declared a string to the PutItem API).
Back in the Method Execution pane, click TEST.
Create an example Request Body that matches the API definition documented above and then choose Test. For example, your request body could be:
{
  "pageId": "breaking-news-story-01-18-2016",
  "userName": "Just Saying Thank You",
  "message": "I really enjoyed this story!!"
}
Navigate to the DynamoDB console and view the Comments table to show that the request really was successfully processed:

Great! Try including a few more sample items in the table to further test the Get Comments API.
If you deployed this API, you would be all set with a public API that has the ability to post public comments and store them in DynamoDB. For some use cases you may only want to collect data through a single API like this: for example, when collecting customer and visitor feedback, or for a public voting or polling system. But for this use case, we’ll demonstrate how to create another API to retrieve records from a DynamoDB table as well. Many of the details are similar to the process above.
Creating the Get Comments API
Return to the Resources view, choose the /comments resource you created earlier and choose Create Resource, like before.
This time, include a request path parameter to represent the pageId of the comments being retrieved. Input the following information and then choose Create Resource:

In Resources, choose your new /{pageId} resource and choose Create Method. The Get Comments API will be retrieving data from our DynamoDB table, so choose GET for the HTTP method.
In the method configuration screen choose Show advanced and then select AWS Service Proxy. Fill out the form to match the following. Make sure to use the appropriate AWS Region and IAM execution role; these should match what you previously created. Finally, choose Save.

Modify the Integration Request and create a new mapping template. This will transform the simple pageId path parameter on the GET request to the needed DynamoDB Query API, which requires an HTTP POST. Here is the mapping template:
{
    "TableName": "Comments",
    "IndexName": "pageId-index",
    "KeyConditionExpression": "pageId = :v1",
    "ExpressionAttributeValues": {
        ":v1": {
            "S": "$input.params('pageId')"
        }
    }
}
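For example, a request for the page breaking-news-story-01-18-2016 would render this Query payload:
{
    "TableName": "Comments",
    "IndexName": "pageId-index",
    "KeyConditionExpression": "pageId = :v1",
    "ExpressionAttributeValues": {
        ":v1": { "S": "breaking-news-story-01-18-2016" }
    }
}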
Now test your mapping template. Navigate to the Method Execution pane and choose the Test icon on the left. Provide one of the pageId values that you’ve inserted into your Comments table and choose Test.

You should see a response like the following; it is directly passing through the raw DynamoDB response:

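With the sample item inserted earlier, that raw response has roughly this shape (values illustrative; the Query response wraps each attribute in DynamoDB’s typed form and includes query metadata):
{
    "Count": 1,
    "Items": [
        {
            "commentId": { "S": "a1b2c3d4-5678-90ab-cdef-example12345" },
            "pageId": { "S": "breaking-news-story-01-18-2016" },
            "userName": { "S": "Just Saying Thank You" },
            "message": { "S": "I really enjoyed this story!!" }
        }
    ],
    "ScannedCount": 1
}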
Now you’re close! All that remains before you deploy your API is to map the raw DynamoDB response into a JSON object structure like the one you defined for the Post Comments API. This works much like the mapping template you already created, except that you configure it on the Integration Response page of the console by editing the default response’s mapping template.
Navigate to Integration Response and expand the 200 response code by choosing the arrow on the left. In the 200 response, expand the Mapping Templates section. In Content-Type, choose application/json, then choose the pencil icon next to Output Passthrough.

Now, create a mapping template that extracts the relevant pieces of the DynamoDB response and places them into a response structure that matches our use case:
#set($inputRoot = $input.path('$'))
{
    "comments": [
        #foreach($elem in $inputRoot.Items) {
            "commentId": "$elem.commentId.S",
            "userName": "$elem.userName.S",
            "message": "$elem.message.S"
        }#if($foreach.hasNext),#end
        #end
    ]
}
Now choose the check mark to save the mapping template, and choose Save to save this default integration response. Return to the Method Execution page and test your API again. You should now see a formatted response.
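With the sample item from earlier, the formatted response would look roughly like this:
{
    "comments": [
        {
            "commentId": "a1b2c3d4-5678-90ab-cdef-example12345",
            "userName": "Just Saying Thank You",
            "message": "I really enjoyed this story!!"
        }
    ]
}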
Now you have two working APIs that are ready to deploy! See our documentation to learn about how to deploy API stages.
But, before you deploy your API, here are some additional things to consider:

Authentication: you may want to require that users authenticate before they can leave comments. Amazon API Gateway can enforce IAM authentication for the APIs you create. To learn more, see Amazon API Gateway Access Permissions.
DynamoDB capacity: you may want to provision an appropriate amount of capacity to your Comments table so that your costs and performance reflect your needs.
Commenting features: depending on how robust you’d like commenting on your site to be, you might extend these APIs with, for example, attributes that track replies or timestamps.

Conclusion
Now you’ve got a fully functioning public API to post and retrieve public comments for your website. This API communicates directly with the Amazon DynamoDB API without you having to manage a single application component yourself!

Using API Gateway mapping templates to handle changes in your back-end APIs

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/using-api-gateway-mapping-templates-to-handle-changes-in-your-back-end-apis/

Maitreya Ranganath, AWS Solutions Architect
Changes to APIs are always risky, especially if changes are made in ways that are not backward compatible. In this blog post, we show you how to use Amazon API Gateway mapping templates to isolate your API consumers from API changes. This enables your API consumers to migrate to new API versions on their own schedule.
For an example scenario, we start with a very simple Store Front API with one resource for orders and one GET method. For this example, the API target is implemented in AWS Lambda to keep things simple – but you can of course imagine the back end being your own endpoint.
The structure of the API V1 is:
Method: GET
Path: /orders
Query Parameters:
    start = timestamp
    end = timestamp

Response:
[
    {
        "orderId" : string,
        "orderTs" : string,
        "orderAmount" : number
    }
]
The initial version (V1) of the API was implemented when there were only a few orders per day. The API was not paginated; if more than 5 orders match the query, an error is returned and the API consumer must retry the request with a smaller time range.
The API V1 is exposed through API Gateway and you have several consumers of this API in Production.
After you upgrade the back end, the API developers make a change to support pagination. This makes the API more scalable and allows the API consumers to handle large lists of orders by paging through them with a token. This is a good design change but it breaks backward compatibility. It introduces a challenge because you have a large base of API consumers using V1 and their code can’t handle the changed nesting structure of this response.
The structure of API V2 is:
Method: GET
Path: /orders
Query Parameters:
    start = timestamp
    end = timestamp
    token = string (optional)

Response:
{
    "nextToken" : string,
    "orders" : [
        {
            "orderId" : string,
            "orderTs" : string,
            "orderAmount" : number
        }
    ]
}
Using mapping templates, you can isolate your API consumers from this change: your existing V1 API consumers will not be impacted when you publish V2 of the API in parallel. You want to let your consumers migrate to V2 on their own schedule.
We’ll show you how to do that in this blog post. Let’s get started.
Deploying V1 of the API
To deploy V1 of the API, create a simple Lambda function and expose that through API Gateway:

Sign in to the AWS Lambda console.
Choose Create a Lambda function.
In Step 1: Select blueprint, choose Skip; you’ll enter the details for the Lambda function manually.
In Step 2: Configure function, use the following values:

In Name, type getOrders.
In Description, type Returns orders for a time-range.
In Runtime, choose Node.js.
For Code entry type, choose Edit code inline. Copy and paste the code snippet below into the code input box.

var MILISECONDS_DAY = 3600*1000*24;

exports.handler = function(event, context) {
    console.log('start =', event.start);
    console.log('end =', event.end);

    var start = Date.parse(decodeURIComponent(event.start));
    var end = Date.parse(decodeURIComponent(event.end));

    if(isNaN(start)) {
        return context.fail("Invalid parameter 'start'");
    }
    if(isNaN(end)) {
        return context.fail("Invalid parameter 'end'");
    }

    var duration = end - start;

    // Reject invalid ranges and ranges longer than 5 days of results.
    if(duration < 0 || duration > 5 * MILISECONDS_DAY) {
        return context.fail("Too many results, try your request with a shorter duration");
    }

    var orderList = [];
    var count = 0;

    // Generate one synthetic order per day in the requested range.
    for(var d = start; d < end; d += MILISECONDS_DAY) {
        var order = {
            "orderId" : "order-" + count,
            "orderTs" : (new Date(d).toISOString()),
            "orderAmount" : Math.round(Math.random()*100.0)
        };
        count += 1;
        orderList.push(order);
    }

    console.log('Generated', count, 'orders');
    context.succeed(orderList);
};

In Handler, leave the default value of index.handler.
In Role, choose Basic execution role or choose an existing role if you’ve created one for Lambda before.
In Advanced settings, leave the default values and choose Next.


Finally, review the settings in the next page and choose Create function.
Your Lambda function is now created. You can test it by sending a test event. Enter the following for your test event:
{
    "start": "2015-10-01T00:00:00Z",
    "end": "2015-10-04T00:00:00Z"
}
Check the execution result and log output to see the results of your test.

Next, choose the API endpoints tab and then choose Add API endpoint. In Add API endpoint, use the following values:

In API endpoint type, choose API Gateway
In API name, type StoreFront
In Resource name, type /orders
In Method, choose GET
In Deployment stage, use the default value of prod
In Security, choose Open to allow the API to be publicly accessed
Choose Submit to create the API


The API is created and the API endpoint URL is displayed for the Lambda function.
Next, switch to the API Gateway console and verify that the new API appears on the list of APIs. Choose StoreFront to view its details.
To view the method execution details, in the Resources pane, choose GET. Choose Integration Request to edit the method properties.

On the Integration Request details page, expand the Mapping Templates section and choose Add mapping template. In Content-Type, type application/json and choose the check mark to accept.

Choose the edit icon to the right of Input passthrough. From the drop down, choose Mapping template and copy and paste the mapping template text below into the Template input box. Choose the check mark to create the template.
{
    #set($queryMap = $input.params().querystring)

    #foreach( $key in $queryMap.keySet())
    "$key" : "$queryMap.get($key)"
    #if($foreach.hasNext),#end
    #end
}
This step is needed because the Lambda function requires its input as a JSON document. The mapping template takes the query string parameters from the GET request and creates a JSON input document from them. Mapping templates use Apache Velocity, expose a number of utility functions, and give you access to all of the incoming request’s data and context parameters. You can learn more from the mapping template reference page.
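For instance, with the test parameters used below, the template renders roughly this JSON event for the Lambda function:
{
    "start" : "2015-10-01T00:00:00Z",
    "end" : "2015-10-04T00:00:00Z"
}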
Back on the GET method configuration page, in the left pane, choose the GET method and then open the Method Request settings. Expand the URL Query String Parameters section and choose Add query string. In Name, type start and choose the check mark to accept. Repeat the process to create a second parameter named end.
From the GET method configuration page, in the top left, choose Test to test your API. Type the following values for the query string parameters and then choose Test:

In start, type 2015-10-01T00:00:00Z
In end, type 2015-10-04T00:00:00Z

Verify that the response status is 200 and the response body contains a JSON response with 3 orders.
Now that your test is successful, you can deploy your changes to the production stage. In the Resources pane, choose Deploy API. In Deployment stage, choose prod. In Deployment description, type a description of the deployment, and then choose Deploy.
The prod Stage Editor page appears, displaying the Invoke URL. In the CloudWatch Settings section, choose Enable CloudWatch Logs so you can see logs and metrics from this stage. Keep in mind that CloudWatch logs are charged to your account separately from API Gateway.
You have now deployed an API that is backed by V1 of the Lambda function.
Testing V1 of the API
Now you’ll test V1 of the API with curl and confirm its behavior. First, copy the Invoke URL, append the query parameters ?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z, and make a GET request with curl.
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z"

[
    {
        "orderId": "order-0",
        "orderTs": "2015-10-01T00:00:00.000Z",
        "orderAmount": 82
    },
    {
        "orderId": "order-1",
        "orderTs": "2015-10-02T00:00:00.000Z",
        "orderAmount": 3
    },
    {
        "orderId": "order-2",
        "orderTs": "2015-10-03T00:00:00.000Z",
        "orderAmount": 75
    }
]
This should output a JSON response with 3 orders. Next, check what happens if you use a longer time-range by changing the end timestamp to 2015-10-15T00:00:00Z:
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-15T00:00:00Z"

{
    "errorMessage": "Too many results, try your request with a shorter duration"
}
You see that the API returns an error indicating the time range is too long. This is correct V1 API behavior, so you are all set.
Updating the Lambda Function to V2
Next, you will update the Lambda function code to V2. This simulates the scenario of the back end of your API changing in a manner that is not backward compatible.
Switch to the Lambda console and choose the getOrders function. In the code input box, copy and paste the code snippet below. Be sure to replace all of the existing V1 code with V2 code.
var MILISECONDS_DAY = 3600*1000*24;

exports.handler = function(event, context) {
    console.log('start =', event.start);
    console.log('end =', event.end);

    var start = Date.parse(decodeURIComponent(event.start));
    var end = Date.parse(decodeURIComponent(event.end));

    // The optional pagination token is a base64-encoded timestamp.
    var token = NaN;
    if(event.token) {
        var s = new Buffer(event.token, 'base64').toString();
        token = Date.parse(s);
    }

    if(isNaN(start)) {
        return context.fail("Invalid parameter 'start'");
    }
    if(isNaN(end)) {
        return context.fail("Invalid parameter 'end'");
    }
    if(!isNaN(token)) {
        start = token;
    }

    var duration = end - start;

    if(duration <= 0) {
        return context.fail("Invalid parameters: 'end' must be greater than 'start'");
    }

    var orderList = [];
    var count = 0;

    console.log('start=', start, ' end=', end);

    // Generate at most 5 orders per page.
    for(var d = start; d < end && count < 5; d += MILISECONDS_DAY) {
        var order = {
            "orderId" : "order-" + count,
            "orderTs" : (new Date(d).toISOString()),
            "orderAmount" : Math.round(Math.random()*100.0)
        };
        count += 1;
        orderList.push(order);
    }

    // If more results remain, return a token pointing at the next page.
    var nextToken = null;
    if(d < end) {
        nextToken = new Buffer(new Date(d).toISOString()).toString('base64');
    }

    console.log('Generated', count, 'orders');

    var result = {
        orders : orderList
    };

    if(nextToken) {
        result.nextToken = nextToken;
    }
    context.succeed(result);
};
Choose Save to save V2 of the code, and then choose Test. Note that the output structure is different in V2: there is a second level of nesting in the JSON document. This is the updated V2 output structure that differs from V1.
Next, repeat the curl tests from the previous section. First, make a request for a short time duration. Notice that the response structure is nested differently from V1; this is a problem for API consumers who expect V1 responses.
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z"

{
    "orders": [
        {
            "orderId": "order-0",
            "orderTs": "2015-10-01T00:00:00.000Z",
            "orderAmount": 8
        },
        {
            "orderId": "order-1",
            "orderTs": "2015-10-02T00:00:00.000Z",
            "orderAmount": 92
        },
        {
            "orderId": "order-2",
            "orderTs": "2015-10-03T00:00:00.000Z",
            "orderAmount": 84
        }
    ]
}
Now, repeat the request for a longer time range and you’ll see that instead of an error message, you now get the first page of information with 5 orders and a nextToken that will let you request the next page. This is the paginated behavior of V2 of the API.
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-15T00:00:00Z"

{
    "orders": [
        {
            "orderId": "order-0",
            "orderTs": "2015-10-01T00:00:00.000Z",
            "orderAmount": 62
        },
        {
            "orderId": "order-1",
            "orderTs": "2015-10-02T00:00:00.000Z",
            "orderAmount": 59
        },
        {
            "orderId": "order-2",
            "orderTs": "2015-10-03T00:00:00.000Z",
            "orderAmount": 21
        },
        {
            "orderId": "order-3",
            "orderTs": "2015-10-04T00:00:00.000Z",
            "orderAmount": 95
        },
        {
            "orderId": "order-4",
            "orderTs": "2015-10-05T00:00:00.000Z",
            "orderAmount": 84
        }
    ],
    "nextToken": "MjAxNS0xMC0wNlQwMDowMDowMC4wMDBa"
}
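A consumer of V2 pages through results by sending the returned nextToken back as the token query parameter until no token is returned. Here is a minimal Node.js sketch of that loop; the invoke URL and date range are placeholders:
var https = require('https');

var INVOKE_URL = 'https://your-invoke-url-and-path/orders';
var QUERY = '?start=2015-10-01T00:00:00Z&end=2015-10-15T00:00:00Z';

function fetchPage(token, allOrders) {
    // Append the pagination token, if we have one, to the query string.
    var url = INVOKE_URL + QUERY + (token ? '&token=' + encodeURIComponent(token) : '');
    https.get(url, function(res) {
        var body = '';
        res.on('data', function(chunk) { body += chunk; });
        res.on('end', function() {
            var page = JSON.parse(body);
            allOrders = allOrders.concat(page.orders);
            if (page.nextToken) {
                fetchPage(page.nextToken, allOrders); // request the next page
            } else {
                console.log('Fetched', allOrders.length, 'orders in total');
            }
        });
    });
}

fetchPage(null, []);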
It is clear from these tests that V2 will break the current V1 consumers’ code. Next, we show how to isolate your V1 consumers from this change using API Gateway mapping templates.
Cloning the API
Because you want both V1 and V2 of the API to be available simultaneously to your API consumers, you first clone the API to create a V2 API. You then modify the V1 API to make it behave as your V1 consumers expect.
Go back to the API Gateway console, and choose Create API. Configure the new API with the following values:

In API name, type StoreFrontV2
In Clone from API, choose StoreFront
In Description, type a description
Choose Create API to clone the StoreFront API as StoreFrontV2

Open the StoreFrontV2 API and choose the GET method of the /orders resource. Next, choose Integration Request. Choose the edit icon next to the getOrders Lambda function name.
Keep the name as getOrders and choose the check mark to accept. In the pop-up, choose OK to allow the StoreFrontV2 API to invoke the Lambda function.
Once you have granted API Gateway permissions to access your Lambda function, choose Deploy API. In Deployment stage, choose New stage. In Stage name, type prod, and then choose Deploy. Now you have a new StoreFrontV2 API that invokes the same Lambda function. Confirm that the API has V2 behavior by testing it with curl. Use the Invoke URL for the StoreFrontV2 API instead of the previously used Invoke URL.
Updating V1 of the API
Now you will use mapping templates to update the original StoreFront API to preserve V1 behavior. This enables existing consumers to continue to consume the API without having to make any changes to their code.
Navigate to the API Gateway console, choose the StoreFront API and open the GET method of the /orders resource. On the Method Execution details page, choose Integration Response.
Expand the default response mapping (HTTP status 200), and expand the Mapping Templates section. Choose Add Mapping Template.
In Content-type, type application/json and choose the check mark to accept. Choose the edit icon next to Output passthrough to edit the mapping templates. Select Mapping template from the drop down and copy and paste the mapping template below into the Template input box.
#set($nextToken = $input.path('$.nextToken'))

#if($nextToken && $nextToken.length() != 0)
{
    "errorMessage" : "Too many results, try your request with a shorter duration"
}
#else
$input.json('$.orders[*]')
#end
Choose the check mark to accept and save. The mapping template transforms the V2 output from the Lambda function into the original V1 response. The mapping template also generates an error if the V2 response indicates that there are more results than can fit in one page. This emulates V1 behavior.
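Concretely, a V2 body with no nextToken, such as the following (values illustrative):
{
    "orders": [
        {
            "orderId": "order-0",
            "orderTs": "2015-10-01T00:00:00.000Z",
            "orderAmount": 82
        }
    ]
}
is unwrapped by $input.json('$.orders[*]') into the bare V1-style array of orders, while any body carrying a non-empty nextToken is replaced entirely with the V1-style error object.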
Finally, choose Save on the response mapping page. Deploy your StoreFront API, choosing prod as the stage for your changes.
Verifying V1 Behavior
Now that you have updated the original API to emulate V1 behavior, you can verify that using curl again. You will essentially repeat the tests from the earlier section. First, confirm that you have the Invoke URL for the original StoreFront API. You can always find the Invoke URL by looking at the stage details for the API.
Try a test with a short time range.
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-04T00:00:00Z"

[
    {
        "orderId": "order-0",
        "orderTs": "2015-10-01T00:00:00.000Z",
        "orderAmount": 50
    },
    {
        "orderId": "order-1",
        "orderTs": "2015-10-02T00:00:00.000Z",
        "orderAmount": 16
    },
    {
        "orderId": "order-2",
        "orderTs": "2015-10-03T00:00:00.000Z",
        "orderAmount": 14
    }
]
Try a test with a longer time range and note that the V1 behavior of returning an error is restored.
$ curl -s "https://your-invoke-url-and-path/orders?start=2015-10-01T00:00:00Z&end=2015-10-15T00:00:00Z"

{
    "errorMessage": "Too many results, try your request with a shorter duration"
}
Congratulations! You have successfully used Amazon API Gateway mapping templates to expose both V1 and V2 versions of your API, allowing your API consumers to migrate to V2 on their own schedule.
Be sure to delete the two APIs and the AWS Lambda function that you created for this walkthrough to avoid being charged for their use.
