Tag Archives: Amazon API Gateway

Things to Consider When You Build REST APIs with Amazon API Gateway

Post Syndicated from George Mao original https://aws.amazon.com/blogs/architecture/things-to-consider-when-you-build-rest-apis-with-amazon-api-gateway/

A few weeks ago, we kicked off this series with a discussion on REST vs GraphQL APIs. This post will dive deeper into the things an API architect or developer should consider when building REST APIs with Amazon API Gateway.

Request Rate (a.k.a. “TPS”)

Request rate is the first thing you should consider when designing REST APIs. By default, API Gateway allows up to 10,000 requests per second. You should use the built-in Amazon CloudWatch metrics to review how your API is being used. The Count metric in particular can help you review the total number of API requests in a given period.

It’s important to understand the actual request rate that your architecture is capable of supporting. For example, consider this architecture:

[Figure: A REST API in which API Gateway invokes a Lambda function that queries an Amazon RDS database]

This API accepts GET requests to retrieve a user’s cart by using a Lambda function to perform SQL queries against a relational database managed in RDS. If you receive a large burst of traffic, both API Gateway and Lambda will scale in response to the traffic. However, relational databases typically have limited memory/CPU capacity and will quickly exhaust the total number of connections.

As an API architect, you should design your APIs to protect your downstream applications. You can start by defining API Keys and requiring your clients to deliver a key with incoming requests. This lets you track each application or client who is consuming your API. This also lets you create Usage Plans and throttle your clients according to the plan you define. For example, if you know your architecture is capable of sustaining 200 requests per second, you should define a Usage Plan that sets a rate of 200 RPS and optionally configure a quota to allow a certain number of requests per day, week, or month.
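
As a sketch of how this might look with the AWS CLI (all names, IDs, and limits below are placeholders), you could create an API key, a usage plan, and the association between them:

# Create an API key for a client application
aws apigateway create-api-key --name client-app-key --enabled

# Throttle clients to 200 RPS with a monthly quota (values are illustrative)
aws apigateway create-usage-plan \
    --name standard-plan \
    --api-stages apiId=<api-id>,stage=PROD \
    --throttle rateLimit=200,burstLimit=400 \
    --quota limit=1000000,period=MONTH

# Attach the API key to the usage plan
aws apigateway create-usage-plan-key \
    --usage-plan-id <usage-plan-id> \
    --key-id <api-key-id> \
    --key-type API_KEY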

Additionally, API Gateway lets you define throttling settings for the whole stage or per method. If you know that a GET operation is less resource intensive than a POST operation, you can override the stage settings and set different throttling settings for each resource.
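
For example, you could override the stage defaults for a single method with a sketch like this (the API ID, stage, and limits are placeholders; /~1cart is the escaped form of the /cart resource path):

aws apigateway update-stage \
    --rest-api-id <api-id> \
    --stage-name PROD \
    --patch-operations \
      op=replace,path=/~1cart/POST/throttling/rateLimit,value=50 \
      op=replace,path=/~1cart/POST/throttling/burstLimit,value=100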

Integrations and Design patterns

The example above describes a synchronous, tightly coupled architecture where the request must wait for a response from the backend integration (RDS in this case). This results in system scaling characteristics that are the lowest common denominator of all components. Instead, you should look for opportunities to design an asynchronous, loosely coupled architecture. A decoupled architecture separates the data ingestion from the data processing and allows you to scale each system separately. Consider this new architecture:

[Figure: A decoupled REST API that writes orders to an Amazon SQS queue for asynchronous backend processing]

This architecture enables ingestion of orders directly into a highly scalable and durable data store such as Amazon Simple Queue Service (SQS). Your backend can process these orders at any speed that is suitable for your business requirements and system capacity. Most importantly, the health of the backend processing system does not impact your ability to continue accepting orders.
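
API Gateway can even write to SQS directly through an AWS service integration, so no Lambda function is needed on the ingestion path. A minimal sketch (the IDs, account number, queue name, and role are placeholders):

# Integrate the POST method directly with SQS SendMessage
aws apigateway put-integration \
    --rest-api-id <api-id> \
    --resource-id <resource-id> \
    --http-method POST \
    --type AWS \
    --integration-http-method POST \
    --uri 'arn:aws:apigateway:us-east-1:sqs:path/123456789012/orders-queue' \
    --credentials 'arn:aws:iam::123456789012:role/apigw-sqs-send-role' \
    --request-parameters "{\"integration.request.header.Content-Type\":\"'application/x-www-form-urlencoded'\"}" \
    --request-templates '{"application/json":"Action=SendMessage&MessageBody=$input.body"}'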

Security

Security with API Gateway falls into three major buckets, and I’ll outline them below. Remember, you should enable all three options to combine multiple layers of security.

Option 1 (Application Firewall)

You can enable AWS Web Application Firewall (WAF) for your entire API. WAF inspects all incoming requests and blocks requests that fail your inspection rules. For example, WAF can inspect requests for SQL injection and cross-site scripting attacks, or restrict access to whitelisted IP addresses.

Option 2 (Resource Policy)

You can apply a Resource Policy that protects your entire API. This is an IAM policy attached to your API, and you can use it to whitelist or blacklist client IP ranges or to allow AWS accounts and AWS principals to access your API.
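
A resource policy that whitelists a client IP range might look like the following sketch (the account number, API ID, and CIDR block are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:123456789012:<api-id>/*",
      "Condition": {
        "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}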

Option 3 (AuthZ)

  1. IAM: This AuthZ option requires clients to sign requests with the AWS v4 signing process. The associated IAM role or user must have permissions to perform the execute-api:Invoke action against the API.
  2. Cognito: This AuthZ option requires clients to log in to Cognito and then pass the returned ID or access JWT in the Authorization header.
  3. Lambda Auth: This AuthZ option is the most flexible and lets you execute a Lambda function to perform any custom auth strategy needed; a common use case for this is OpenID Connect. The authorizer grants or denies access by returning an IAM policy, as sketched below.
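
A minimal sketch of a Lambda Authorizer’s Allow response (the principal, account number, and resource ARN are placeholders):

{
  "principalId": "user-123",
  "policyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "execute-api:Invoke",
        "Effect": "Allow",
        "Resource": "arn:aws:execute-api:us-east-1:123456789012:<api-id>/PROD/GET/cart"
      }
    ]
  }
}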

A Couple of Tips

Tip #1: Use Stage variables to avoid hard coding your backend Lambda and HTTP integrations. For example, you probably have multiple stages such as “QA” and “PROD” or “V1” and “V2.” You can define the same variable in each stage and specify different values. For example, you might have an API that executes a Lambda function. In each stage, define the same variable, called functionArn. You can reference this variable as your Lambda ARN during your integration configuration using this notation: ${stageVariables.functionArn}. API Gateway will inject the corresponding value for the stage dynamically at runtime, allowing you to execute different Lambda functions by stage.
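
For instance, you could assign a different functionArn value to each stage with a sketch like this (the API ID, stage names, and function ARNs are placeholders):

aws apigateway update-stage \
    --rest-api-id <api-id> \
    --stage-name QA \
    --patch-operations op=replace,path=/variables/functionArn,value=arn:aws:lambda:us-east-1:123456789012:function:cart-qa

aws apigateway update-stage \
    --rest-api-id <api-id> \
    --stage-name PROD \
    --patch-operations op=replace,path=/variables/functionArn,value=arn:aws:lambda:us-east-1:123456789012:function:cart-prod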

Tip #2: Use Path and Query variables to inject dynamic values into your HTTP integrations. For example, your cart API may define a userId Path variable that is used to look up a user’s cart: /cart/profile/{userId}. You can inject this variable directly into your backend HTTP integration URL settings like this: http://myapi.someds.com/cart/profile/{userId}

Summary

This post covered strategies you should use to ensure your REST API architectures are scalable and easy to maintain. I hope you’ve enjoyed this post. Our next post will cover GraphQL API architectures with AWS AppSync.

About the Author

George Mao is a Specialist Solutions Architect at Amazon Web Services, focused on the Serverless platform. George is responsible for helping customers design and operate Serverless applications using services like Lambda, API Gateway, Cognito, and DynamoDB. He is a regular speaker at AWS Summits, re:Invent, and various tech events. George is a software engineer and enjoys contributing to open source projects, delivering technical presentations at technology events, and working with customers to design their applications in the Cloud. George holds a Bachelor of Computer Science and Masters of IT from Virginia Tech.

ICYMI: Serverless Q2 2019

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/icymi-serverless-q2-2019/

This post is courtesy of Moheeb Zara, Senior Developer Advocate – AWS Serverless

Welcome to the sixth edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

In case you missed our last ICYMI, check out what happened last quarter here.

April - June 2019

Amazon EventBridge

Before we dive into all that happened in Q2, we’re excited about this quarter’s launch of Amazon EventBridge, the serverless event bus that connects application data from your own apps, SaaS apps, and AWS services. This allows you to create powerful event-driven serverless applications using a variety of event sources.

Our very own AWS Solutions Architect, Mike Deck, sat down with AWS Serverless Hero Jeremy Daly and recorded a podcast on Amazon EventBridge. It’s a worthy listen if you’re interested in exploring all the features offered by this launch.

Now, back to Q2, here’s what’s new.

AWS Lambda

Lambda Monitoring

Amazon CloudWatch Logs Insights now allows you to see statistics from recent invocations of your Lambda functions in the Lambda monitoring tab.

Additionally, as of June, you can monitor the Lambda@Edge functions associated with your Amazon CloudFront distributions directly from your Amazon CloudFront console. This includes a revamped monitoring dashboard for CloudFront distributions and Lambda@Edge functions.

AWS Step Functions

Step Functions

AWS Step Functions now supports workflow execution events, which help in the building and monitoring of event-driven serverless workflows. Execution event notifications can be delivered automatically through CloudWatch Events/Amazon EventBridge when a workflow starts or completes. This allows services such as AWS Lambda, Amazon SNS, Amazon Kinesis, or AWS Step Functions to respond to these events.

Additionally, you can use callback patterns to automate workflows for applications with human activities and custom integrations with third-party services. You can create callback patterns in minutes with less code to write and maintain; they run without servers and infrastructure to manage, and scale reliably.
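
As a sketch of the callback pattern in Amazon States Language (the queue URL is a placeholder), a state can send its task token to an external system and pause until SendTaskSuccess or SendTaskFailure is called with that token:

{
  "StartAt": "WaitForApproval",
  "States": {
    "WaitForApproval": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sqs:sendMessage.waitForTaskToken",
      "Parameters": {
        "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/approvals",
        "MessageBody": {
          "OrderId.$": "$.orderId",
          "TaskToken.$": "$$.Task.Token"
        }
      },
      "Next": "Complete"
    },
    "Complete": { "Type": "Succeed" }
  }
}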

Amazon API Gateway

API Gateway Tag Based Control

Amazon API Gateway now offers tag-based access control for WebSocket APIs using AWS Identity and Access Management (IAM) policies, allowing you to categorize API Gateway resources for WebSocket APIs by purpose, owner, or other criteria. With the addition of tag-based access control to WebSocket resources, you can now grant permissions to WebSocket resources at various levels by creating policies based on tags. For example, you can grant admins full access while limiting access for developers.

You can now enforce a minimum Transport Layer Security (TLS) version and cipher suites through a security policy for connecting to your Amazon API Gateway custom domain.
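
For example, the following sketch (the domain name is a placeholder) enforces a minimum of TLS 1.2 on a custom domain:

aws apigateway update-domain-name \
    --domain-name api.example.com \
    --patch-operations op=replace,path=/securityPolicy,value=TLS_1_2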

In addition, Amazon API Gateway now allows you to define VPC Endpoint policies, enabling you to specify which Private APIs a VPC Endpoint can connect to. This enables granular security control using VPC Endpoint policies.

AWS Amplify

Amplify CLI (part of the open source Amplify Framework) now includes support for adding and configuring AWS Lambda triggers for events when using Amazon Cognito, Amazon Simple Storage Service, and Amazon DynamoDB as event sources. This means you can set up custom authentication flows for mobile and web applications via the Amplify CLI, with an Amazon Cognito User Pool as the authentication provider.

Amplify Console

Amplify Console, a Git-based workflow for continuous deployment and hosting of full-stack serverless web apps, launched several updates to the build service, including SAM CLI and custom container support.

Amazon Kinesis

Amazon Kinesis Data Firehose can now utilize AWS PrivateLink to securely ingest data. AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications, securely over the Amazon network. When AWS PrivateLink is used with Amazon Kinesis Data Firehose, all traffic to a Kinesis Data Firehose from a VPC flows over a private connection.

You can now assign AWS resource tags to applications in Amazon Kinesis Data Analytics. These key/value tags can be used to organize and identify resources, create cost allocation reports, and control access to resources within Amazon Kinesis Data Analytics.

Amazon Kinesis Data Firehose is now available in the AWS GovCloud (US-East), Europe (Stockholm), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and EU (London) regions.

For a complete list of where Amazon Kinesis Data Analytics is available, please see the AWS Region Table.

AWS Cloud9

Cloud9 Quick Starts

Amazon Web Services (AWS) Cloud9 integrated development environment (IDE) now has a Quick Start which deploys in the AWS cloud in about 30 minutes. This enables organizations to provide developers a powerful cloud-based IDE that can edit, run, and debug code in the browser and allow easy sharing and collaboration.

AWS Cloud9 is also now available in the EU (Frankfurt) and Asia Pacific (Tokyo) regions. For a current list of supported regions, see AWS Regions and Endpoints in the AWS documentation.

Amazon DynamoDB

You can now tag Amazon DynamoDB tables when you create them. Tags are labels you can attach to AWS resources to make them easier to manage, search, and filter.  Tagging support has also been extended to the AWS GovCloud (US) Regions.
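
For example, tagging at creation time might look like this sketch (the table name, key schema, and tags are illustrative):

aws dynamodb create-table \
    --table-name Orders \
    --attribute-definitions AttributeName=orderId,AttributeType=S \
    --key-schema AttributeName=orderId,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST \
    --tags Key=project,Value=storefront Key=env,Value=prod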

DynamoDBMapper now supports Amazon DynamoDB transactional API calls. This support is included within the AWS SDK for Java. These transactional APIs provide developers atomic, consistent, isolated, and durable (ACID) operations to help ensure data correctness.

Amazon DynamoDB now applies adaptive capacity in real time in response to changing application traffic patterns, which helps you maintain uninterrupted performance indefinitely, even for imbalanced workloads.

AWS Training and Certification has launched Amazon DynamoDB: Building NoSQL Database–Driven Applications, a new self-paced, digital course available exclusively on edX.

Amazon Aurora

Amazon Aurora Serverless MySQL 5.6 can now be accessed using the built-in Data API, enabling you to access Aurora Serverless with web services-based applications, including AWS Lambda, AWS AppSync, and AWS Cloud9. For more, check out this post.

Sharing snapshots of Aurora Serverless DB clusters with other AWS accounts or publicly is now possible. We are also giving you the ability to copy Aurora Serverless DB cluster snapshots across AWS regions.

You can now set the minimum capacity of your Aurora Serverless DB clusters to 1 Aurora Capacity Unit (ACU). With Aurora Serverless, you specify the minimum and maximum ACUs for your Aurora Serverless DB cluster instead of provisioning and managing database instances. Each ACU is a combination of processing and memory capacity. By setting the minimum capacity to 1 ACU, you can keep your Aurora Serverless DB cluster running at a lower cost.
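
For example, setting the capacity range might look like this sketch (the cluster identifier and limits are placeholders):

aws rds modify-db-cluster \
    --db-cluster-identifier my-serverless-cluster \
    --scaling-configuration MinCapacity=1,MaxCapacity=8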

AWS Serverless Application Repository

The AWS Serverless Application Repository is now available in 17 regions with the addition of the AWS GovCloud (US-West) region.

Region support includes Asia Pacific (Mumbai, Singapore, Sydney, Tokyo), Canada (Central), EU (Frankfurt, Ireland, London, Paris, Stockholm), South America (São Paulo), US West (N. California, Oregon), and US East (N. Virginia, Ohio).

Amazon Cognito

Amazon Cognito has launched a new API – AdminSetUserPassword – for the Cognito User Pool service that provides a way for administrators to set temporary or permanent passwords for their end users. This functionality is available for end users even when their verified phone or email is unavailable.
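
A sketch of the new API from the AWS CLI (the pool ID, user name, and password are placeholders):

aws cognito-idp admin-set-user-password \
    --user-pool-id us-east-1_EXAMPLE \
    --username jdoe \
    --password 'NewS3cret!Pass' \
    --permanent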

Serverless Posts

April

May

June

Events

Events this quarter

Senior Developer Advocates for AWS Serverless spoke at several conferences this quarter. Here are some recordings worth watching!

Tech Talks

We hold several AWS Online Tech Talks covering serverless topics throughout the year, so look out for them in the Serverless section of the AWS Online Tech Talks page. Here are the ones from Q2.

Twitch

Twitch Series

In April, we started a 13-week deep dive into building APIs on AWS as part of our Twitch Build On series. The Building Happy Little APIs series covers the common and not-so-common use cases for APIs on AWS and the features available to customers as they look to build secure, scalable, efficient, and flexible APIs.

There are also a number of other helpful video series covering Serverless available on the AWS Twitch Channel.

Build with Serverless on Twitch

Serverless expert and AWS Specialist Solutions Architect, Heitor Lessa, has been hosting a weekly Twitch series since April. Join him and others as they build an end-to-end airline booking solution using serverless. The final episode airs on Wednesday, August 7th at 8:00am PT.

Here’s a recap of the last quarter:

AWS re:Invent

AWS re:Invent 2019

AWS re:Invent 2019 is around the corner! From December 2 – 6 in Las Vegas, Nevada, join tens of thousands of AWS customers to learn, share ideas, and see exciting keynote announcements. Be sure to take a look at the growing catalog of serverless sessions this year.

Register for AWS re:Invent now!

What did we do at AWS re:Invent 2018? Check out our recap here: AWS re:Invent 2018 Recap at the San Francisco Loft

AWS Serverless Heroes

We urge you to explore the efforts of our AWS Serverless Heroes Community. This is a worldwide network of AWS Serverless experts with a diverse background of experience. For example, check out this post from last month where Marcia Villalba demonstrates how to set up unit tests for serverless applications.

Still looking for more?

The Serverless landing page has lots of information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials.

How to Architect APIs for Scale and Security

Post Syndicated from George Mao original https://aws.amazon.com/blogs/architecture/how-to-architect-apis-for-scale-and-security/

We hope you’ve enjoyed reading our posts on best practices for your serverless applications. This series of posts will focus on best practices and concepts you should be familiar with when you architect APIs for your applications. We’ll kick this first post off with a comparison between REST and GraphQL API architectures.

Introduction

Developers have been creating RESTful APIs for a long time, typically using HTTP methods, such as GET, POST, and DELETE, to perform operations against the API. Amazon API Gateway is designed to make it easy for developers to create APIs at any scale without managing any servers. API Gateway will handle all of the heavy lifting needed, including traffic management, security, monitoring, and version/environment management.

GraphQL APIs are relatively new, with a primary design goal of allowing clients to define the structure of the data that they require. AWS AppSync allows you to create flexible APIs that access and combine multiple data sources.

REST APIs

Architecting a REST API is structured around creating combinations of resources and methods. Resources are paths that are present in the request URL, and methods are HTTP actions that you take against the resource. For example, you may define a resource called “cart”: http://myapi.somecompany.com/cart. The cart resource can respond to HTTP POSTs for adding items to a shopping cart or HTTP GETs for retrieving the items in your cart. With API Gateway, you would implement the API like this:
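
A sketch of that setup with the AWS CLI might look like the following (the API, resource, and parent IDs are placeholders):

aws apigateway create-rest-api --name 'cart-api'

# Create the /cart resource under the root resource
aws apigateway create-resource \
    --rest-api-id <api-id> \
    --parent-id <root-resource-id> \
    --path-part cart

# Allow POST (add item) and GET (retrieve cart) on /cart
aws apigateway put-method --rest-api-id <api-id> \
    --resource-id <cart-resource-id> --http-method POST --authorization-type NONE
aws apigateway put-method --rest-api-id <api-id> \
    --resource-id <cart-resource-id> --http-method GET --authorization-type NONE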

Behind the scenes, you can integrate with nearly any backend to provide the compute logic, data persistence, or business workflows. For example, you can configure an AWS Lambda function to perform the addition of an item to a shopping cart (HTTP POST). You can also use API Gateway to directly interact with AWS services like Amazon DynamoDB. An example is using API Gateway to retrieve items in a cart from DynamoDB (HTTP GET).

RESTful APIs tend to use Path and Query parameters to inject dynamic values into APIs. For example, if you want to retrieve a specific cart with an id of abcd123, you could design the API to accept a query or path parameter that specifies the cartID:

/cart?cartId=abcd123 or /cart/abcd123

Finally, when you need to add functionality to your API, the typical approach would be to add additional resources.  For example, to add a checkout function, you could add a resource called /cart/checkout.

GraphQL APIs

Architecting GraphQL APIs is not structured around resources and HTTP verbs; instead, you define your data types and configure where the operations will retrieve data through a resolver. An operation is either a query or a mutation. Queries simply retrieve data, while mutations are used when you want to modify data. If we use the same example from above, you could define a cart data type as follows:

type Cart {
  cartId: ID!
  customerId: String
  name: String
  email: String
  items: [String]
}

Next, you configure the fields in the Cart to map to specific data sources. AppSync is then responsible for executing resolvers to obtain appropriate information. Your client will send an HTTP POST to the AppSync endpoint with the exact shape of the data they require. AppSync is responsible for executing all configured resolvers to obtain the requested data and return a single response to the client.

[Figure: Two GraphQL queries requesting different sets of fields from the same endpoint]
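
The two queries might look something like this sketch (the getCart query field and its argument are assumptions based on the Cart type above):

query GetFullCart {
  getCart(cartId: "abcd123") {
    customerId
    name
    email
    items
  }
}

query GetCustomerInfo {
  getCart(cartId: "abcd123") {
    customerId
    name
    email
  }
}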

With GraphQL, the client can change their query to specify the exact data that is needed. The above example shows two queries that ask for different sets of information. The first getCart query asks for all of the static customer information (customerId, name, email) and a list of items in the cart. The second query asks only for the customer’s static information. Based on the incoming query, AppSync will execute the correct resolver(s) to obtain the data. The client submits the payload via an HTTP POST to the same endpoint in both cases. The payload of the POST body is the only thing that changes.

As we saw above, a REST-based implementation would require the API to define multiple HTTP resources and methods or path/query parameters to accomplish this.

AppSync also provides other powerful features that are not possible with REST APIs such as real-time data synchronization and multiple methods of authentication at the field and operation level.

Summary

As you can see, these are two different approaches to architecting your API. In our next few posts, we’ll cover specific features and architecture details you should be aware of when choosing between API Gateway (REST) and AppSync (GraphQL) APIs. In the meantime, you can read more about working with API Gateway and AppSync.

About the Author

George Mao is a Specialist Solutions Architect at Amazon Web Services, focused on the Serverless platform. George is responsible for helping customers design and operate Serverless applications using services like Lambda, API Gateway, Cognito, and DynamoDB. He is a regular speaker at AWS Summits, re:Invent, and various tech events. George is a software engineer and enjoys contributing to open source projects, delivering technical presentations at technology events, and working with customers to design their applications in the Cloud. George holds a Bachelor of Computer Science and Masters of IT from Virginia Tech.

Ten Things Serverless Architects Should Know

Post Syndicated from Justin Pirtle original https://aws.amazon.com/blogs/architecture/ten-things-serverless-architects-should-know/

Building on the first three parts of the AWS Lambda scaling and best practices series where you learned how to design serverless apps for massive scale, AWS Lambda’s different invocation models, and best practices for developing with AWS Lambda, we now invite you to take your serverless knowledge to the next level by reviewing the following 10 topics to deepen your serverless skills.

1: API and Microservices Design

With the move to microservices-based architectures, decomposing monolithic applications and decoupling dependencies is more important than ever. Learn more about how to design and deploy your microservices with Amazon API Gateway:

Get hands-on experience building out a serverless API with API Gateway, AWS Lambda, and Amazon DynamoDB powering a serverless web application by completing the self-paced Wild Rydes web application workshop.

Figure 1: WildRydes serverless web application workshop

2: Event-driven Architectures and Asynchronous Messaging Patterns

When building event-driven architectures, whether you’re looking for simple queueing and message buffering or a more intricate event-based choreography pattern, it’s valuable to learn about the mechanisms to enable asynchronous messaging and integration. These are enabled primarily through the use of queues or streams as a message buffer and topics for pub/sub messaging. Understand when to use each and the unique advantages and features of all three:

Get hands-on experience building a real-time data processing application using Amazon Kinesis Data Streams and AWS Lambda by completing the self-paced Wild Rydes data processing workshop.

3: Workflow Orchestration in a Distributed, Microservices Environment

In distributed microservices architectures, you must design coordinated transactions in different ways than traditional database-based ACID transactions, which are typically implemented using a monolithic relational database. Instead, you must implement coordinated sequenced invocations across services along with rollback and retry mechanisms. For workloads where significant orchestration logic is required and you want to use more of an orchestrator pattern than the event choreography pattern mentioned above, AWS Step Functions enables building complex workflows and distributed transactions through integration with a variety of AWS services, including AWS Lambda. Learn about the options you have to build your business workflows and keep orchestration logic out of your AWS Lambda code:

Get hands-on experience building an image processing workflow, using computer vision AI services with Amazon Rekognition and AWS Step Functions to orchestrate all logic and steps, with the self-paced Serverless image processing workflow workshop.

Figure 2: Several AWS Lambda functions managed by an AWS Step Functions state machine

4: Lambda Computing Environment and Programming Model

Though AWS Lambda is a service that is quick to get started, there is value in learning more about the AWS Lambda computing environment and how to take advantage of deeper performance and cost optimization strategies with the AWS Lambda runtime. Take your understanding and skills of AWS Lambda to the next level:

5: Serverless Deployment Automation and CI/CD Patterns

When dealing with a large number of microservices or smaller components—such as AWS Lambda functions all working together as part of a broader application—it’s critical to integrate automation and code management into your application early on to efficiently create, deploy, and version your serverless architectures. AWS offers several first-party deployment tools and frameworks for serverless architectures, including the AWS Serverless Application Model (SAM), the AWS Cloud Development Kit (CDK), AWS Amplify, and AWS Chalice. Additionally, there are several third-party deployment tools and frameworks available, such as the Serverless Framework, Claudia.js, Sparta, or Zappa. You can also build your own homegrown framework. The important thing is to ensure your automation strategy works for your use case and team, and supports your planned data source integrations and development workflow. Learn more about the available options.
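
As one example of the deployment workflow, a minimal AWS SAM package-and-deploy flow looks like this sketch (the bucket and stack names are placeholders):

# Package local code and upload build artifacts to S3
sam package \
    --template-file template.yaml \
    --s3-bucket <artifact-bucket> \
    --output-template-file packaged.yaml

# Deploy the packaged template as a CloudFormation stack
sam deploy \
    --template-file packaged.yaml \
    --stack-name my-serverless-app \
    --capabilities CAPABILITY_IAM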

Learn how to build a full CI/CD pipeline and other DevOps deployment automation with the following workshops:

6: Serverless Identity Management, Authentication, and Authorization

Modern application developers need to plan for and integrate identity management into their applications while implementing robust authentication and authorization functionality. With Amazon Cognito, you can deploy serverless identity management and secure sign-up and sign-in directly into your applications. Beyond authentication, Amazon API Gateway also allows developers to granularly manage authorization logic at the gateway layer and authorize requests directly using several types of native authorization.

Learn more about the options and benefits of each:

Get hands-on experience working with Amazon Cognito, AWS Identity and Access Management (IAM), and Amazon API Gateway with the Serverless Identity Management, Authentication, and Authorization Workshop.

Figure 3: Serverless Identity Management, Authentication, and Authorization Workshop

7: End-to-End Security Techniques

Beyond identity and authentication/authorization, there are many other areas to secure in a serverless application. These include:

  • Input and request validation
  • Dependency and vulnerability management
  • Secure secrets storage and retrieval
  • IAM execution roles and invocation policies
  • Data encryption at-rest/in-transit
  • Metering and throttling access
  • Regulatory compliance concerns

Thankfully, there are AWS offerings and integrations for each of these areas. Learn more about the options and benefits of each:

Get hands-on experience adding end-to-end security with the techniques mentioned above into a serverless application with the Serverless Security Workshop.

8: Application Observability with Comprehensive Logging, Metrics, and Tracing

Before taking your application to production, it’s critical that you ensure your application is fully observable, both at a microservice or component level, as well as overall through comprehensive logging, metrics at various granularity, and tracing to understand distributed system performance and end user experiences end-to-end. With many different components making up modern architectures, having centralized visibility into all of your key logs, metrics, and end-to-end traces will make it much easier to monitor and understand your end users’ experiences. Learn more about the options for observability of your AWS serverless application:

9: Ensuring Your Application is Well-Architected

Adding onto the considerations mentioned above, we suggest architecting your applications more holistically against the AWS Well-Architected Framework. This framework includes the five key pillars: security, reliability, performance efficiency, cost optimization, and operational excellence. Additionally, there is a serverless-specific lens to the Well-Architected Framework, which more specifically looks at key serverless scenarios/use cases such as RESTful microservices, Alexa skills, mobile backends, stream processing, and web applications, and how they can implement best practices to be Well-Architected. More information:

10: Continuing your Learning as Serverless Computing Continues to Evolve

As we’ve discussed, there are many opportunities to dive deeper into serverless architectures in a variety of areas. Though the resources shared above should be helpful in familiarizing yourself with key concepts and techniques, there’s nothing better than continued learning from others over time as new advancements come out and patterns evolve.

Finally, we encourage you to check back often as we’ll be continuing further blog post series on serverless architectures, with the next series focusing on API design patterns and best practices.

About the author

Justin Pirtle is a Specialist Solutions Architect at Amazon Web Services, focused on the Serverless platform. He’s responsible for helping customers design, deploy, and scale serverless applications using services such as AWS Lambda, Amazon API Gateway, Amazon Cognito, and Amazon DynamoDB. He is a regular speaker at AWS conferences, including re:Invent, as well as other AWS events. Justin holds a bachelor’s degree in Management Information Systems from the University of Texas at Austin and a master’s degree in Software Engineering from Seattle University.

How to Design Your Serverless Apps for Massive Scale

Post Syndicated from George Mao original https://aws.amazon.com/blogs/architecture/how-to-design-your-serverless-apps-for-massive-scale/

Serverless is one of the hottest design patterns in the cloud today, allowing you to focus on building and innovating, rather than worrying about the heavy lifting of server and OS operations. In this series of posts, we’ll discuss topics that you should consider when designing your serverless architectures. First, we’ll look at architectural patterns designed to achieve massive scale with serverless.

Scaling Considerations

In general, developers in a “serverful” world need to worry about how many total requests can be served throughout the day, week, or month, and how quickly their system can scale. As you move into the serverless world, the most important question becomes: “What is the concurrency that your system is designed to handle?”

The AWS Serverless platform allows you to scale very quickly in response to demand. Below is an example of a serverless design that is fully synchronous throughout the application. During periods of extremely high demand, Amazon API Gateway and AWS Lambda will scale in response to your incoming load. This design places extremely high load on your backend relational database because Lambda can easily scale from thousands to tens of thousands of concurrent requests. In most cases, your relational databases are not designed to accept the same number of concurrent connections.

[Figure: A fully synchronous serverless design in which API Gateway invokes Lambda functions that connect directly to a relational database]

This design risks bottlenecks at your relational database and may cause service outages. This design also risks data loss due to throttling or database connection exhaustion.

Cloud Native Design

Instead, you should consider decoupling your architecture and moving to an asynchronous model. In this architecture, you use an intermediary service to buffer incoming requests, such as Amazon Kinesis or Amazon Simple Queue Service (SQS). You can configure Kinesis or SQS as out-of-the-box event sources for Lambda. In the design below, AWS will automatically poll your Kinesis stream or SQS resource for new records and deliver them to your Lambda functions. You can control the batch size per delivery and further place throttles on a per-function basis.
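
A sketch of wiring an SQS queue to a Lambda function with a batch size, plus a per-function concurrency throttle (the names, account number, and values are placeholders):

aws lambda create-event-source-mapping \
    --function-name ProcessOrders \
    --event-source-arn arn:aws:sqs:us-east-1:123456789012:orders \
    --batch-size 10

# Optionally cap how many concurrent executions this function may use
aws lambda put-function-concurrency \
    --function-name ProcessOrders \
    --reserved-concurrent-executions 50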

[Figure: A decoupled serverless design in which requests are buffered in Kinesis or SQS and then processed by Lambda functions]

This design allows you to accept an extremely high volume of requests, store the requests in a durable datastore, and process them at the speed your system can handle.

Conclusion

Serverless computing allows you to scale much more quickly than with server-based applications, but that means application architects should always consider the effects of scaling on downstream services. Always keep in mind cost, speed, and reliability when you’re building your serverless applications.

Our next post in this series will discuss the different ways to invoke your Lambda functions and how to design your applications appropriately.

About the Author

George Mao is a Specialist Solutions Architect at Amazon Web Services, focused on the Serverless platform. George is responsible for helping customers design and operate Serverless applications using services like Lambda, API Gateway, Cognito, and DynamoDB. He is a regular speaker at AWS Summits, re:Invent, and various tech events. George is a software engineer and enjoys contributing to open source projects, delivering technical presentations at technology events, and working with customers to design their applications in the Cloud. George holds a Bachelor of Computer Science and Masters of IT from Virginia Tech.

Access Private applications on AWS Fargate using Amazon API Gateway PrivateLink

Post Syndicated from Ignacio Riesgo original https://aws.amazon.com/blogs/compute/access-private-applications-on-aws-fargate-using-amazon-api-gateway-privatelink/

This post is contributed by Mani Chandrasekaran | Solutions Architect, AWS


Customers would like to run container-based applications in a private subnet inside a virtual private cloud (VPC), where there is no direct connectivity from the outside world to these applications. This is a very secure way of running applications that should not be directly exposed to the internet.

AWS Fargate is a compute engine for Amazon ECS that enables you to run containers without having to manage servers or clusters. With AWS Fargate, you don’t have to provision, configure, or scale clusters of virtual machines to run containers.

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. The API Gateway private integration makes it simple to expose your HTTP and HTTPS resources behind a virtual private cloud (VPC) with Amazon VPC private endpoints. This allows access by clients outside of the VPC without exposing the resources to the internet.

This post shows how API Gateway can be used to expose an application running on Fargate in a private subnet in a VPC using API Gateway private integration through AWS PrivateLink. With the API Gateway private integration, you can enable access to HTTP and HTTPS resources in a VPC without detailed knowledge of private network configurations or technology-specific appliances.


Architecture

You deploy a simple NGINX application running on Fargate within a private subnet as a first step, and then expose this NGINX application to the internet using the API.

As shown in the architecture in the following diagram, you create a VPC with two private subnets and two public subnets. To enable the Fargate tasks to download Docker images from Amazon ECR, you deploy two network address translation (NAT) gateways in the public subnets.

You also deploy a container application, NGINX, as an ECS service with one or more Fargate tasks running inside the private subnets. You provision an internal Network Load Balancer in the VPC private subnets and target the ECS service running as Fargate tasks. This is provisioned using an AWS CloudFormation template (link provided later in this post).

The integration between API Gateway and the Network Load Balancer inside the private subnet uses an API Gateway VpcLink resource. The VpcLink encapsulates connections between the API and targeted VPC resources when the application is hosted on Fargate. You set up an API with the private integration by creating a VpcLink that targets the Network Load Balancer and then uses the VpcLink as an integration endpoint.

Deployment

Here are the steps to deploy this solution:

  1. Deploy an application on Fargate.
  2. Set up an API Gateway private integration.
  3. Deploy and test the API.
  4. Clean up resources to avoid incurring future charges.


Step 1 — Deploy an application on AWS Fargate
I’ve created an AWS CloudFormation template to make it easier for you to get started.

  1. Get the AWS CloudFormation template.
  2. In the AWS Management Console, deploy the CloudFormation template in an AWS Region where Fargate and API Gateway are available.
  3. On the Create stack page, specify the parameters specific to your environment. Or, use the default parameters, which deploy an NGINX Docker image as a Fargate task in an ECS cluster across two Availability Zones.

When the process is finished, the status changes to CREATE_COMPLETE and the details of the Network Load Balancer, VPC, subnets, and ECS cluster name appear on the Outputs tab.


Step 2 — Set up an API Gateway Private Integration
Next, set up an API Gateway API with private integrations using the AWS CLI and specify the AWS Region in all the AWS CLI commands.

1. Create a VPCLink in API Gateway with the ARN of the Network Load Balancer that you provisioned. Make sure that you specify the correct endpoint URL and Region based on the AWS Region that you selected for the CloudFormation template. Run the following command:

aws apigateway create-vpc-link \
--name fargate-nlb-private-link \
--target-arns arn:aws:elasticloadbalancing:ap-south-1:xxx:loadbalancer/net/Farga-Netwo-XX/xx \
--endpoint-url https://apigateway.ap-south-1.amazonaws.com \
--region ap-south-1

The command immediately returns the following response, acknowledges the receipt of the request, and shows the PENDING status for the new VpcLink:

{
    "id": "alnXXYY",
    "name": "fargate-nlb-private-link",
    "targetArns": [
        " arn:aws:elasticloadbalancing:ap-south-1:xxx:loadbalancer/net/Farga-Netwo-XX/xx"
    ],
    "status": "PENDING"
}

It takes 2–4 minutes for API Gateway to create the VpcLink. When the operation finishes successfully, the status changes to AVAILABLE.


2. To verify that the VpcLink was successfully created, run the following command:

aws apigateway get-vpc-link --vpc-link-id alnXXYY --region ap-south-1

When the VpcLink status is AVAILABLE, you can create the API and integrate it with the VPC resource through the VpcLink.


3. To set up an API, run the following command to create an API Gateway RestApi resource:

aws apigateway create-rest-api --name 'API Gateway VPC Link NLB Fargate Test' --region ap-south-1

{
    "id": "qc83xxxx",
    "name": "API Gateway VPC Link NLB Fargate Test",
    "createdDate": 1547703133,
    "apiKeySource": "HEADER",
    "endpointConfiguration": {
        "types": [
            "EDGE"
        ]
    }
}

Find the ID value of the RestApi in the returned result. In this example, it is qc83xxxx. Use this ID to finish the operations on the API, including methods and integrations setup.


4. In this example, you create an API with only a GET method on the root resource (/) and integrate the method with the VpcLink.

Set up the GET / method. First, get the identifier of the root resource (/):

aws apigateway get-resources --rest-api-id qc83xxxx --region ap-south-1

In the output, find the ID value of the / path. In this example, it is mq165xxxx.


5. Set up the method request for the API method of GET /:

aws apigateway put-method \
       --rest-api-id qc83xxxx \
       --resource-id mq165xxxx \
       --http-method GET \
       --authorization-type "NONE" --region ap-south-1

6. Set up the private integration of the HTTP_PROXY type and call the put-integration command:

aws apigateway put-integration \
--rest-api-id qc83xxxx \
--resource-id mq165xxxx \
--uri 'http://myApi.example.com' \
--http-method GET \
--type HTTP_PROXY \
--integration-http-method GET \
--connection-type VPC_LINK \
--connection-id alnXXYY --region ap-south-1

For a private integration, you must set connection-type to VPC_LINK and set connection-id to the VpcLink identifier, alnXXYY in this example. The URI parameter is not used to route requests to your endpoint, but is used to set the host header and for certificate validation.


Step 3 — Deploy and test the API

To test the API, run the following command to deploy the API:

aws apigateway create-deployment \
--rest-api-id qc83xxxx \
--stage-name test \
--variables vpcLinkId=alnXXYY --region ap-south-1

Test the APIs with tools such as Postman or the curl command. To call a deployed API, you must submit requests to the URL for the API Gateway component service for API execution, known as execute-api.

The base URL for REST APIs is in this format:

https://{restapi_id}.execute-api.{region}.amazonaws.com/{stage_name}/

Replace {restapi_id} with the API identifier, {region} with the Region, and {stage_name} with the stage name of the API deployment.

To test the API with curl, run the following command:

curl -X GET https://qc83xxxx.execute-api.ap-south-1.amazonaws.com/test/

The curl response should be the NGINX home page.

To test the API with Postman, place the Invoke URL into Postman and choose GET as the method. Choose Send.

The returned result (the NGINX home page) appears.

For more information, see Use Postman to Call a REST API.


Step 4 — Clean up resources

After you finish your deployment test, make sure to delete the following resources to avoid incurring future charges.

1. Delete the REST API and the VpcLink that you created in API Gateway using the console.
Or, in the AWS CLI, run the following commands:

aws apigateway delete-rest-api --rest-api-id qc83xxxx --region ap-south-1

aws apigateway delete-vpc-link --vpc-link-id alnXXYY --region ap-south-1

2. To delete the Fargate-related resources created in CloudFormation, in the console, choose Delete Stack.


Conclusion

API Gateway private endpoints enable use cases for building private API–based services running on Fargate inside your own VPCs. You can take advantage of advanced features of API Gateway, such as custom authorizers, Amazon Cognito User Pools integration, usage tiers, throttling, deployment canaries, and API keys. At the same time, you can make sure the APIs or applications running in Fargate are not exposed to the internet.

Updates to Serverless Architectural Patterns and Best Practices

Post Syndicated from Drew Dennis original https://aws.amazon.com/blogs/architecture/updates-to-serverless-architectural-patterns-and-best-practices/

As we sail past the halfway point between re:Invent 2018 and re:Invent 2019, I’d like to revisit some of the recent serverless announcements we’ve made. These are all complementary to the patterns discussed in the re:Invent architecture track’s Serverless Architectural Patterns and Best Practices session.

AWS Event Fork Pipelines

AWS Event Fork Pipelines was announced in March 2019. Many customers use asynchronous event-driven processing in their serverless applications to decouple application components and address high concurrency needs. And in doing so, they often find themselves needing to backup, search, analyze, or replay these asynchronous events. That is exactly what AWS Event Fork Pipelines aims to achieve. You can plug them into a new or existing SNS topic used by your application and immediately address retention and compliance needs, gain new business insights, or even improve your application’s disaster recovery abilities.

AWS Event Fork Pipelines is a suite of three applications. The first application addresses event storage and backup needs by writing all events to an S3 bucket where they can be queried with services like Amazon Athena. The second is a search and analytics pipeline that delivers events to a new or existing Amazon ES domain, enabling search and analysis of your events. Finally, the third application is an event replay pipeline that can be used to reprocess messages should a downstream failure occur in your application. AWS Event Fork Pipelines is distributed as AWS Serverless Application Model (SAM) templates and is available in the AWS Serverless Application Repository (SAR). Check out our example e-commerce application on GitHub.

Amazon API Gateway Serverless Developer Portal

If you publish APIs for developers allowing them to build new applications and capabilities with your data, you understand the need for a developer portal. Also, in March 2019, we announced some significant upgrades to the API Gateway Serverless Developer Portal. The portal’s front end is written in React and is designed to be fully customizable.

The API Gateway Serverless Developer Portal is also available in GitHub and the AWS SAR. It is integrated with Amazon Cognito User Pools to allow developers to sign up, receive an API key, and register for one or more of your APIs. You can now also enable administrative scenarios from your developer portal by logging in as a user belonging to the portal’s Admin group, which is created when the portal is initially deployed to your account. For example, you can control which APIs appear in a customer’s developer portal, enable SDK downloads, solicit developer feedback, and even publish updates for APIs that have been recently revised.

AWS Lambda with Amazon Application Load Balancer (ALB)

Serverless microservices have been built by our customers for quite a while with AWS Lambda and Amazon API Gateway. At re:Invent 2018, during Dr. Werner Vogels’ keynote, a new approach to serverless microservices was announced: Lambda functions as ALB targets.

ALB’s support for Lambda targets gives customers the ability to deploy serverless code behind an ALB, alongside servers, containers, and IP addresses. With this feature, ALB path and host-based routing can be used to direct incoming requests to Lambda functions. Also, ALB can now provide an entry point for legacy applications to take on new serverless functionality, and enable migration scenarios from monolithic legacy server or container-based applications.

Use cases for Lambda targets for ALB include adding new functionality to an existing application that already sits behind an ALB. This could be request monitoring by sending HTTP headers to Elasticsearch clusters or implementing controls that manage cookies. Check out our demo of this new feature. For additional details, take a look at the feature’s documentation.

Security Overview of AWS Lambda Whitepaper

Finally, I’d be remiss if I didn’t point out the great work many of my colleagues have done in releasing the Security Overview of AWS Lambda Whitepaper. It is a succinct and enlightening read for anyone wishing to better understand the Lambda runtime environment, function isolation, or data paths taken for payloads sent to the Lambda service during synchronous and asynchronous invocations. It also has some great insight into compliance, auditing, monitoring, and configuration management of your Lambda functions. A must read for anyone wishing to better understand the overall security of AWS serverless applications.

I look forward to seeing everyone at re:Invent 2019 for more exciting serverless announcements!

About the author

Drew Dennis is a Global Solutions Architect with AWS based in Dallas, TX. He enjoys all things Serverless and has delivered the Architecture Track’s Serverless Patterns and Best Practices session at re:Invent the past three years. Today, he helps automotive companies with autonomous driving research on AWS, connected car use cases, and electrification.

Getting started with serverless

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/getting-started-with-serverless/

This post is contributed by Maureen Lonergan, Director, AWS Training and Certification

We consistently hear from customers that they’re interested in building serverless applications to take advantage of the increased agility and decreased total cost of ownership (TCO) that serverless delivers. But we also know that serverless may be intimidating for those who are more accustomed to using instances or containers for compute.

Since we launched AWS Lambda in 2014, our serverless portfolio has expanded beyond event-driven computing. We now have serverless databases, integration, and orchestration tools. This enables you to build end-to-end serverless applications—but it also means that you must learn how to build using a new serverless operational model.

For this reason, AWS Training and Certification is pleased to offer a new course through Coursera entitled AWS Fundamentals: Building Serverless Applications.

This scenario-based course, developed by the experts at AWS, will:

  • Introduce the AWS serverless framework and architecture in the context of a real business problem.
  • Provide the foundational knowledge to become more proficient in choosing and creating serverless solutions using AWS.
  • Provide demonstrations of the AWS services needed for deploying serverless solutions.
  • Help you develop skills in building and deploying serverless solutions using real-world examples of a serverless website and chatbot.

The syllabus allocates more than nine hours of video content and reading material over four weekly lessons. Each lesson has an estimated 2–3 hours per week of study time (though you can set your own pace and deadlines), with suggested exercises in the AWS Management Console. There is an end-of-course assessment that covers all the learning objectives and content.

The course is on-demand and 100% digital; you can even audit it for free. A completion certificate and access to the graded assessments are available for $49.

What can you expect?

In this course you will learn to use the AWS Serverless portfolio to create a chatbot that answers the question, “Can I let my cat outside?” You will build an application using every one of the concepts and services discussed in the class.

At the end of the class, you can audibly interact with the application to ask that essential question, “Can my cat go out in Denver?” (See the conversation in the following screenshot.)

Serverless Coursera training app

Across the four weeks of the course, you learn:

  1. What serverless computing is and how to create a chatbot with Amazon Lex using an S3 bucket to host a web application.
  2. How to build a highly scalable API with API Gateway and use Amazon CloudFront as a content delivery network (CDN) for your site and API.
  3. How to use Lambda to build serverless functions that write data to DynamoDB.
  4. How to apply lessons from the previous weeks to extend and add functionality to the chatbot.

Serverless Coursera training

AWS Fundamentals: Building Serverless Applications is now available. This course complements other standalone digital courses by AWS Training and Certification, including the highly recommended Introduction to Serverless Development.

ICYMI: Serverless Q1 2019

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/icymi-serverless-q1-2019/

Welcome to the fifth edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

If you didn’t see them, check out our previous posts for what happened in 2018.

So, what might you have missed this past quarter? Here’s the recap.

Amazon API Gateway

Amazon API Gateway improved the experience for publishing APIs on the API Gateway Developer Portal. In addition, we added features like search, a feedback mechanism, and SDK generation.

Last year, API Gateway announced support for WebSockets. As of early February 2019, it is now possible to build WebSocket-enabled APIs via AWS CloudFormation and AWS Serverless Application Model (AWS SAM). The following diagram shows an example application.

API Gateway is also now supported in AWS Config. This feature enhancement allows API administrators to track changes to their API configuration automatically. With the power of AWS Config, you can automate alerts—and even remediation—with triggered Lambda functions.

In early January, API Gateway also announced a service level agreement (SLA) of 99.95% availability.

AWS Step Functions

Step Functions Local

AWS Step Functions added the ability to tag Step Function resources and provide access control with tag-based permissions. With this feature, developers can use tags to define access via AWS Identity and Access Management (IAM) policies.

In addition to tag-based permissions, Step Functions was one of 10 additional services to have support from the Resource Group Tagging API, which allows a single central point of administration for tags on resources.

In early February, Step Functions released the ability to develop and test applications locally using a local Docker container. This new feature allows you to innovate faster by iterating locally.

In late January, Step Functions joined the family of services offering SLAs with an SLA of 99.9% availability. They also increased their service footprint to include the AWS China (Ningxia) and AWS China (Beijing) Regions.

AWS SAM Command Line Interface

AWS SAM Command Line Interface (AWS SAM CLI) released the AWS Toolkit for Visual Studio Code and the AWS Toolkit for IntelliJ. These toolkits are open source plugins that make it easier to develop applications on AWS. The toolkits provide an integrated experience for developing serverless applications in Node.js (Visual Studio Code) as well as Java and Python (IntelliJ), with more languages and features to come.

The toolkits help you get started fast with built-in project templates that leverage AWS SAM to define and configure resources. They also include an integrated experience for step-through debugging of serverless applications and make it easy to deploy your applications from the integrated development environment (IDE).

AWS Serverless Application Repository

AWS Serverless Application Repository applications can now be published to the application repository using AWS CodePipeline. This allows you to update applications in the AWS Serverless Application Repository with a continuous integration and continuous delivery (CICD) process. The CICD process is powered by a pre-built application that publishes other applications to the AWS Serverless Application Repository.

AWS Event Fork Pipelines

Event Fork Pipelines

AWS Event Fork Pipelines is now available in AWS Serverless Application Repository. AWS Event Fork Pipelines is a suite of nested open-source applications based on AWS SAM. You can deploy Event Fork Pipelines directly from AWS Serverless Application Repository into your AWS account. These applications help you build event-driven serverless applications by providing pipelines for common event-handling requirements.

AWS Cloud9

Cloud9

AWS Cloud9 announced that, in addition to Amazon Linux, you can now select Ubuntu as the operating system for your AWS Cloud9 environment. Before this announcement, you would have to stand up an Ubuntu server and connect AWS Cloud9 to the instance by using SSH. With native support for Ubuntu, you can take advantage of AWS Cloud9 features, such as instance lifecycle management for cost efficiency and preconfigured tooling environments.

AWS Cloud9 also added support for AWS CloudTrail, which allows you to monitor and react to changes made to your AWS Cloud9 environment.

Amazon Kinesis Data Analytics

Amazon Kinesis Data Analytics now supports CloudTrail logging. CloudTrail captures changes made to Kinesis Data Analytics and delivers the logs to an Amazon S3 bucket. This makes it easy for administrators to understand changes made to the application and who made them.

Amazon DynamoDB

Amazon DynamoDB removed the costs associated with DynamoDB Streams when used to replicate data globally. Because global tables use streams to replicate data between Regions, this translates to cost savings for global tables. However, DynamoDB streaming costs remain the same for your applications reading from a replica table’s stream.

DynamoDB added the ability to switch the encryption keys used to encrypt data. DynamoDB, by default, encrypts all data at rest. You can use either the default encryption, which uses an AWS-owned customer master key (CMK), or the AWS managed CMK to encrypt your data. It is now possible to change between the AWS-owned CMK and the AWS managed CMK without having to modify code or applications.
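The switch is a table-level update. A hedged AWS CLI sketch (the table name Orders is a placeholder) might look like the following:

# Switch the table to the AWS managed CMK (alias/aws/dynamodb):
aws dynamodb update-table --table-name Orders \
    --sse-specification Enabled=true,SSEType=KMS

# Switch the table back to the AWS-owned CMK:
aws dynamodb update-table --table-name Orders \
    --sse-specification Enabled=false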

Amazon DynamoDB Local, a local installable version of DynamoDB, has added support for transactional APIs, on-demand capacity, and as many as 20 global secondary indexes per table.

AWS Amplify

AWS Amplify added support for OAuth 2.0 Authorization Code Grant flows in the native (iOS and Android) and React Native libraries. Previously, you would have to use third-party libraries and handwritten logic to achieve these use cases.

Additionally, Amplify also launched the ability to perform instant cache invalidation and delta deployments on every code commit. To achieve this, Amplify creates unique references to all the build artifacts on each deploy. Amplify has also added the ability to detect and upload only modified artifacts at the time of release to help reduce deployment time.

Amplify also added features for multiple environments, custom resolvers, larger data models, and IAM roles, including multi-factor authentication (MFA).

AWS AppSync

AWS AppSync increased its availability footprint to the EU (London) Region.

Amazon Cognito

Amazon Cognito increased its service footprint to include the Canada (Central) Region. It also published an SLA of 99.9% availability.

Amazon Aurora

Amazon Aurora Serverless increased performance visibility by publishing logs to Amazon CloudWatch.

AWS CodePipeline

AWS CodePipeline announced support for deploying static files to Amazon S3. While this may not usually fall under the serverless blogs and announcements, if you’re a developer who builds single-page applications or hosts static websites, this makes your life easier. Your static site can now be part of your CICD process without custom coding.

Serverless Posts

January:

February:

March:

Tech talks

We hold several AWS Online Tech Talks covering serverless topics throughout the year, so look out for them in the Serverless section of the AWS Online Tech Talks page. Here are the three tech talks that we delivered in Q1:

Whitepapers

Security Overview of AWS Lambda: This whitepaper presents a deep dive into the Lambda service through a security lens. It provides a well-rounded picture of the service, which can be useful for new adopters, as well as deepening understanding of Lambda for current users. Read the full whitepaper.

Twitch

There is always something going on at our Twitch channel! Be sure to follow us so you don’t miss anything! For information about upcoming broadcasts and recent livestreams, keep an eye on AWS on Twitch for more Serverless videos and on the Join us on Twitch AWS page.

In other news

Building Happy Little APIs

In April, we started a 13-week deep dive into building APIs on AWS as part of our Twitch Build On series. The Building Happy Little APIs series covers the common and not-so-common use cases for APIs on AWS and the features available to customers as they look to build secure, scalable, efficient, and flexible APIs.

Twitch series: Build on Serverless: Season 2

Join Heitor Lessa across 14 weeks, nearly every Wednesday from April 24 – August 7 at 8AM PDT/11AM EDT/3PM UTC. Heitor is live-building a full-stack, serverless airline-booking application using a bunch of services: Lambda, Amplify, API Gateway, Amazon Cognito, AWS SAM, CloudWatch, AWS AppSync, and others. See the episode guide and sign up for stream reminders.

2019 AWS Summits

The schedule is in full swing for the 2019 AWS Global Summits held in major cities around the world. These free events bring the cloud computing community together to connect, collaborate, and learn about AWS. They attract technologists from all industries and skill levels who want to discover how AWS can help them innovate quickly and deliver flexible, reliable solutions at scale. Get notified when to register and learn more at the AWS Global Summit Program website.

Still looking for more?

The Serverless landing page has lots of information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials. Check it out!

From Poll to Push: Transform APIs using Amazon API Gateway REST APIs and WebSockets

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/from-poll-to-push-transform-apis-using-amazon-api-gateway-rest-apis-and-websockets/

This post is courtesy of Adam Westrich – AWS Principal Solutions Architect and Ronan Prenty – Cloud Support Engineer

Want to deploy a web application and give a large number of users controlled access to data analytics? Or maybe you have a retail site that is fulfilling purchase orders, or an app that enables users to connect data from potentially long-running third-party data sources. Similar use cases exist across every imaginable industry and entail sending long-running requests to perform a subsequent action. For example, a healthcare company might build a custom web portal for physicians to connect and retrieve key metrics from patient visits.

This post is aimed at optimizing the delivery of information without needing to poll an endpoint. First, we outline the current challenges with the consumer polling pattern and alternative approaches to solve these information delivery challenges. We then show you how to build and deploy a solution in your own AWS environment.

Here is a glimpse of the sample app that you can deploy in your own AWS environment:

What’s the problem with polling?

Many customers need to implement the delivery of long-running activities (such as a query to a data warehouse or data lake, or retail order fulfillment). They may have developed a polling solution that looks similar to the following:

  1. POST sends a request.
  2. GET returns an empty response.
  3. Another… GET returns an empty response.
  4. Yet another… GET returns an empty response.
  5. Finally, GET returns the data for which you were looking.

The challenges of traditional polling methods

  • Unnecessary chattiness and cost due to polling for result sets—Every time your frontend polls an API, it’s adding costs by leveraging infrastructure to compute the result. Empty polls are just wasteful!
  • Hampered mobile battery life—One of the top contributors of apps that eat away at battery life is excessive polling. Make sure that your app isn’t on your users’ Top App Battery Usage list-of-shame that could result in deletion.
  • Delayed data arrival due to polling schedule—Some approaches to polling include an incremental backoff to limit the number of empty polls. This sometimes results in a delay between data readiness and data arrival, as the sketch after this list shows.
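To make that tradeoff concrete, here is a minimal Python sketch of a polling client with incremental backoff (the endpoint URL and timing values are hypothetical). The same backoff that reduces empty polls also widens the window between data readiness and data arrival:

import time
import urllib.request

def poll_with_backoff(url, max_wait=60):
    # Poll url until the response has a body, backing off between empty polls.
    delay = 1
    waited = 0
    while waited < max_wait:
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
        if body:                        # data is finally ready
            return body
        time.sleep(delay)               # empty poll: wait before trying again
        waited += delay
        delay = min(delay * 2, 16)      # incremental backoff caps the poll rate
    raise TimeoutError("result not ready within max_wait")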

But what about long polling?

  • User request deadlocks can hamper application performance—Long synchronous user responses can lead to unexpected user wait times or UI deadlocks, which can affect mobile devices especially.
  • Memory leaks and consumption could bring your app down—Keeping long-running queries open may overburden your backend and create failure scenarios, which may bring down your app.
  • HTTP default timeouts across browsers may result in inconsistent client experience—These timeouts vary across browsers, and can lead to an inconsistent experience across your end users. Depending on the size and complexity of the requests, processing can last longer than many of these timeouts and take minutes to return results.

Instead, create an event-driven architecture and move your APIs from poll to push.

Asynchronous push model

To create an optimal UX, frontend developers often strive to create progressive and reactive user experiences. Users can interact with frontend objects (for example, push buttons) with little lag when sending requests and receiving data. But frontend developers also want users to receive timely data, without sacrificing additional user actions or performing unnecessary processing.

The birth of microservices and the cloud over the past several years has enabled frontend developers and backend service designers to think about these problems in an asynchronous manner. This enables the virtually unlimited resources of the cloud to choreograph data processing. It also enables clients to benefit from progressive and reactive user experiences.

This is a fresh alternative to the synchronous design pattern, which often relies on client consumers to act as the conductor for all user requests. The following diagram compares the flow of communication between patterns.

Asynchronous orchestration can be easily choreographed using the workflow definition, monitoring, and tracking console with AWS Step Functions state machines. Break up services into functions with AWS Lambda and track executions within a state diagram, like the following:

With this approach, consumers send a request, which initiates downstream distributed processing. The subsequent steps are invoked according to the state machine workflow and each execution can be monitored.

Okay, but how does the consumer get the result of what they’re looking for?

Multiple approaches to this problem

There are different approaches for consumers to retrieve their resulting data. In order to create the optimal solution, there are several questions that a service owner may need to ask.

Is there known trust with the client (such as another microservice)?

If the answer is Yes, one approach is to use Amazon SNS. That way, consumers can subscribe to topics and have data delivered using email, SMS, or even HTTP/HTTPS as an event subscriber (that is, webhooks).

With webhooks, the consumer creates an endpoint where the service provider can issue a callback using a small amount of consumer-side resources. The consumer is waiting for an incoming request to facilitate downstream processing.

  1. Send a request.
  2. Subscribe the consumer to the topic.
  3. Open the endpoint.
  4. SNS sends the POST request to the endpoint.
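Here is a minimal boto3 sketch of that subscription step; the topic ARN and endpoint URL are placeholders:

import boto3

sns = boto3.client("sns")

# Subscribe a consumer-owned webhook endpoint to the results topic.
response = sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:order-results",
    Protocol="https",
    Endpoint="https://consumer.example.com/callback",
)

# SNS first sends a confirmation request to the endpoint; once confirmed,
# every message published to the topic is POSTed to the webhook.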

If the trust answer is No, then clients may not be able to open a lightweight HTTP webhook after sending the initial request. In that case, consider an alternative framework.

Are you restricted to only using a REST protocol?

Some applications can only run HTTP requests. These scenarios can stem from the age of the end user’s technology or browser, or from security requirements that block other protocols.

In these scenarios, customers may use a GET method as a best practice, but we still advise avoiding polling. Some design questions then become: Does data readiness happen at a predefined time or duration interval? Or can the user experience tolerate time between data readiness and data arrival?

If the answer to both of these is Yes, then consider trying to send GET calls to your RESTful API one time. For example, if a job averages 10 minutes, make your GET call at 10 minutes after the submission. Sounds simple, right? Much simpler than polling.

Should I use GraphQL, a WebSocket API, or another framework?

Each framework has tradeoffs.

If you want a more flexible query schema, you may gravitate to GraphQL, which follows a “data-driven UI” approach. If data drives the UI, then GraphQL may be the best solution for your use case.

AWS AppSync is a serverless GraphQL engine that supports the heavy lifting of these constructs. It offers functionality such as AWS service integration, offline data synchronization, and conflict resolution. With GraphQL, there’s a construct called Subscriptions where clients can subscribe to event-based subscriptions upon a data change.

Amazon API Gateway makes it easy for developers to deploy secure APIs at scale. With the recent introduction of API Gateway WebSocket APIs, web and mobile clients and backend services can communicate over an established WebSocket connection. This also allows clients to be more reactive to data updates and only do work one time after an update has been received over the WebSocket connection.

The typical frontend design approach is to create a UI component that is updated when the results of the given procedure are complete. This avoids a complete webpage refresh and improves the customer’s user experience.

Because many companies have elected to use the REST framework for creating API-driven tightly bound service contracts, a RESTful interface can be used to validate and receive the request. It can provide a status endpoint used to deliver status. Also, it provides additional flexibility in delivering the result to the variety of clients, along with the WebSocket API.

Poll-to-push solution with API Gateway

Imagine a scenario where you want to be updated as soon as the data is created. Instead of the traditional polling methods described earlier, use an API Gateway WebSocket API. That pushes new data to the client as it’s created, so that it can be rendered on the client UI.

Alternatively, a WebSocket server can be deployed on Amazon EC2. With this approach, your server is always running to accept and maintain new connections. In addition, you manage the scaling of the instance manually at times of high demand.

By using an API Gateway WebSocket API in front of Lambda, you don’t need a machine that stays always on, eating away at your project budget. API Gateway handles connections and invokes Lambda whenever there’s a new event. Scaling is handled on the service side. To update our connected clients from the backend, we can use the API Gateway callback URL. The AWS SDKs make communication from the backend easy. As an example, see the Boto3 sample of post_to_connection:

import boto3
# Use a layer or deployment package to
# include the latest boto3 version.
...
# {{api_region}} and {{api_url}} are placeholders for your API's Region and
# callback endpoint: https://{api-id}.execute-api.{region}.amazonaws.com/{stage}
apiManagement = boto3.client('apigatewaymanagementapi',
                             region_name={{api_region}},
                             endpoint_url={{api_url}})
...
# Push {{message}} to the client identified by {{connectionId}}.
response = apiManagement.post_to_connection(Data={{message}},
                                            ConnectionId={{connectionId}})
...

Solution example

To create this more optimized solution, create a simple web application that enables a user to make a request against a large dataset. It returns the results in a flat file over WebSocket. In this case, we’re querying, via Amazon Athena, a data lake on S3 populated with AWS Twitter sentiment (Twitter: @awscloud or #awsreinvent). However, this solution could apply to any data store, data mart, or data warehouse environment, or the long-running return of data for a response.

For the frontend architecture, create this web application using a JavaScript framework (such as jQuery). Use API Gateway to accept a REST API for the data and then open a WebSocket connection on the client to facilitate the return of results:

  1. The client sends a REST request to API Gateway. This invokes a Lambda function that starts the Step Functions state machine execution. The function also returns a task token for the open connection activity worker. The function then returns the execution ARN and task token to the client.
  2. Using the data returned by the REST request, the client connects to the WebSocket API and sends the task token to the WebSocket connection.
  3. The WebSocket API notifies the Step Functions state machine that the client is connected. The Lambda function completes the OpenConnection task by validating the task token and sending a success message.
  4. After RunAthenaQuery and OpenConnection are successful, the state machine updates the connected client over the WebSocket API that their long-running job is complete. It uses the REST API call post_to_connection.
  5. The client receives the update over their WebSocket connection using the IssueCallback Lambda function, with the callback URL from the API Gateway WebSocket API.

In this example application, the data response is an S3 presigned URL composed of results from Athena. You can then configure your frontend to use that S3 link to download to the client.
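Generating a presigned URL like that takes only a few lines of boto3; the bucket and key below are placeholders for wherever Athena writes its query results:

import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "athena-results-bucket",
            "Key": "query-results/abc123.csv"},
    ExpiresIn=300,  # seconds the download link stays valid
)

# Send `url` to the client over the WebSocket connection so the browser
# can download the result file directly from S3.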

Why not just open the WebSocket API for the request?

While this approach can work, we advise against it for this use case. Break the required interfaces for this use case into three processes:

  1. Submit a request.
  2. Get the status of the request (to detect failure).
  3. Deliver the results.

For the submit request interface, RESTful APIs have strong controls to ensure that user requests for data are validated and given guardrails for the intended query purpose. This helps prevent rogue requests and unforeseen impacts to the analytics environment, especially when exposed to a large number of users.

With this example solution, you’re restricting the data requests to specific countries in the frontend JavaScript. Using the RESTful POST method for API requests enables you to validate data as query string parameters, such as the following:

https://<apidomain>.amazonaws.com/Demo/CreateStateMachineAndToken?Country=France

API Gateway models and mapping templates can also be used to validate or transform request payloads at the API layer before they hit the backend. Use models to ensure that the structure or contents of the client payload are the same as you expect. Use mapping templates to transform the payload sent by clients to another format, to be processed by the backend.
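As an illustration, a request model along these lines (the model name and country list are hypothetical; API Gateway models use JSON Schema draft-04) would reject any payload whose Country value is missing or unexpected:

{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "CountryRequest",
  "type": "object",
  "properties": {
    "Country": {
      "type": "string",
      "enum": ["France", "Germany", "Spain"]
    }
  },
  "required": ["Country"]
}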

This REST validation framework can also be used to detect header information about WebSocket browser compatibility. While using WebSocket has many advantages, not all browsers support it, especially older browsers. Therefore, a REST API request layer can pass this browser metadata and determine whether a WebSocket API can be opened.

Because a REST interface is already created for submitting the request, you can easily add another GET method if the client must query the status of the Step Functions state machine. That might be the case if a health check in the request is taking longer than expected. You can also add another GET method as an alternative access method for REST-only compatible clients.

If low-latency request and retrieval are an important characteristic of your API and there aren’t any browser-compatibility risks, use a WebSocket API with JSON model selection expressions to protect your backend with a schema.

In the spirit of picking the best tool for the job, use a REST API for the request layer and a WebSocket API to listen for the result.

This solution, although secure, is an example for how a non-polling solution can work on AWS. At scale, it may require refactoring due to the cross-talk at high concurrency that may result in client resubmissions.

To discover, deploy, and extend this solution into your own AWS environment, follow the PollToPush instructions in the AWS Serverless Application Repository.

Conclusion

When application consumers poll for long-running tasks, it can be a wasteful, detrimental, and costly use of resources. This post outlined multiple ways to refactor the polling method. Use API Gateway to host a RESTful interface, Step Functions to orchestrate your workflow, Lambda to perform backend processing, and an API Gateway WebSocket API to push results to your clients.

This Is My Architecture: Mobile Cryptocurrency Mining

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/this-is-my-architecture-mobile-cryptocurrency-mining/

In North America, approximately 95% of adults over the age of 25 have a bank account. In the developing world, that number is only about 52%. Cryptocurrencies can provide a platform for millions of unbanked people in the world to achieve financial freedom on a more level financial playing field.

Electroneum, a cryptocurrency company located in England, built its cryptocurrency mobile back end on AWS and is using the power of blockchain to unlock the global digital economy for millions of people in the developing world.

Electroneum’s cryptocurrency mobile app allows Electroneum customers in developing countries to transfer ETN (Electroneum’s cryptocurrency) and pay for goods using their smartphones. Listen in to the discussion between AWS Solutions Architect Toby Knight and Electroneum CTO Barry Last as they explain how the company built its solution. Electroneum’s app is a web application that uses a feedback loop between its web servers and AWS WAF (a web application firewall) to automatically block malicious actors. The system then uses Athena, with a gamified approach, to provide an additional layer of blocking to prevent DDoS attacks. Finally, Electroneum built a serverless, instant payments system using AWS API Gateway, AWS Lambda, and Amazon DynamoDB to help its customers avoid the usual delays in confirming cryptocurrency transactions.

 

Deploying a personalized API Gateway serverless developer portal

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/deploying-a-personalized-api-gateway-serverless-developer-portal/

This post is courtesy of Drew Dresser, Application Architect – AWS Professional Services

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. Customers of these APIs often want a website to learn and discover APIs that are available to them. These customers might include front-end developers, third-party customers, or internal system engineers. To produce such a website, we have created the API Gateway serverless developer portal.

The API Gateway serverless developer portal (developer portal or portal, for short) is an application that you use to make your API Gateway APIs available to your customers by enabling self-service discovery of those APIs. Your customers can use the developer portal to browse API documentation, register for and immediately receive their own API key that they can use to build applications, test published APIs, and monitor their own API usage.

Over the past few months, the team has been hard at work contributing to the open source project, available on GitHub. The developer portal was relaunched on October 29, 2018, and the team will continue to push features and take customer feedback from the open source community. Today, we’re happy to highlight some key functionality of the new portal.

Benefits for API publishers

API publishers use the developer portal to expose the APIs that they manage. As an API publisher, you need to set up, maintain, and enable the developer portal. The new portal has the following benefits:

Benefits for API consumers

API consumers use the developer portal as traditional application users. An API consumer needs to understand the APIs being published. API consumers might be front-end developers, distributed system engineers, or third-party customers. The new developer portal comes with the following benefits for API consumers:

  • Explore – API consumers can quickly page through lists of APIs. When they find one they’re interested in, they can immediately see documentation on that API.
  • Learn – API consumers might need to drill down deeper into an API to learn its details. They want to learn how to form requests and what they can expect as a response.
  • Test – Through the developer portal, API consumers can get an API key and invoke the APIs directly. This enables developers to develop faster and with more confidence.

Architecture

The developer portal is a completely serverless application. It leverages Amazon API Gateway, Amazon Cognito User Pools, AWS Lambda, Amazon DynamoDB, and Amazon S3. Serverless architectures enable you to build and run applications without needing to provision, scale, and manage any servers. The developer portal is broken down into multiple microservices, each with a distinct responsibility, as shown in the following image.

Identity management for the developer portal is performed by Amazon Cognito and a Lambda function in the Login & Registration microservice. An Amazon Cognito User Pool is configured out of the box to enable users to register and login. Additionally, you can deploy the developer portal to use a UI hosted by Amazon Cognito, which you can customize to match your style and branding.

Requests are routed to static content served from Amazon S3 and built using React. The React app communicates with the Lambda backend via API Gateway. The Lambda function is built using the aws-serverless-express library and contains the business logic behind the APIs. The business logic of the web application queries and adds data to the API Key Creation and Catalog Update microservices.

To maintain the API catalog, the Catalog Update microservice uses an S3 bucket and a Lambda function. When an API’s Swagger file is added or removed from the bucket, the Lambda function triggers and maintains the API catalog by updating the catalog.json file in the root of the S3 bucket.
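To sketch the idea in Python (the bucket name and file layout are assumptions; the open-source implementation has more detail), the function can rebuild catalog.json from whatever Swagger files currently sit under the catalog/ prefix:

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "apigw-dev-portal-artifacts"  # hypothetical artifacts bucket

def handler(event, context):
    # Rebuild catalog.json whenever a Swagger file under catalog/ changes.
    catalog = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix="catalog/"):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
            catalog.append(json.loads(body))
    s3.put_object(Bucket=BUCKET, Key="catalog.json",
                  Body=json.dumps(catalog).encode("utf-8"))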

To manage the mapping between API keys and customers, the application uses the API Key Creation microservice. The service updates API Gateway with API key creations or deletions and then stores the results in a DynamoDB table that maps customers to API keys.

Deploying the developer portal

You can deploy the developer portal using AWS SAM, the AWS SAM CLI, or the AWS Serverless Application Repository. To deploy with AWS SAM, you can simply clone the repository and then deploy the application using two commands from your CLI. For detailed instructions for getting started with the portal, see Use the Serverless Developer Portal to Catalog Your API Gateway APIs in the Amazon API Gateway Developer Guide.

Alternatively, you can deploy using the AWS Serverless Application Repository as follows:

  1. Navigate to the api-gateway-dev-portal application and choose Deploy in the top right.
  2. On the Review page, for ArtifactsS3BucketName and DevPortalSiteS3BucketName, enter globally unique names. Both buckets are created for you.
  3. To deploy the application with these settings, choose Deploy.
  4. After the stack is complete, get the developer portal URL by choosing View CloudFormation Stack. Under Outputs, choose the URL in Value.

The URL opens in your browser.

You now have your own serverless developer portal application that is deployed and ready to use.

Publishing a new API

With the developer portal application deployed, you can publish your own API to the portal.

To get started:

  1. Create the PetStore API, which is available as a sample API in Amazon API Gateway. The API must be created and deployed and include a stage.
  2. Create a Usage Plan, which is required so that API consumers can create API keys in the developer portal. The API key is used to test actual API calls.
  3. On the API Gateway console, navigate to the Stages section of your API.
  4. Choose Export.
  5. For Export as Swagger + API Gateway Extensions, choose JSON. Save the file with the following format: apiId_stageName.json.
  6. Upload the file to the S3 bucket dedicated for artifacts in the catalog path. In this post, the bucket is named apigw-dev-portal-artifacts. To perform the upload, run the following command.
    aws s3 cp apiId_stageName.json s3://yourBucketName/catalog/apiId_stageName.json

Uploading the file to the artifacts bucket with a catalog/ key prefix automatically makes it appear in the developer portal.

This might be familiar. It’s your PetStore API documentation displayed in the OpenAPI format.

With an API deployed, you’re ready to customize the portal’s look and feel.

Customizing the developer portal

Adding a customer’s own look and feel to the developer portal is easy, and it creates a great user experience. You can customize the domain name, text, logo, and styling. For a more thorough walkthrough of customizable components, see Customization in the GitHub project.

Let’s walk through a few customizations to make your developer portal more familiar to your API consumers.

Customizing the logo and images

To customize logos, images, or content, you need to modify the contents of the your-prefix-portal-static-assets S3 bucket. You can edit files using the CLI or the AWS Management Console.

Start customizing the portal by using the console to upload a new logo in the navigation bar.

  1. Upload the new logo to your bucket with a key named custom-content/nav-logo.png.
    aws s3 cp {myLogo}.png s3://yourPrefix-portal-static-assets/custom-content/nav-logo.png
  2. Modify object permissions so that the file is readable by everyone because it’s a publicly available image. The new navigation bar looks something like this:

Another neat customization that you can make is to a particular API and stage image. Maybe you want your PetStore API to have a dog picture to represent the friendliness of the API. To add an image:

  1. Use the command line to copy the image directly to the S3 bucket location.
    aws s3 cp my-image.png s3://yourPrefix-portal-static-assets/custom-content/api-logos/apiId-stageName.png
  2. Modify object permissions so that the file is readable by everyone.

Customizing the text

Next, make sure that the text of the developer portal welcomes your pet-friendly customer base. The YAML files in the static assets bucket under /custom-content/content-fragments/ determine the portal’s text content.

To edit the text:

  1. On the AWS Management Console, navigate to the website content S3 bucket and then navigate to /custom-content/content-fragments/.
  2. Home.md is the content displayed on the home page, APIs.md controls the tab text on the navigation bar, and GettingStarted.md contains the content of the Getting Started tab. All three files are written in markdown. Download one of them to your local machine so that you can edit the contents. The following image shows Home.md edited to contain custom text:
  3. After editing and saving the file, upload it back to S3, which results in a customized home page. The following image reflects the configuration changes in Home.md from the previous step:

Customizing the domain name

Finally, many customers want to give the portal a domain name that they own and control.

To customize the domain name:

  1. Use AWS Certificate Manager to request and verify a managed certificate for your custom domain name. For more information, see Request a Public Certificate in the AWS Certificate Manager User Guide.
  2. Copy the Amazon Resource Name (ARN) so that you can pass it to the developer portal deployment process. That process now includes the certificate ARN and a property named UseRoute53Nameservers. If the property is set to true, the template creates a hosted zone and record set in Amazon Route 53 for you. If the property is set to false, the template expects you to use your own name server hosting.
  3. If you deployed using the AWS Serverless Application Repository, navigate to the Application page and deploy the application along with the certificate ARN.

After the developer portal is deployed and your CNAME record has been added, the website is accessible from the custom domain name as well as the new Amazon CloudFront URL.

Customizing the logo, text content, and domain name are great tools to make the developer portal feel like an internally developed application. In this walkthrough, you completely changed the portal’s appearance to enable developers and API consumers to discover and browse APIs.

Conclusion

The developer portal is available to use right away. Support and feature enhancements are tracked in the public GitHub repository. You can contribute to the project by following the Code of Conduct and Contributing guides. The project is open-sourced under the Amazon Open Source Code of Conduct. We plan to continue to add functionality and listen to customer feedback. We can’t wait to see what customers build with API Gateway and the API Gateway serverless developer portal.

ICYMI: Serverless Q4 2018

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/icymi-serverless-q4-2018/

This post is courtesy of Eric Johnson, Senior Developer Advocate – AWS Serverless

Welcome to the fourth edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

This edition of ICYMI includes all announcements from AWS re:Invent 2018!

If you didn’t see them, check our Q1 ICYMI, Q2 ICYMI, and Q3 ICYMI posts for what happened then.

So, what might you have missed this past quarter? Here’s the recap.

New features

AWS Lambda introduced the Lambda runtime API and Lambda layers, which enable developers to bring their own runtime and share common code across Lambda functions. With the release of the runtime API, we can now support runtimes from AWS partners such as the PHP runtime from Stackery and the Erlang and Elixir runtimes from Alert Logic. Using layers, partners such as Datadog and Twistlock have also simplified the process of using their Lambda code libraries.

To meet the demand of larger Lambda payloads, Lambda doubled the payload size of asynchronous calls to 256 KB.

In early October, Lambda also raised the maximum execution duration, enabling Lambda functions that can run for up to 15 minutes.

Lambda also rolled out native support for Ruby 2.5 and Python 3.7.

You can now process Amazon Kinesis streams up to 68% faster with AWS Lambda support for Kinesis Data Streams enhanced fan-out and HTTP/2 for faster streaming.

Lambda also released a new Application view in the console. It’s a high-level view of all of the resources in your application. It also gives you a quick view of deployment status with the ability to view service metrics and custom dashboards.

Application Load Balancers added support for targeting Lambda functions. ALBs can provide a simple HTTP/S front end to Lambda functions. ALB features such as host- and path-based routing are supported to allow flexibility in triggering Lambda functions.

Amazon API Gateway added support for AWS WAF. You can use AWS WAF for your Amazon API Gateway APIs to protect from attacks such as SQL injection and cross-site scripting (XSS).

API Gateway also improved parameter support by adding multi-value parameters. You can now pass multiple values for the same key in the header and query string when calling the API. Returning multiple headers with the same name in the API response is also supported. For example, you can send multiple Set-Cookie headers.
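For example, a Lambda proxy integration response shaped roughly like the following (the cookie values are illustrative) returns two Set-Cookie headers to the caller:

{
  "statusCode": 200,
  "multiValueHeaders": {
    "Set-Cookie": ["session=abc123", "theme=dark"]
  },
  "body": "ok"
}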

In October, API Gateway relaunched the Serverless Developer Portal. It provides a catalog of published APIs and associated documentation that enable self-service discovery and onboarding. You can customize it for branding through either custom domain names or logo/styling updates. In November, we made it easier to launch the developer portal from the Serverless Application Repository.

In the continuous effort to decrease customer costs, API Gateway introduced tiered pricing. The tiered pricing model reduces the cost of API Gateway at scale, with an API Requests price as low as $1.51 per million requests at the highest tier.

Last but definitely not least, API Gateway released support for WebSocket APIs in mid-December as a final holiday gift. With this new feature, developers can build bidirectional communication applications without having to provision and manage any servers. This has been a long-awaited and highly anticipated announcement for the serverless community.

AWS Step Functions added eight new service integrations. With this release, the steps of your workflow can exist on Amazon ECS, AWS Fargate, Amazon DynamoDB, Amazon SNS, Amazon SQS, AWS Batch, AWS Glue, and Amazon SageMaker. This is in addition to the services that Step Functions already supports: AWS Lambda and Amazon EC2.

Step Functions expanded by announcing availability in the EU (Paris) and South America (São Paulo) Regions.

The AWS Serverless Application Repository increased its functionality by supporting more resources in the repository. The Serverless Application Repository now supports Application Auto Scaling, Amazon Athena, AWS AppSync, AWS Certificate Manager, Amazon CloudFront, AWS CodeBuild, AWS CodePipeline, AWS Glue, AWS Identity and Access Management, Amazon SNS, Amazon SQS, AWS Systems Manager, and AWS Step Functions.

The Serverless Application Repository also released support for nested applications. Nested applications enable you to build highly sophisticated serverless architectures by reusing services that are independently authored and maintained but easily composed using AWS SAM and the Serverless Application Repository.

AWS SAM made authorization simpler by introducing SAM support for authorizers. Enabling authorization for your APIs is as simple as defining an Amazon Cognito user pool or an API Gateway Lambda authorizer as a property of your API in your SAM template.

AWS SAM CLI introduced two new commands. First, you can now build locally with the sam build command. This functionality allows you to compile deployment artifacts for Lambda functions written in Python. Second, the sam publish command allows you to publish your SAM application to the Serverless Application Repository.

Our SAM tooling team also released the AWS Toolkit for PyCharm, which provides an integrated experience for developing serverless applications in Python.

The AWS Toolkits for Visual Studio Code (Developer Preview) and IntelliJ (Developer Preview) are still in active development and will include similar features when they become generally available.

AWS SAM and the AWS SAM CLI implemented support for Lambda layers. Using a SAM template, you can manage your layers, and using the AWS SAM CLI, you can develop and debug Lambda functions that are dependent on layers.
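A minimal AWS SAM sketch of that relationship (resource names and paths are hypothetical) looks like the following:

Resources:
  SharedLibsLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      ContentUri: layers/shared/
      CompatibleRuntimes:
        - python3.7

  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.7
      CodeUri: src/
      Layers:
        - !Ref SharedLibsLayer   # the function can now use code from the layer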

Amazon DynamoDB added support for transactions, allowing developers to enforce all-or-nothing operations. In addition to transaction support, Amazon DynamoDB Accelerator also added support for DynamoDB transactions.

Amazon DynamoDB also announced Amazon DynamoDB on-demand, a flexible new billing option for DynamoDB capable of serving thousands of requests per second without capacity planning. DynamoDB on-demand offers simple pay-per-request pricing for read and write requests so that you only pay for what you use, making it easy to balance costs and performance.

AWS Amplify released the Amplify Console, which is a continuous deployment and hosting service for modern web applications with serverless backends. Modern web applications include single-page app frameworks such as React, Angular, and Vue and static-site generators such as Jekyll, Hugo, and Gatsby.

Amazon SQS announced support for Amazon VPC Endpoints using PrivateLink.

Serverless blogs

October

November

December

Tech talks

We hold several Serverless tech talks throughout the year, so look out for them in the Serverless section of the AWS Online Tech Talks page. Here are the three tech talks that we delivered in Q4:

Twitch

We’ve been so busy livestreaming on Twitch that you’re most certainly missing out if you aren’t following along!

For information about upcoming broadcasts and recent livestreams, keep an eye on AWS on Twitch for more Serverless videos and on the Join us on Twitch AWS page.

New Home for SAM Docs

This quarter, we moved all SAM docs to https://docs.aws.amazon.com/serverless-application-model. Everything you need to know about SAM is there. If you don’t find what you’re looking for, let us know!

In other news

The schedule is out for the 2019 AWS Global Summits in cities around the world. AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. They attract technologists from all industries and skill levels who want to discover how AWS can help them innovate quickly and deliver flexible, reliable solutions at scale. Get notified when to register and learn more at the AWS Global Summits website.

Still looking for more?

The Serverless landing page has lots of information. The resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials. Check it out!

Announcing WebSocket APIs in Amazon API Gateway

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/announcing-websocket-apis-in-amazon-api-gateway/

This post is courtesy of Diego Magalhaes, AWS Senior Solutions Architect – World Wide Public Sector-Canada & JT Thompson, AWS Principal Software Development Engineer – Amazon API Gateway

Starting today, you can build bidirectional communication applications using WebSocket APIs in Amazon API Gateway without having to provision and manage any servers.

HTTP-based APIs use a request/response model with a client sending a request to a service and the service responding synchronously back to the client. WebSocket-based APIs are bidirectional in nature. This means that a client can send messages to a service and services can independently send messages to their clients.

This bidirectional behavior allows for richer types of client/service interactions because services can push data to clients without a client needing to make an explicit request. WebSocket APIs are often used in real-time applications such as chat applications, collaboration platforms, multiplayer games, and financial trading platforms.

This post explains how to build a serverless, real-time chat application using WebSocket API and API Gateway.

Overview

Historically, building WebSocket APIs required setting up fleets of hosts that were responsible for managing the persistent connections that underlie the WebSocket protocol. Now, with API Gateway, this is no longer necessary. API Gateway handles the connections between the client and service. It lets you build your business logic using HTTP-based backends such as AWS Lambda, Amazon Kinesis, or any other HTTP endpoint.

Before you begin, here are a couple of the concepts of a WebSocket API in API Gateway. The first is a new resource type called a route. A route describes how API Gateway should handle a particular type of client request, and includes a routeKey parameter, a value that you provide to identify the route.

A WebSocket API is composed of one or more routes. To determine which route a particular inbound request should use, you provide a route selection expression. The expression is evaluated against an inbound request to produce a value that corresponds to one of your route’s routeKey values.

There are three special routeKey values that API Gateway allows you to use for a route:

  • $default—Used when the route selection expression produces a value that does not match any of the other route keys in your API routes. This can be used, for example, to implement a generic error handling mechanism.
  • $connect—The associated route is used when a client first connects to your WebSocket API.
  • $disconnect—The associated route is used when a client disconnects from your API. This call is made on a best-effort basis.

You are not required to provide routes for any of these special routeKey values.

Build a serverless, real-time chat application

To understand how you can use the new WebSocket API feature of API Gateway, I show you how to build a real-time chat app. To simplify things, assume that there is only a single chat room in this app. Here are the capabilities that the app includes:

  • Clients join the chat room as they connect to the WebSocket API.
  • The backend can send messages to specific users via a callback URL that is provided after the user is connected to the WebSocket API.
  • Users can send messages to the room.
  • Disconnected clients are removed from the chat room.

Here’s an overview of the real-time chat application:

A serverless real-time chat application using WebSocket API on Amazon API Gateway

The application is composed of the WebSocket API in API Gateway that handles the connectivity between the client and servers (1). Two AWS Lambda functions react when clients connect (2) or disconnect (5) from the API. The sendMessage function (3) is invoked when the clients send messages to the server. The server sends the message to all connected clients (4) using the new API Gateway Management API. To track each of the connected clients, use a DynamoDB table to persist the connection identifiers.

To help you see this in action more quickly, I’ve provided an AWS SAM application that includes the Lambda functions, DynamoDB table definition, and IAM roles. I placed it into the AWS Serverless Application Repository. For more information, see the simple-websockets-chat-app repo on GitHub.

Create a new WebSocket API

  1. In the API Gateway console, choose Create API, New API.
  2. Under Choose the protocol, choose WebSocket.
  3. For API name, enter My Chat API.
  4. For Route Selection Expression, enter $request.body.action.
  5. Enter a description if you’d like and choose Create API.

The attribute described by the Route Selection Expression value should be present in every message that a client sends to the API. Here’s one example of a request message:

{
    "action": "sendmessage",
    "data": "Hello, I am using WebSocket APIs in API Gateway."
}

To route messages based on the message content, the WebSocket API expects JSON-formatted messages with a property serving as a routing key.

Manage routes

Now, configure your new API to respond to incoming requests. For this app, create three routes as described earlier. First, configure the sendmessage route with a Lambda integration.

  1. In the API Gateway console, under My Chat API, choose Routes.
  2. For New Route Key, enter sendmessage and confirm it.

Each route includes route-specific information such as its model schemas, as well as a reference to its target integration. With the sendmessage route created, use a Lambda function as an integration that is responsible for sending messages.

At this point, you’ve either deployed the AWS Serverless Application Repository backend or deployed it yourself using AWS SAM. In the Lambda console, look at the sendMessage function. This function receives the data sent by one of the clients, looks up all the currently connected clients, and sends the provided data to each of them. Here’s a snippet from the Lambda function:

DDB.scan(scanParams, function (err, data) {
  // some code omitted for brevity

  var apigwManagementApi = new AWS.ApiGatewayManagementApi({
    apiVersion: "2018-11-29",
    // The callback endpoint is the API's domain name plus the stage.
    endpoint: event.requestContext.domainName + "/" + event.requestContext.stage
  });
  var postParams = {
    Data: JSON.parse(event.body).data
  };

  data.Items.forEach(function (element) {
    postParams.ConnectionId = element.connectionId.S;
    apigwManagementApi.postToConnection(postParams, function (err, data) {
      // some code omitted for brevity
    });
  });
});

When one of the clients sends a message with an action of “sendmessage,” this function scans the DynamoDB table to find all of the currently connected clients. For each of them, it makes a postToConnection call with the provided data.

To send messages properly, you must know the API to which your clients are connected. This means explicitly setting the SDK endpoint to point to your API.

What’s left is to properly track the clients that are connected to the chat room. For this, take advantage of two of the special routeKey values and implement routes for $connect and $disconnect.

The onConnect function inserts the connectionId value from requestContext into the DynamoDB table.

exports.handler = function(event, context, callback) {
  var putParams = {
    TableName: process.env.TABLE_NAME,
    Item: {
      connectionId: { S: event.requestContext.connectionId }
    }
  };

  DDB.putItem(putParams, function(err, data) {
    callback(null, {
      statusCode: err ? 500 : 200,
      body: err ? "Failed to connect: " + JSON.stringify(err) : "Connected"
    });
  });
};

The onDisconnect function removes the record corresponding with the specified connectionId value.

exports.handler = function(event, context, callback) {
  var deleteParams = {
    TableName: process.env.TABLE_NAME,
    Key: {
      connectionId: { S: event.requestContext.connectionId }
    }
  };

  DDB.deleteItem(deleteParams, function(err) {
    callback(null, {
      statusCode: err ? 500 : 200,
      body: err ? "Failed to disconnect: " + JSON.stringify(err) : "Disconnected."
    });
  });
};

As I mentioned earlier, the onDisconnect function may not be called. To keep from retrying connections that are never going to be available again, add some additional logic in case the postToConnection call does not succeed:

if (err.statusCode === 410) {
  console.log("Found stale connection, deleting " + postParams.connectionId);
  DDB.deleteItem({ TableName: process.env.TABLE_NAME,
                   Key: { connectionId: { S: postParams.connectionId } } });
} else {
  console.log("Failed to post. Error: " + JSON.stringify(err));
}

API Gateway returns a status of 410 GONE when the connection is no longer available. If this happens, delete the identifier from the DynamoDB table.

To make calls to your connected clients, your application needs a new permission: “execute-api:ManageConnections”. This is handled for you in the AWS SAM template.
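If you manage the policy yourself instead, the statement looks roughly like the following; the Region, account ID, and API ID in the ARN are placeholders:

{
  "Effect": "Allow",
  "Action": "execute-api:ManageConnections",
  "Resource": "arn:aws:execute-api:us-east-1:123456789012:abcd1234/*"
}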

Deploy the WebSocket API

The next step is to deploy the WebSocket API. Because it’s the first time that you’re deploying the API, create a stage, such as “dev,” and give a sample description.

The Stage editor screen displays all the information for managing your WebSocket API, such as Settings, Logs/Tracing, Stage Variables, and Deployment History. Also, make sure that you check your limits and read more about API Gateway throttling.

Make sure that you monitor your tests using the dashboard available for your newly created WebSocket API.

Test the chat API

To test the WebSocket API, you can use wscat, an open-source, command line tool.

  1. Install Node.js, which includes npm.
  2. Install wscat:
    $ npm install -g wscat
  3. On the console, connect to your published API endpoint by executing the following command:
    $ wscat -c wss://{YOUR-API-ID}.execute-api.{YOUR-REGION}.amazonaws.com/{STAGE}
  4. To test the sendMessage function, send a JSON message like the following example. The Lambda function sends it back using the callback URL:
    $ wscat -c wss://{YOUR-API-ID}.execute-api.{YOUR-REGION}.amazonaws.com/dev
    connected (press CTRL+C to quit)
    > {"action":"sendmessage", "data":"hello world"}
    < hello world

If you run wscat in multiple consoles, each will receive the “hello world”.

You’re now ready to send messages and get responses back and forth from your WebSocket API!

Conclusion

In this post, I showed you how to use WebSocket APIs in API Gateway, including a message exchange between client and server using routes and the route selection expression. You used Lambda and DynamoDB to create a serverless real-time chat application. While this post only scratched the surface of what is possible, you did all of this without needing to manage any servers at all.

You can get started today by creating your own WebSocket APIs in API Gateway. To learn more, see the Amazon API Gateway Developer Guide. You can also watch the Building Real-Time Applications using WebSocket APIs Supported by Amazon API Gateway webinar hosted by George Mao.

I’m excited to see what new applications come up with your use of the new WebSocket API. I welcome any feedback here, in the AWS forums, or on Twitter.

Happy coding!

Announcing Ruby Support for AWS Lambda

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/announcing-ruby-support-for-aws-lambda/

This post is courtesy of Xiang Shen Senior – AWS Solutions Architect and Alex Wood Software Development Engineer – AWS SDKs and Tools

Ruby remains a popular programming language for AWS customers. In the summer of 2011, AWS introduced the initial release of AWS SDK for Ruby, which has helped Ruby developers to better integrate and use AWS resources. The SDK is now in its third major version and it continues to improve and deliver AWS API updates.

Today, AWS is excited to announce Ruby as a supported language for AWS Lambda.

Now it’s possible to write Lambda functions as idiomatic Ruby code, and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default. That makes it easy to interact with the AWS resources directly from your functions. In this post, we walk you through how it works, using examples:

  • Creating a Hello World example
  • Including dependencies
  • Migrating a Sinatra application

Creating a Hello World example

If you are new to Lambda, it’s simple to create a function using the console.

  1. Open the Lambda console.
  2. Choose Create function.
  3. Select Author from scratch.
  4. Name your function something like hello_ruby.
  5. For Runtime, choose Ruby 2.5.
  6. For Role, choose Create a new role from one or more templates.
  7. Name your role something like hello_ruby_role.
  8. Choose Create function.

Your function is created and you are directed to your function’s console page.

You can modify all aspects of your function, such as editing the function’s code, assigning one or more triggering services, or configuring additional services that your function can interact with. From the Monitoring tab, you can view various metrics about your function’s usage as well as a link to CloudWatch Logs.

As you can see in the code editor, the Ruby code for this Hello World example is basic. It has a single handler function named lambda_handler and returns an HTTP status code of 200 and the text “Hello from Lambda!” in a JSON structure. You can learn more about the programming model for Lambda functions.
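For reference, the generated handler looks roughly like the following minimal sketch:

require 'json'

def lambda_handler(event:, context:)
  # Return an HTTP 200 status code and a greeting in a JSON structure.
  { statusCode: 200, body: JSON.generate('Hello from Lambda!') }
end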

Next, test this Lambda function and confirm that it is working.

  1. On your function console page, choose Test.
  2. Name the test HelloRubyTest and clear out the data in the brackets. This function takes no input.
  3. Choose Save.
  4. Choose Test.

You should now see the results of a successful invocation of your Ruby Lambda function.

Including dependencies

When developing Lambda functions with Ruby, you probably need to include other dependencies in your code. To achieve this, use Bundler to download the needed RubyGems to a local directory and create a deployable application package. All dependencies need to be included in either this package or in a Lambda layer.

Try this with a Lambda function that uses the aws-record gem to save data into an Amazon DynamoDB table.

  1. Create a directory for your new Ruby application in your development environment:
    mkdir hello_ruby
    cd hello_ruby
  2. Inside of this directory, create a file Gemfile and add aws-record to it:
    source 'https://rubygems.org'
    gem 'aws-record', '~> 2'
  3. Create a hello_ruby_record.rb file with the following code. In the code, put_item is the handler method, which expects an event object with a body attribute. After it’s invoked, it saves the value of the body attribute along with a UUID to the table.
    # hello_ruby_record.rb
    require 'aws-record'
    
    class DemoTable
      include Aws::Record
      set_table_name ENV['DDB_TABLE']
      string_attr :id, hash_key: true
      string_attr :body
    end
    
    def put_item(event:,context:)
      body = event["body"]
      item = DemoTable.new(id: SecureRandom.uuid, body: body)
      item.save! # raise an exception if save fails
      item.to_h
    end 
  4. Next, bring in the dependencies for this application. Bundler is a tool used to manage RubyGems. From your application directory, run the following two commands. They create the Gemfile.lock file and download the gems to the local directory instead of to the local system's Ruby directory. This way, they ensure that all your dependencies are included in the function deployment package.
    bundle install
    bundle install --deployment
  5. AWS SAM is a templating tool that you can use to create and manage serverless applications. With it, you can define the structure of your Lambda application, define security policies and invocation sources, and manage or create almost any AWS resource. Use it now to help define the function and its policy, create your DynamoDB table and then deploy the application.
    Create a new file in your hello_ruby directory named template.yaml with the following contents:

    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31
    Description: 'sample ruby application'
    
    Resources:
      HelloRubyRecordFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: hello_ruby_record.put_item
          Runtime: ruby2.5
          Policies:
          - DynamoDBCrudPolicy:
              TableName: !Ref RubyExampleDDBTable 
          Environment:
            Variables:
              DDB_TABLE: !Ref RubyExampleDDBTable
    
      RubyExampleDDBTable:
        Type: AWS::Serverless::SimpleTable
        Properties:
          PrimaryKey:
            Name: id
            Type: String
    
    Outputs:
      HelloRubyRecordFunction:
        Description: Hello Ruby Record Lambda Function ARN
        Value:
          Fn::GetAtt:
          - HelloRubyRecordFunction
          - Arn

    In this template file, you define Serverless::Function and Serverless::SimpleTable as resources, which correspond to a Lambda function and DynamoDB table.

    The line Policies in the function and the following line DynamoDBCrudPolicy refer to an AWS SAM policy template, which greatly simplifies granting permissions to Lambda functions. The DynamoDBCrudPolicy allows you to create, read, update, and delete DynamoDB resources and items in tables.

    In this example, you limit permissions by specifying TableName and passing a reference to Serverless::SimpleTable that the template creates. Next in importance is the Environment section of this template, where you create a variable named DDB_TABLE and also pass it a reference to Serverless::SimpleTable. Lastly, the Outputs section of the template allows you to easily find the function that was created.

    The directory structure now should look like the following:

    $ tree -L 2 -a
    .
    ├── .bundle
    │   └── config
    ├── Gemfile
    ├── Gemfile.lock
    ├── hello_ruby_record.rb
    ├── template.yaml
    └── vendor
        └── bundle
  6. Now use the template file to package and deploy your application. An AWS SAM template can be deployed using the AWS CloudFormation console, AWS CLI, or AWS SAM CLI. The AWS SAM CLI is a tool that simplifies serverless development across the lifecycle of your application, from the initial creation of a serverless project, through local testing and debugging, to deployment to AWS. Follow the steps for your platform to get the AWS SAM CLI installed.
  7. Create an Amazon S3 bucket to store your application code. Run the following AWS CLI command to create an S3 bucket with a custom name:
    aws s3 mb s3://<bucketname>
  8. Use the AWS SAM CLI to package your application:
    sam package --template-file template.yaml \
    --output-template-file packaged-template.yaml \
    --s3-bucket <bucketname>

    This creates a new template file named packaged-template.yaml.

  9. Use the AWS SAM CLI to deploy your application. Use any stack-name value:
    sam deploy --template-file packaged-template.yaml \
    --stack-name helloRubyRecord \
    --capabilities CAPABILITY_IAM
    
    Waiting for changeset to be created...
    Waiting for stack create/update to complete
    Successfully created/updated stack - helloRubyRecord

    This can take a few moments to create all of the resources from the template. After you see the output “Successfully created/updated stack,” it has completed.

  10. In the Lambda Serverless Applications console, you should see your application listed.

  11. To see the application dashboard, choose the name of your application stack. Because this application was deployed with either AWS SAM or AWS CloudFormation, the dashboard allows you to manage the resources as a single group with a number of features. You can view the stack resources, its template, recent deployments, and metrics, including any custom dashboards you might create.

Now test the Lambda function and confirm that it is working.

  1. Choose Overview. Under Resources, select the Lambda function created in this application.

  2. In the Lambda function console, configure a test as you did earlier. Use the following JSON:
    {"body": "hello lambda"}
  3. Execute the test and you should see a success message.

  4. In the Lambda Serverless Applications console for this stack, select the DynamoDB table created.

  5. Choose Items.

The id and body should match the output from the Lambda function test run.
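
If you prefer to verify from code instead of the console, the same checks can be scripted with the AWS SDK for Python (boto3). This is a minimal sketch, not part of the original walkthrough; the function and table names are placeholders you would copy from the stack's Resources list:

import json
import boto3

# Placeholder names; copy the real ones from the stack's Resources list
FUNCTION_NAME = 'helloRubyRecord-HelloRubyRecordFunction-XXXX'
TABLE_NAME = 'helloRubyRecord-RubyExampleDDBTable-XXXX'

# Invoke the function with the same test event used in the console
result = boto3.client('lambda').invoke(
    FunctionName=FUNCTION_NAME,
    Payload=json.dumps({'body': 'hello lambda'}),
)
print(json.loads(result['Payload'].read()))

# Scan the table to confirm the item was written
table = boto3.resource('dynamodb').Table(TABLE_NAME)
print(table.scan()['Items'])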

You just created a Ruby-based Lambda application using AWS SAM!

Migrating a Sinatra application

Sinatra is a popular open source framework for Ruby that launched over a decade ago. It allows you to quickly create powerful web applications with minimal effort. Until today, you still would have needed servers to run those applications. Now, you can just deploy a Sinatra app to Lambda and move to a serverless world!

Thanks to Rack, a Ruby webserver interface, you only need to create a simple Lambda function to bridge the gap between the HTTP requests and the serverless Sinatra application. You don’t need to make additional changes to other Sinatra files at all. Paired with Amazon API Gateway and DynamoDB, your Sinatra application runs completely serverless!

For this post, take an existing Sinatra application and make it function in Lambda.

  1. Clone the serverless-sinatra-sample GitHub repository into your local environment.
    Under the app directory, find the Sinatra application files. The files enable you to specify routes to return either JSON or HTML that is generated from ERB templates in the server.rb file.

    ├── app
    │   ├── config.ru
    │   ├── server.rb
    │   └── views
    │       ├── feedback.erb
    │       ├── index.erb
    │       └── layout.erb

    In the root of the directory, you also find the template.yaml and lambda.rb files. The template.yaml includes four resources:

    • Serverless::Function
    • Serverless::API
    • Serverless::SimpleTable
    • Lambda::Permission

    In the lambda.rb file, you find the main handler for this function, which calls Rack to interface with the Sinatra application.

  2. This application has several dependencies, so use Bundler to install them:
    bundle install
    bundle install --deployment
  3. Package this Lambda function and the related application components using the AWS SAM CLI:
    sam package --template-file template.yaml \
    --output-template-file packaged-template.yaml \
    --s3-bucket <bucketname>
  4. Next, deploy the application:
    sam deploy --template-file packaged-template.yaml \
    --stack-name LambdaSinatra \
    --capabilities CAPABILITY_IAM                                                                                                        
    
    Waiting for changeset to be created..
    Waiting for stack create/update to complete
    Successfully created/updated stack - LambdaSinatra
  5. In the Lambda Serverless Applications console, select your application.

  6. Choose Overview. Under Resources, find the ApiGateway RestApi entry and select the Logical ID (SinatraAPI).

  7. In the API Gateway console, in the left navigation pane, choose Dashboard. Copy the URL from Invoke this API and paste it into another browser tab.

  8. Append a route from the Sinatra application to the URL, as defined in server.rb.

For example, appending /hello-world invokes the hello-world GET route, and appending /feedback renders the feedback page.

Congratulations, you’ve just successfully deployed a Sinatra-based Ruby application inside of a Lambda function!

Conclusion

As you’ve seen in this post, getting started with Ruby on Lambda is easy, whether through the AWS Management Console or the AWS SAM CLI.

You might even be able to easily port existing applications to Lambda without needing to change your code base. The new support for Ruby allows you to benefit from the greatly reduced operational overhead, scalability, availability, and pay-per-use pricing of Lambda.

If you are excited about this feature as well, there is even more information on writing Lambda functions in Ruby in the AWS Lambda Developer Guide.

Happy coding!

Amazon API Gateway adds support for AWS WAF

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/amazon-api-gateway-adds-support-for-aws-waf/

This post courtesy of Heitor Lessa, AWS Specialist Solutions Architect – Serverless

Today, I’m excited to tell you about the Amazon API Gateway native integration with AWS WAF. Previously, if you wanted to secure your API in Amazon API Gateway with AWS WAF, you had to deploy a Regional API endpoint and use your own Amazon CloudFront distribution. This new feature enables you to provision any API Gateway endpoint and secure it with AWS WAF without having to configure your own CloudFront distribution to add that capability.

In Part 1 of this series, I described how to protect your API provided by API Gateway using AWS WAF.

In Part 2 of this series, I described how to use API keys as a shared secret between a CloudFront distribution and API Gateway to secure public access to your API in API Gateway. This new AWS WAF integration means that the method described in Part 2 is no longer necessary.

The following image describes methods to secure your API in API Gateway before and after this feature was made available.

Where:

  1. AWS WAF securing CloudFront endpoint only.
  2. AWS WAF securing both CloudFront and API Gateway endpoints natively.
  3. AWS WAF securing Amazon API Gateway endpoints natively where global presence is not needed.

Enabling AWS WAF for an API managed by Amazon API Gateway

For this walkthrough, you can use an existing Pet Store API or any API in API Gateway that you may already have deployed. You create a new AWS WAF web ACL that is later associated with your API Gateway stage.

Follow these steps to create a web ACL:

  1. Open the AWS WAF console.
  2. Choose Create web ACL.
  3. For Web ACL Name, enter ApiGateway-HTTP-Flood-Sample.
  4. For Region, choose US East (N. Virginia).
  5. Choose Next until you reach Step 3: Create rules.
  6. Choose Create rule and enter HTTP Flood Sample.
  7. For Rule type, choose Rate-based rule.
  8. For Rate limit, enter 2000 and choose Create.
  9. For Default action, choose Allow all requests that don’t match any rules.
  10. Choose Review and create.
  11. Confirm that your options look similar to the following image and choose Confirm and create.

You can now follow the steps to enable the AWS WAF web ACL for an existing API in API Gateway:

  1. Open the Amazon API Gateway console.
  2. Choose Stages, prod.
  3. Under Web Application Firewall (WAF), choose ApiGateway-HTTP-Flood-Sample (or the web ACL that you just created).
  4. Choose Save Changes.

Testing your API in API Gateway now secured by AWS WAF

AWS WAF provides HTTP flood protection through a rate-based rule. The rate-based rule is automatically triggered when web requests from a client exceed a configurable threshold, defined as the maximum number of incoming requests allowed from a single IP address within a five-minute period.

After this threshold is breached, additional requests from the IP address are blocked until the request rate falls below the threshold. For this example, you defined 2000 requests as the threshold for the HTTP flood rate-based rule.
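
If you prefer to script these steps rather than click through the console, here is a minimal boto3 sketch of the same web ACL setup; the API ID, stage name, Region, and metric names are placeholder assumptions:

import boto3

waf = boto3.client('waf-regional', region_name='us-east-1')

def change_token():
    # Classic WAF requires a fresh change token for every mutating call
    return waf.get_change_token()['ChangeToken']

# Rate-based rule: block IPs exceeding 2000 requests in 5 minutes
rule = waf.create_rate_based_rule(
    Name='HTTPFloodSample', MetricName='HTTPFloodSample',
    RateKey='IP', RateLimit=2000, ChangeToken=change_token(),
)['Rule']

# Web ACL that allows requests by default
acl = waf.create_web_acl(
    Name='ApiGateway-HTTP-Flood-Sample', MetricName='ApiGatewayFloodSample',
    DefaultAction={'Type': 'ALLOW'}, ChangeToken=change_token(),
)['WebACL']

# Attach the rate-based rule to the web ACL with a BLOCK action
waf.update_web_acl(
    WebACLId=acl['WebACLId'], ChangeToken=change_token(),
    Updates=[{
        'Action': 'INSERT',
        'ActivatedRule': {
            'Priority': 1, 'RuleId': rule['RuleId'],
            'Action': {'Type': 'BLOCK'}, 'Type': 'RATE_BASED',
        },
    }],
)

# Associate the web ACL with the prod stage (placeholder API ID)
waf.associate_web_acl(
    WebACLId=acl['WebACLId'],
    ResourceArn='arn:aws:apigateway:us-east-1::/restapis/xxxxxxxxxx/stages/prod',
)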

Artillery, a modern open-source load testing toolkit, is used to send a large number of requests directly to the API Gateway Invoke URL to test whether your AWS WAF native integration is working correctly.

Firstly, follow these steps to retrieve the correct Invoke URL of your Pet Store API:

  1. Open the API Gateway console.
  2. In the left navigation pane, open the PetStore API.
  3. Choose Stages, select prod, and copy the Invoke URL value.

Secondly, use cURL to call your API and see the output before the rate limit rule is triggered:

$ curl -s INVOKE_URL/pets

[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
] 

Then, use Artillery to send a large number of requests in a short period of time to trigger your rate limit rule:

$ artillery quick -n 2000 --count 10 INVOKE_URL/pets

With this command, Artillery sends 2000 requests to your PetStore API from 10 concurrent users. By doing so, you trigger the rate limit rule in less than the 5-minute threshold. For brevity, I am not posting the Artillery output here.

After Artillery finishes its execution, try re-running the cURL command. You should no longer see a list of pets:

{"message":"Forbidden"}

As you can see from the output, the request was blocked by AWS WAF. Your IP address is removed from the blocked list after your request rate falls back below the threshold.

Conclusion

As you can see, with the AWS WAF native integration with Amazon API Gateway, you no longer have to manage your own Amazon CloudFront distribution in order to secure your API with AWS WAF. The AWS WAF native integration makes this process seamless.

I hope that you found the information in this post helpful. Remember that you can use this integration today with all Amazon API Gateway endpoints (Edge, Regional, and Private). It is available in the following Regions:

  • US East (N. Virginia)
  • US East (Ohio)
  • US West (Oregon)
  • US West (N. California)
  • EU (Ireland)
  • EU (Frankfurt)
  • Asia Pacific (Sydney)
  • Asia Pacific (Tokyo)

Support for multi-value parameters in Amazon API Gateway

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/support-for-multi-value-parameters-in-amazon-api-gateway/

This post is courtesy of Akash Jain, Partner Solutions Architect – AWS

The new multi-value parameter support feature for Amazon API Gateway allows you to pass multiple values for the same key in the header and query string as part of your API request. It also allows you to pass multi-value headers in the API response to implement things like sending multiple Set-Cookie headers.

As part of this feature, AWS added two new keys:

  • multiValueQueryStringParameters—Used in the API Gateway request to support a multi-valued parameter in the query string.
  • multiValueHeaders—Used in both the request and response to support multi-valued headers.

In this post, I walk you through examples for using this feature by extending the PetStore API. You add pet search functionality to the PetStore API and then add personalization by setting the language and UI theme cookies.

The following AWS CloudFormation stack creates all resources for both examples. The stack:

  • Creates the extended PetStore API represented using the OpenAPI 3.0 standard.
  • Creates three AWS Lambda functions. One each for implementing the pet search, get user profile, and set user profile functionality.
  • Creates two IAM roles. The Lambda function assumes one and API Gateway assumes the other to call the Lambda function.
  • Deploys the PetStore API to Staging.

Add pet search functionality to the PetStore API

As part of adding the search feature, I demonstrate how you can use multiValueQueryStringParameters for sending and retrieving multi-valued parameters in a query string.

Use the PetStore example API available in the API Gateway console and create a /search resource type under the /pets resource type with GET (read) access. Then, configure the GET method to use AWS Lambda proxy integration.

The CloudFormation stack launched earlier gives you the PetStore API staging endpoint as an output that you can use for testing the search functionality. Assume that the user has an interface to enter the pet types for searching and wants to search for “dog” and “fish.” The pet search API request looks like the following where the petType parameter is multi-valued:

https://xxxx.execute-api.us-east-1.amazonaws.com/staging/pets/search?petType=dog&petType=fish

When you invoke the pet search API action, you get a successful response with both the dog and fish details:

 [
  {
    "id": 11212,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 31231
    "type": "fish",
    "price": 0.99
  }
]

Processing multi-valued query string parameters

Here’s how the multi-valued parameter petType with value "petType=dog&petType=fish" gets processed by API Gateway. To demonstrate, here’s the input event sent by API Gateway to the Lambda function. The log details follow. As it was a long input, a few keys have been removed for brevity.

{ resource: '/pets/search',
path: '/pets/search',
httpMethod: 'GET',
headers: 
{ Accept: 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US,en;q=0.9',
Host: 'xyz.execute-api.us-east-1.amazonaws.com',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
Via: '1.1 382909590d138901660243559bc5e346.cloudfront.net (CloudFront)',
'X-Amz-Cf-Id': 'motXi0bgd4RyV--wvyJnKpJhLdgp9YEo7_9NeS4L6cbgHkWkbn0KuQ==',
'X-Amzn-Trace-Id': 'Root=1-5bab7b8b-f1333fbc610288d200cd6224',
'X-Forwarded-Proto': 'https' },
queryStringParameters: { petType: 'fish' },
multiValueQueryStringParameters: { petType: [ 'dog', 'fish' ] },
pathParameters: null,
stageVariables: null,
requestContext: 
{ resourceId: 'jy2rzf',
resourcePath: '/pets/search',
httpMethod: 'GET',
extendedRequestId: 'N1A9yGxUoAMFWMA=',
requestTime: '26/Sep/2018:12:28:59 +0000',
path: '/staging/pets/search',
protocol: 'HTTP/1.1',
stage: 'staging',
requestTimeEpoch: 1537964939459,
requestId: 'be70816e-c187-11e8-9d99-eb43dd4b0381',
apiId: 'xxxx' },
body: null,
isBase64Encoded: false }

There is a new key, multiValueQueryStringParameters, available in the input event. This key is added as part of the multi-value parameter feature to retain multiple values for the same parameter in the query string.

Before this change, API Gateway used to retain only the last value and drop everything else for a multi-valued parameter. You can see the original behavior in the queryStringParameters parameter in the above input, where only the “fish” value is retained.

Accessing the new multiValueQueryStringParameters key in a Lambda function

Use the new multiValueQueryStringParameters key available in the event context of the Lambda function to retrieve the multi-valued query string parameter petType that you passed in the query string of the search API request. You can use that value for searching pets. Retrieve the parameter values from the event context by parsing event.multiValueQueryStringParameters.petType.

exports.handler = (event, context, callback) => {
    
    // Log the input event
    console.log(event);
    
    // Extract the multi-valued parameter from the input
    var petTypes = event.multiValueQueryStringParameters.petType;
    
    // Call the search functionality (searchPets is application code, not shown)
    var searchResults = searchPets(petTypes);
    
    const response = {
        statusCode: 200,
        // For Lambda proxy integration, the body must be a string
        body: JSON.stringify(searchResults)
    };
    callback(null, response);
};

The multiValueQueryStringParameters key is present in the input request regardless of whether the request contains keys with multiple values. You don’t have to change your APIs to enable this feature, unless you are already using a key named multiValueQueryStringParameters.
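
Because the key is always present but may be null, it is worth guarding the lookup. Here is a minimal sketch of an equivalent handler in Python; search_pets is a hypothetical stand-in for the real search logic, not part of the original example:

import json

def search_pets(pet_types):
    # Hypothetical stand-in for the real search; returns matching pets
    return [{'type': t} for t in pet_types]

def handler(event, context):
    # multiValueQueryStringParameters is None when no query string is sent
    params = event.get('multiValueQueryStringParameters') or {}
    pet_types = params.get('petType', [])

    results = search_pets(pet_types)

    # For Lambda proxy integration, the body must be a string
    return {'statusCode': 200, 'body': json.dumps(results)}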

Add personalization

Here’s how the newly added multiValueHeaders key is useful for sending cookies with multiple Set-Cookie headers in an API response.

Personalize the Pet Store web application by setting a user-specific theme and language settings. Usually, web developers use cookies to store this kind of information. To accomplish this, send cookies that store theme and language information as part of the API response. The browser then stores them and the web application uses them to get the required information.

To set the cookies, you need the new API resources /users and /profile, with read and write access in the PetStore API. Then, configure the GET and POST methods of the /profile resource to use AWS Lambda proxy integration. The CloudFormation stack that you launched earlier has already created the necessary resources.

Invoke the POST profile API call at the following URL. It passes the user preferences for language and theme in the request body and returns Set-Cookie headers:

https://XXXXXX.execute-api.us-east-1.amazonaws.com/staging/users/profile

The request body looks like the following:

{
    "userid" : 123456456,
    "preferences" : {"language":"en-US", "theme":"blue moon"}
}

You get a successful response with the “200 OK” status code and two Set-Cookie headers for setting the language and theme cookies.

Passing multiple Set-Cookie headers in the response

The following code example is the setUserProfile Lambda function code that processes the input request and sends the language and theme cookies:

exports.handler = (event, context, callback) => {

    // Get the request body
    var requestBody = JSON.parse(event.body);
 
    // Retrieve the language and theme values
    var language = requestBody.preferences.language;
    var theme = requestBody.preferences.theme; 
    
    const response = {
        // The body is a plain string, so it is not base64-encoded
        isBase64Encoded: false,
        statusCode: 200,
        // Each value in the list becomes its own Set-Cookie header
        multiValueHeaders : {"Set-Cookie": [`language=${language}`, `theme=${theme}`]},
        body: JSON.stringify('User profile set successfully')
    };
    callback(null, response);
};

You can see that the newly added multiValueHeaders key passes multiple cookies as a list in the response. The multiValueHeaders key is translated into multiple Set-Cookie headers by API Gateway and appears to the API client as the following:

Set-Cookie →language=en-US
Set-Cookie →theme=blue moon

You can also pass the header key along with the multiValueHeaders key. In that case, API Gateway merges the multiValueHeaders and headers maps while processing the integration response into a single Map<String, List<String>> value. If the same key-value pair is sent in both, it isn’t duplicated.

Retrieving the headers from the request

If you use the Postman tool (or something similar) to invoke the GET profile API call, you can send cookies as part of the request. The theme and language cookie that you set above in the POST /profile API request can now be sent as part of the GET /profile request.

https://xxx.execute-api.us-east-1.amazonaws.com/staging/users/profile

You get the following response with the details of the user preferences:

{
    "userId": 12343,
    "preferences": {
        "language": "en-US",
        "theme": "beige"
    }
}

When you log the input event of the getUserProfile Lambda function, you can see both newly added keys, multiValueQueryStringParameters and multiValueHeaders. These keys are present in the event context of the Lambda function regardless of whether there is a value. You can retrieve the value by parsing the event context, as in event.multiValueHeaders.Cookie (a parsing sketch follows the event dump below).

{ resource: '/users/profile',
path: '/users/profile',
httpMethod: 'GET',
headers: 
{ Accept: '*/*',
'Accept-Encoding': 'gzip, deflate',
'cache-control': 'no-cache',
'CloudFront-Forwarded-Proto': 'https',
'CloudFront-Is-Desktop-Viewer': 'true',
'CloudFront-Is-Mobile-Viewer': 'false',
'CloudFront-Is-SmartTV-Viewer': 'false',
'CloudFront-Is-Tablet-Viewer': 'false',
'CloudFront-Viewer-Country': 'IN',
Cookie: 'language=en-US; theme=blue moon',
Host: 'xxxx.execute-api.us-east-1.amazonaws.com',
'Postman-Token': 'acd5b6b3-df97-44a3-8ea8-2d929efadd96',
'User-Agent': 'PostmanRuntime/7.3.0',
Via: '1.1 6bf9df28058e9b2a0034a51c5f555669.cloudfront.net (CloudFront)',
'X-Amz-Cf-Id': 'VRePX_ktEpyTYh4NGyk90D4lMUEL-LBYWNpwZEMoIOS-9KN6zljA7w==',
'X-Amzn-Trace-Id': 'Root=1-5bb6111e-e67c17ea657ed03d5dddf869',
'X-Forwarded-For': 'xx.xx.xx.xx, yy.yy.yy.yy',
'X-Forwarded-Port': '443',
'X-Forwarded-Proto': 'https' },
multiValueHeaders: 
{ Accept: [ '*/*' ],
'Accept-Encoding': [ 'gzip, deflate' ],
'cache-control': [ 'no-cache' ],
'CloudFront-Forwarded-Proto': [ 'https' ],
'CloudFront-Is-Desktop-Viewer': [ 'true' ],
'CloudFront-Is-Mobile-Viewer': [ 'false' ],
'CloudFront-Is-SmartTV-Viewer': [ 'false' ],
'CloudFront-Is-Tablet-Viewer': [ 'false' ],
'CloudFront-Viewer-Country': [ 'IN' ],
Cookie: [ 'language=en-US; theme=blue moon' ],
Host: [ 'xxxx.execute-api.us-east-1.amazonaws.com' ],
'Postman-Token': [ 'acd5b6b3-df97-44a3-8ea8-2d929efadd96' ],
'User-Agent': [ 'PostmanRuntime/7.3.0' ],
Via: [ '1.1 6bf9df28058e9b2a0034a51c5f555669.cloudfront.net (CloudFront)' ],
'X-Amz-Cf-Id': [ 'VRePX_ktEpyTYh4NGyk90D4lMUEL-LBYWNpwZEMoIOS-9KN6zljA7w==' ],
'X-Amzn-Trace-Id': [ 'Root=1-5bb6111e-e67c17ea657ed03d5dddf869' ],
'X-Forwarded-For': [ 'xx.xx.xx.xx, yy.yy.yy.yy' ],
'X-Forwarded-Port': [ '443' ],
'X-Forwarded-Proto': [ 'https' ] },
queryStringParameters: null,
multiValueQueryStringParameters: null,
pathParameters: null,
stageVariables: null,
requestContext: 
{ resourceId: '90cr24',
resourcePath: '/users/profile',
httpMethod: 'GET',
extendedRequestId: 'OPecvEtWoAMFpzg=',
requestTime: '04/Oct/2018:13:09:50 +0000',
path: '/staging/users/profile',
accountId: 'xxxxxx',
protocol: 'HTTP/1.1',
stage: 'staging',
requestTimeEpoch: 1538658590316,
requestId: 'c691746f-c7d6-11e8-83b8-176659b7d74d',
identity: 
{ cognitoIdentityPoolId: null,
accountId: null,
cognitoIdentityId: null,
caller: null,
sourceIp: 'xx.xx.xx.xx',
accessKey: null,
cognitoAuthenticationType: null,
cognitoAuthenticationProvider: null,
userArn: null,
userAgent: 'PostmanRuntime/7.3.0',
user: null },
apiId: 'xxxx' },
body: null,
isBase64Encoded: false }
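
For reference, here is a minimal sketch (not from the original post) of how a Python handler might fold that Cookie header into a dictionary; each header value can carry several name=value pairs separated by semicolons:

def get_cookies(event):
    # Collect cookies from every Cookie header in multiValueHeaders
    cookies = {}
    headers = event.get('multiValueHeaders') or {}
    for header_value in headers.get('Cookie', []):
        for pair in header_value.split(';'):
            name, _, value = pair.strip().partition('=')
            if name:
                cookies[name] = value
    return cookies

# For the event above, this returns:
# {'language': 'en-US', 'theme': 'blue moon'}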

HTTP proxy integration

For the API Gateway HTTP proxy integration, the headers and query string are proxied to the downstream HTTP call in the same way that they are invoked. The newly added keys aren’t present in the request or response. For example, the petType parameter from the search API goes as a list to the HTTP endpoint.

Similarly, for setting multiple Set-Cookie headers, you can set them the way that you would usually. For Node.js, it looks like the following:

res.setHeader("Set-Cookie", ["theme=beige", "language=en-US"]);

Mapping requests and responses for new keys

For mapping requests, you can access new keys by parsing the method request like method.request.multivaluequerystring.<KeyName> and method.request.multivalueheader.<HeaderName>. For mapping responses, you would parse the integration response like integration.response.multivalueheaders.<HeaderName>.

For example, the following is the search pet API request example:

curl 'https://{hostname}/pets/search?petType=dog&petType=fish' \
-H 'cookie: language=en-US' \
-H 'cookie: theme=beige'

The new request mapping looks like the following:

"requestParameters" : {
    "integration.request.querystring.petType" : "method.request.multivaluequerystring.petType",
    "integration.request.header.cookie" : "method.request.multivalueheader.cookie"
}

The new response mapping looks like the following:

"responseParameters" : { 
 "method.response.header.Set-Cookie" : "integration.response.multivalueheaders.Set-Cookie", 
 ... 
}

For more information, see Set up Lambda Proxy Integrations in API Gateway.

Cleanup

To avoid incurring ongoing charges for the resources that you’ve used, delete the CloudFormation stack that you created earlier.

Conclusion

Multi-value parameter support enables you to pass multi-valued parameters in the query string and multi-valued headers as part of your API response. Both the multiValueQueryStringParameters and multiValueHeaders keys are present in the input request to the Lambda function regardless of whether there is a value. These keys are also accessible for mapping requests and responses.

Overriding request/response parameters and response status in Amazon API Gateway

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/overriding-request-response-parameters-and-response-status-in-amazon-api-gateway/

This post is courtesy of Akash Jain, Partner Solutions Architect – AWS

APIs are driving modern architectures. Many customers are moving their traditional architecture to service-oriented or microservices-based architectures.

As part of this transition, you may face a few complex situations. For example, a backend API returns an HTTP status code of 2XX that must be changed to 4XX because of a new API architecture standard in the organization. Or you may want to override the headers or query string of the API to differentiate the client request without doing any API code changes.

Developers are looking for easier ways to handle such situations. With the new override feature of Amazon API Gateway, you can set the required status and parameter mappings and handle these kinds of situations.

As part of the override feature, AWS introduced new context variables for overriding both request and response parameters.

The new context variables available in request body mapping templates are:

  • $context.requestOverride.header.<header_name>
  • $context.requestOverride.path.<path_name>
  • $context.requestOverride.querystring.<querystring_name>

The new context variables available in response body mapping templates are:

  • $context.responseOverride.header.<header_name>
  • $context.responseOverride.status

Using these context variables, you can:

  • Create a new header (or overwrite an existing header) as a concatenation of two parameters.
  • Override the response code to a success or failure code based on the contents of the body.
  • Conditionally remap a parameter based on its contents or the contents of some other parameter.
  • Iterate over the contents of a JSON body and remap key-value pairs to headers or query strings.

In this post, I demonstrate an example for overriding the status code without modifying the actual API code. Use the PetStore API, which is available as a sample API under Amazon API Gateway.

Override responses

Invoke the GET method on the /pets/{petId} resource by passing -1 as the petId value. You get the following response.

You can see that the status code is 200 and the error message is “The value is out of range”. You may challenge this response by asking why the status code is not “400 – Bad Request”, as the petId parameter value passed by the client is incorrect. Both approaches have merit. For this post, use “400 – Bad request” as 400 is the right code for a client error.

Your requirement is to implement this change without modifying the API code because of the risks involved or because there is no control over the legacy API. To implement this scenario, use the newly introduced override status feature with mapping templates.

To access mapping templates, under the GET request for the /pets/{petID} resource, choose Integration response.

Under Body Mapping templates, for Content-Type, enter application/json. Choose Add mapping template.

Paste the following code in the template area and choose Save.

#set($inputRoot = $input.path('$'))
$input.json("$")
#if($inputRoot.toString().contains("error"))
    #set($context.responseOverride.status = 400)
#end

When you run the test again by passing -1 for petId, you get the following response.

The status code changed from 200 to 400 without modifying the actual API.

In this PetStore API example, you use the $context.responseOverride.status variable to override the response status when you see an error key in the body content. The following code snippet searches for the error string in the response body and performs the status override.

#if($inputRoot.toString().contains("error"))
    #set($context.responseOverride.status = 400)
#end

Override request headers, paths, and query strings

You can override the response status using the new response status override feature. Similarly, you can override request headers, paths, and query strings by using the $context.requestOverride variable.

Consider a situation where you have launched a new version of your API that is highly performant and much improved. However, your existing API clients are not willing to transition because of the changes that would be involved to their existing systems. You want to maintain a single API version for low maintenance, performance, and agility. What choices do you have?

The override request feature can come to the rescue. For example, you have an older version of an API endpoint that looks like the following, with “v1” in the path and a productId parameter in the query string with value=1234.

https://xxxx.execute-api.us-east-1.amazonaws.com/prod/v1/products?productId=1234

The new version of the endpoint looks like the following, with “v2” in the path and a productId parameter in the query string that requires the client to send value=4567. It also accepts a header to check if the request is from a client using the old API. To demonstrate this feature, create a request path parameter appversion that can accept any string as the version path.

https://xxxx.execute-api.us-east-1.amazonaws.com/prod/v2/pets?productId=4567

Using the new $context.requestOverride variable in the body mapping template, you can now override the requested path value from “v1” to “v2” and the query string parameter value from “1234” to “4567”. You can also add a new header to pass a flag for requests from an old API version. For example, see the following mapping template.

#set($version = "v2")
#set($newProductKey = "4567")
#set($customerFlag = "1")
## Perform normal body mapping. In this case, just pass the body mapping through **
$input.json("$")
## Override the path parameter **
#set($context.requestOverride.path.appversion = $version)
## Override the query string parameter **
#set($context.requestOverride.querystring.productId = $newProductKey)
## Add a new header parameter **
#set($context.requestOverride.header.flag = $customerFlag)

In the above mapping template, first set the values of the required path, query string, and header for your new backend API. Then, use $input.json("$") to do normal body mapping. Finally, override the API path, query string, and header with the values set earlier.

When you run your API with the updated body template, the API Gateway logs show that overrides have been successfully applied.

Overrides are final. An override may only be applied to each parameter one time. Trying to override the same parameter multiple times results in 5XX responses from API Gateway.

If you must override the same parameter multiple times to build it throughout the template, I recommend creating a variable and applying the override at the end of the template. For more information, see Use a Mapping Template to Override an API’s Request and Response Parameters and Status Codes.

Conclusion

In this post, I showed you how to change the response status code of an API with the new override feature of API Gateway without modifying the API itself. You also saw how to transform requests coming to your new API from existing or legacy clients, without asking your customer to do any code changes.

The override feature enables you to have more control over the request and response parameters and status. You can now integrate legacy APIs to API Gateway, even when you want a different status from the API or to pass a different set of parameters based on the client call.

ICYMI: Serverless Q3 2018

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/icymi-serverless-q3-2018/

Welcome to the third edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

If you didn’t see them, catch our Q1 ICYMI and Q2 ICYMI posts for what happened then.

So, what might you have missed this past quarter? Here’s the recap.

AWS Amplify CLI

In August, AWS Amplify launched the AWS Amplify Command Line Interface (CLI) toolchain for developers.

The AWS Amplify CLI enables developers to build, test, and deploy full web and mobile applications based on AWS Amplify directly from their CLI. It has built-in helpers for configuring AWS services such as Amazon Cognito for Auth, Amazon S3 and Amazon DynamoDB for storage, and Amazon API Gateway for APIs. With these helpers, developers can configure AWS services to interact with applications built in popular web frameworks such as React.

Get started with the AWS Amplify CLI toolchain.

New features

Rejoice Microsoft application developers: AWS Lambda now supports .NET Core 2.1 and PowerShell Core!

AWS SAM had a few major enhancements to help in both testing and debugging functions. The team launched support to locally emulate an endpoint for Lambda so that you can run automated tests against your functions. This differs from the existing functionality that emulated a proxy similar to API Gateway in front of your function. Combined with the new improved support for ‘sam local generate-event’ to generate over 50 different payloads, you can now test Lambda function code that would be invoked by almost all of the various services that interface with Lambda today. On the operational front, AWS SAM can now fetch, tail, and filter logs generated by your functions running live on AWS. Finally, with integration with Delve, a debugger for the Go programming language, you can more easily debug your applications locally.

If you’re part of an organization that uses AWS Service Catalog, you can now launch applications based on AWS SAM, too.

The AWS Serverless Application Repository launched new search improvements to make it even faster to find serverless applications that you can deploy.

In July, AWS AppSync added HTTP resolvers so that now you can query your REST APIs via GraphQL! API Inception! AWS AppSync also added new built-in scalar types to help with data validation at the GraphQL layer instead of having to do this in code that you write yourself. For building your GraphQL-based applications on AWS AppSync, an enhanced no-code GraphQL API builder enables you to model your data, and the service generates your GraphQL schema, Amazon DynamoDB tables, and resolvers for your backend. The team also published a Quick Start for using Amazon Aurora as a data source via a Lambda function. Finally, the service is now available in the Asia Pacific (Seoul) Region.

Amazon API Gateway announced support for AWS X-Ray!

With X-Ray integrated in API Gateway, you can trace and profile application workflows starting at the API layer and going through the backend. You can control the sample rates at a granular level.

API Gateway also announced improvements to usage plans that allow for method level throttling, request/response parameter and status overrides, and higher limits for the number of APIs per account for regional, private, and edge APIs. Finally, the team added support for the OpenAPI 3.0 API specification, the next generation of OpenAPI 2, formerly known as Swagger.

AWS Step Functions is now available in the Asia Pacific (Mumbai) Region. You can also build workflows visually with Step Functions and trigger them directly with AWS IoT Rules.

AWS Lambda@Edge now makes the HTTP request body for POST and PUT requests available.

AWS CloudFormation announced Macros, a feature that enables customers to extend the functionality of AWS CloudFormation templates by calling out to transformations that Lambda powers. Macros are the same technology that enables SAM to exist.

Serverless posts

July:

August:

September:

Tech Talks

We hold several Serverless tech talks throughout the year, so look out for them in the Serverless section of the AWS Online Tech Talks page. Here are the three tech talks that we delivered in Q3:

Twitch

We’ve been busy streaming deeply technical content to you the past few months! Check out awesome sessions like this one by AWS’s Heitor Lessa and Jason Barto diving deep into Continuous Learning for ML and the entire “Build on Serverless” playlist.

For information about upcoming broadcasts and recent live streams, keep an eye on AWS on Twitch for more Serverless videos and on the Join us on Twitch AWS page.

For AWS Partners

In September, we announced the AWS Serverless Navigate program for AWS APN Partners. Via this program, APN Partners can gain a deeper understanding of the AWS Serverless Platform, including many of the services mentioned in this post. The program’s phases help partners learn best practices such as the Well-Architected Framework, business and technical concepts, and growing their business’s ability to better support AWS customers in their serverless projects.

Check out more at AWS Serverless Navigate.

In other news

AWS re:Invent 2018 is coming in just a few weeks! For November 26–30 in Las Vegas, Nevada, join tens of thousands of AWS customers to learn, share ideas, and see exciting keynote announcements. The agenda for Serverless talks contains over 100 sessions where you can hear about serverless applications and technologies from fellow AWS customers, AWS product teams, solutions architects, evangelists, and more.

Register for AWS re:Invent now!

Want to get a sneak peek into what you can expect at re:Invent this year? Check out the awesome re:Invent Guides put out by AWS Community Heroes. AWS Community Hero Eric Hammond (@esh on Twitter) published one for advanced serverless attendees that you will want to read before the big event.

What did we do at AWS re:Invent 2017? Check out our recap: Serverless @ re:Invent 2017.

Still looking for more?

The Serverless landing page has lots of information. The resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials. Check it out!

Investigating spikes in AWS Lambda function concurrency

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/investigating-spikes-in-aws-lambda-function-concurrency/

This post is courtesy of Ian Carlson, Principal Solutions Architect – AWS

As mentioned in an earlier post, a key benefit of serverless applications is the ease with which they can scale to meet traffic demands or requests. AWS Lambda is at the core of this platform.

Although this flexibility is hugely beneficial for our customers, sometimes an errant bit of code or upstream scaling can lead to spikes in concurrency. Unwanted usage can increase costs, pressure downstream systems, and throttle other functions in the account. Administrators new to serverless technologies need to leverage different metrics to manage their environment. In this blog, I walk through a sample scenario where I’m getting API errors. I trace those errors up through Amazon CloudWatch logs and leverage CloudWatch Metrics to understand what is happening in my environment. Finally, I show you how to set up alerts to reduce throttling surprises.

In my environment, I have a few Lambda functions configured. The first function is from Chris Munns’ concurrency blog, called concurrencyblog. I set up that function to execute behind an API hosted on Amazon API Gateway. In the background, I’m simulating activity with another function. This exercise uses the services in the following image.

To start, I make an API Gateway call to invoke the concurrencyblog function.

curl -i -XGET https://XXXXXXXX.execute-api.us-east-2.amazonaws.com/prod/concurrencyblog

I get the following output.

HTTP/2 502
content-type: application/json
content-length: 36
date: Wed, 01 Aug 2018 14:46:03 GMT
x-amzn-requestid: 9d5eca92-9599-11e8-bb13-dddafe0dbaa3
x-amz-apigw-id: K8ocOG_iiYcFa_Q=
x-cache: Error from cloudfront
via: 1.1 cb9e55028a8e7365209ebc8f2737b69b.cloudfront.net (CloudFront)
x-amz-cf-id: fk-gvFwSan8hzBtrC1hC_V5idaSDAKL9EwKDq205iN2RgQjnmIURYg==
 
{"message": "Internal server error"}

Hmmm, a 502 error. That shouldn’t happen. I don’t know the cause, but I configured logging for my API, so I can search for the requestid in CloudWatch logs. I navigate to the logs, select Search Log Group, and enter the x-amzn-requestid, enclosed in double quotes.

My API is invoking a Lambda function, and it’s getting an error from Lambda called ConcurrentInvocationLimitExceeded. This means my Lambda function was throttled. If I navigate to the function in the Lambda console, I get a similar message at the top.

If I scroll down, I observe that I don’t have throttling configured, so this must be coming from a different function or functions.

Using CloudWatch forensics

Lambda functions report lots of metrics in CloudWatch to tell you how they’re doing. Three of the metrics that I investigate here are Invocations, Duration, and ConcurrentExecutions. Invocations is incremented any time a Lambda function executes and is recorded for all functions and by individual functions. Duration is recorded to tell you how long Lambda functions take to execute. ConcurrentExecutions reports how many Lambda functions are executing at the same time and is emitted for the entire account and for functions that have a concurrency reservation set. Lambda emits CloudWatch metrics whenever there is Lambda activity in the account.

Lambda reports concurrency metrics for my account under AWS/Lambda/ConcurrentExecutions. To begin, I navigate to the Metrics pane of the CloudWatch console and choose Lambda on the All metrics tab.

Next, I choose Across All Functions.

Then I choose ConcurrentExecutions.

I choose the Graphed metrics tab and change Statistic from Average to Maximum, which shows me the peak concurrent executions in my account. For Period, I recommend reviewing 1-minute period data over the previous 2 weeks. After 2 weeks, the precision is aggregated over 5 minutes, which is a long time for Lambda!
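
The same query works via the CloudWatch API. Here is a minimal boto3 sketch of the console settings above; the 24-hour time window is a placeholder you would widen to match the 2-week review period:

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client('cloudwatch')

# Maximum concurrent executions per minute over the last 24 hours
stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/Lambda',
    MetricName='ConcurrentExecutions',
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=60,
    Statistics=['Maximum'],
)
for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Maximum'])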

In my test account, I find a concurrency spike at 14:46 UTC with 1000 concurrent executions.

Next, I want to find the culprit for this spike. I go back to the All metrics tab, but this time I choose By Function Name and enter Invocations in the search field. Then I select all of the functions listed.

The following image shows that BadLambdaConcurrency is the culprit.

It seems odd that there are only 331 invocations during that sample in the graph, so let’s dig in. Using the same method as before, I add the Duration metric for BadLambdaConcurrency. On average, this function is taking 30 seconds to complete, as shown in the following image.

Because there were 669 invocations in the previous minute and the function takes, on average, 30 seconds to complete, the next minute's 331 invocations drive the concurrency up to 1000. Lambda functions can execute very quickly, so exact precision can be challenging, even over a 1-minute time period. However, this gives you a reasonable indication of the troublesome function in the account.
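
To make the arithmetic explicit, here is the estimate as a few lines of Python, using the sample numbers above:

# Invocations started in the previous minute run for ~30 seconds on average,
# so they are still in flight when the current minute's invocations start.
previous_minute_invocations = 669
current_minute_invocations = 331
avg_duration_seconds = 30  # >= the 1-minute sample period, so calls overlap

peak_concurrency_estimate = previous_minute_invocations + current_minute_invocations
print(peak_concurrency_estimate)  # ~1000, matching the observed spike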

Automating this process

Investigating via the Lambda and CloudWatch console works fine if you have a few functions, but when you have tens or hundreds it can be pretty time consuming. Fortunately, CloudWatch metrics are also available via API. To speed up this process, I’ve written a script in Python that goes back over the last 7 days of metrics, finds the minute with the highest concurrency, and outputs the Invocations and Average Duration for all functions for the six minutes prior to that spike. You can download the script here. To execute it, make sure you have rights to CloudWatch metrics, or are running from an EC2 instance that has those rights. Then you can execute:

sudo yum install python3
pip install boto3 --user
curl https://raw.githubusercontent.com/aws-samples/aws-lambda-concurrency-hunt/master/lambda-con-hunt.py -o lambda-con-hunt.py
python3 lambda-con-hunt.py

or, to output it to a file:

python3 lambda-con-hunt.py > output.csv

You should get output similar to the following image.

You can import this data into a spreadsheet program and sort it, or you can confirm visually that BadLambdaConcurrency is driving the concurrency.

Getting to the root cause

Now I want to understand what is driving that spike in Invocations for BadLambdaConcurrency, so I go to the Lambda console. It shows that API Gateway is triggering this Lambda function.

I choose API Gateway and scroll down to discover which API is triggering. Choosing the name (ConcurrencyTest) takes me to that API.

It’s the same API that I set up for concurrencyblog, but a different method. Because I already set up logging for this API, I can search the log group to check for interesting behavior. Perusing the logs, I check the method request headers for any insights as to who is calling this API. In real life I wouldn’t leave an API open without authentication, so I’ll have to do some guessing.

(a915ba7f-9591-11e8-8f19-a7737a1fb2d7) Method request headers: {CloudFront-Viewer-Country=US, CloudFront-Forwarded-Proto=https, CloudFront-Is-Tablet-Viewer=false, CloudFront-Is-Mobile-Viewer=false, User-Agent=hey/0.0.1, X-Forwarded-Proto=https, CloudFront-Is-SmartTV-Viewer=false, Host=xxxxxxxxx.execute-api.us-east-2.amazonaws.com, Accept-Encoding=gzip, X-Forwarded-Port=443, X-Amzn-Trace-Id=Root=1-5b61ba53-1958c6e2022ef9df9aac7bdb, Via=1.1 a0286f15cb377e35ea96015406919392.cloudfront.net (CloudFront), X-Amz-Cf-Id=O0GQ_V_eWRe5KydZNc46-aPSz7dfI19bmyhWCsbTBMoety73q0AtZA==, X-Forwarded-For=f.f.f.f, a.a.a.a, CloudFront-Is-Desktop-Viewer=true, Content-Type=text/html}

The method request headers have a user agent called hey. Hey, that’s a load testing utility! I bet that someone is load-testing this API, but it shouldn’t be allowed to consume all of my resources.

Applying rate and concurrency limiting

To keep this from happening, I place a throttle on the API method. In the API Gateway console, in the APIs navigation pane, I choose Stages, choose prod, choose the Settings tab, and select the Enable throttling check box. Then I set a rate of 20 requests per second. It doesn’t sound like much, but with an average function duration of 30 seconds, 20 requests per second can use 600 concurrent Lambda executions.

I can also set a concurrency reservation on the function itself, as Chris pointed out in his blog.
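
Both mitigations can also be applied from a script. Here is a minimal boto3 sketch; the API ID, burst limit, and reservation cap are placeholder assumptions:

import boto3

# Throttle every method on the prod stage to 20 req/s (burst value is a placeholder)
boto3.client('apigateway').update_stage(
    restApiId='xxxxxxxxxx',
    stageName='prod',
    patchOperations=[
        {'op': 'replace', 'path': '/*/*/throttling/rateLimit', 'value': '20'},
        {'op': 'replace', 'path': '/*/*/throttling/burstLimit', 'value': '40'},
    ],
)

# Cap the suspect function with a concurrency reservation
boto3.client('lambda').put_function_concurrency(
    FunctionName='BadLambdaConcurrency',
    ReservedConcurrentExecutions=100,  # placeholder cap; 0 acts as a kill switch
)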

If this is a bad function running amok or an emergency, I can throttle it directly, sometimes referred to as flipping a kill switch. I can do that quickly by choosing Throttle on the Lambda console.

I recommend throttling to zero only in emergency situations.

Investigating the duration

The other and larger problem is this function is taking 30 seconds to execute. That is a long time for an API, and the API Gateway integration timeout is 29 seconds. I wonder what is making it time out, so I check the traces in AWS X-Ray.

It initializes quickly enough, and I don’t find any downstream processes called. This function is a simple one, and the code is available from the Lambda console window. There I find my timeout culprit, a 30-second sleep call.

Not sure how that got through testing!

Setting up ongoing monitoring and alerting

To ensure that I’m not surprised again, let’s create a CloudWatch alert. In the CloudWatch console’s navigation pane, I choose Alarms and then choose Create Alarm.

When prompted, I choose Lambda and the ConcurrentExecutions metric across all functions, as shown in the following image.

Under Alarm Threshold, I give the alarm a name and description and enter 800 as the threshold, as shown in the following image. I treat missing data as good because Lambda won’t publish a metric if there is no activity. I make sure that my period is 1 minute and use Maximum as the statistic. I want to be alerted only if this happens for any 2 minutes out of a 5-minute period. Finally, I can set up an Amazon SNS notification to alert me via email or text if this threshold is reached. This enables me to troubleshoot or request a limit increase for my account. Individual functions should be able to handle a throttling event through client-side retry and exponential backoff, but it’s still something that I want to know about.
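
The alarm can be codified too. Here is a minimal boto3 sketch of the settings above; the alarm name and SNS topic ARN are placeholders:

import boto3

boto3.client('cloudwatch').put_metric_alarm(
    AlarmName='LambdaConcurrentExecutionsHigh',
    AlarmDescription='Account-wide Lambda concurrency is approaching the limit',
    Namespace='AWS/Lambda',
    MetricName='ConcurrentExecutions',
    Statistic='Maximum',
    Period=60,                        # 1-minute periods
    EvaluationPeriods=5,
    DatapointsToAlarm=2,              # alarm on any 2 of 5 minutes
    Threshold=800,
    ComparisonOperator='GreaterThanOrEqualToThreshold',
    TreatMissingData='notBreaching',  # no data simply means no activity
    AlarmActions=['arn:aws:sns:us-east-2:123456789012:lambda-alerts'],
)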

Conclusion

In this blog, I walked through a method to investigate concurrency issues with Lambda, remediate those issues, and set up alerting. Managing concurrency is going to be new for a lot of people. As you deploy more applications, it’s especially important to segment them, monitor them, and understand how they are reporting their health. I hope you enjoyed this blog and start monitoring your functions today!