Tag Archives: serverless

Amazon Cognito for Alexa Skills User Management

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/amazon-cognito-for-alexa-skills-user-management/

This post is courtesy of Tom Moore, Solutions Architect – AWS

If your Alexa skill is a general information skill, such as a random facts skill or a news feed, you can provide information to any user who has an Alexa-enabled device with your skill turned on. However, sometimes you need to know who the user is before you can provide information to them. You can fulfill this user management scenario with Amazon Cognito user pools.

This blog post will show you how to set up an Amazon Cognito user pool and how to use it to perform authentication for both your Alexa skill and a webpage.

Getting started

To complete the steps in this blog post, you will need the following:

  • An AWS account
  • An Amazon developer account
  • A basic understanding of Amazon Alexa skill development

This example will use a sample Alexa skill deployed from one of the available skill templates. To fully develop your own Alexa skill, you will need a professional code editor or IDE, as well as knowledge of Alexa skill development. It is beyond the scope of this blog post to cover these details.

Before you begin, consider the set of services that you will use and their availability. To implement this solution, you will use Amazon Cognito for user accounts and AWS Lambda for the Alexa function.

Today, AWS Lambda supports calls from Alexa in the following regions:

  • Asia Pacific (Tokyo)
  • EU (Ireland)
  • US East (N. Virginia)
  • US West (Oregon)

These four regions also support Amazon Cognito. While it is possible to use Amazon Cognito in a different region than your Lambda function, I recommend choosing one of the four listed regions to deploy your entire solution for simplicity.

Setting up Amazon Cognito

To set up Amazon Cognito, you’ll need to create a user pool, create an Alexa client, and set up your authentication UI.

Create your Amazon Cognito user pool

  1. Sign in to the Amazon Cognito console. You might be prompted for your AWS credentials.
  2. From the console navigation bar, choose one of the four regions listed above. For the purposes of this blog, I’ll use US East (N. Virginia).
  3. Choose Manage User Pools.
  4. Choose Create a user pool, and provide a name for your user pool. Remember that user pools may be used across multiple applications and platforms including web, mobile, and Alexa. The pool name does not have to be globally unique, but it should be unique in your account so you can easily find the pool when needed. I have named my user pool “Alexa Demo.”
  5. After you name your pool, choose Step through settings. You can accept the defaults for the remaining steps to set up your user pool, with the following exceptions:
    • Choose email address or phone number as the sign-in method, and then choose Allow both email addresses and phone numbers.
    • Enable Multi-Factor Authentication (MFA). You can use Amazon Cognito to enforce multi-factor authentication for your users. Amazon Cognito also allows you to validate email addresses and phone numbers when the user is created. The verification process for phone numbers requires that Amazon Cognito can access Amazon Simple Notification Service (Amazon SNS) to dispatch the SMS message for phone number verification. This access is granted through an AWS Identity and Access Management (IAM) service role, which the Amazon Cognito setup process can automatically create for you.
  6. To set up Multi-Factor Authentication:
    • Under Do you want to enable Multi-Factor Authentication (MFA), choose Optional.
    • Choose SMS text message as a second authentication factor, and then choose the options you want to be verified.
    • Choose Create Role, and then choose Next Step.
    • For more information, see Adding Multi-Factor Authentication (MFA) to a User Pool. Because the verification process sends SMS messages, some costs will be incurred on your account. If you have not already done so, you will need to request a spending increase on your account to accommodate those charges. To learn more about costs for SMS messages, see SMS Text Messages MFA.
  7. Review the selections that you have made. If you are happy with the settings that you have selected, choose Create Pool.

Create the Alexa client

By completing the steps above, you will have created an Amazon Cognito user pool. The next task in setting up account linking is to create the Alexa client definition inside the Amazon Cognito user pool.

  1. From the Amazon Cognito console, choose Manage User Pools. Select the user pool you just created.
  2. From the General settings menu, choose App Clients to set up applications that will connect to your Amazon Cognito user pool.
  3. Choose Add an App Client, and provide the App client name. In this example, I have chosen “Alexa.” Leave the rest of the options set to default and choose Create App Client to generate the client record for Alexa to use. This process creates an app client ID and a secret.
    To learn more, see Configuring a User Pool App Client.

Set up your Authentication UI

Amazon Cognito can set up and manage the Authentication UI for your application so that you don’t have to host your own sign-in and sign-up UI for your Alexa application.

  1. From the App integration menu, choose Domain name.
  2. For this example, I will use an Amazon Cognito domain. Provide a subdomain name and choose Check Availability. If the option is available, choose Save Changes.

Setting up the Alexa skill

Now you can create the Alexa skill and link it back to the Amazon Cognito user pool that you created.

For step-by-step instructions for creating a new Alexa skill, see Create a New Skill in the Alexa documentation. Follow those instructions, with the following specific selections:

Under Choose a model to add to your skill, keep the default option of Custom.


Under Choose a method to host your skill’s back end resources, keep the default selection of Self Hosted.

For a custom skill, you can choose a predefined skill template for the back end code for your skill. For this example, I’ll use a Fact Skill template as a starting point. The skill template prepopulates the Lambda function that your Alexa skill uses.

After you create your sample skill, you’ll need to complete a few basic operations:

  • Set the invocation name of the skill
  • Prepare a Lambda function to handle the skill invocation
  • Connect the Alexa skill to your Lambda function
  • Test your skill

A full description of these steps is beyond the scope of this blog post. To learn more, see Manage Skills in the Developer Console. Once you have completed these steps, return to this post to continue linking your skill with Amazon Cognito.

Linking Alexa with Amazon Cognito

To link your Alexa skill with Amazon Cognito user pools, you’ll need to update both the Amazon Cognito and Alexa interfaces with data from the other service. I recommend that you have both interfaces open in different tabs of your web browser to make it easy to move back and forth between the two services.

  1. In Amazon Cognito, open the user pool that you created. Under General Settings, choose App Clients. Next, choose Show Details in the section for the Alexa client that you set up earlier. Make a note of the App client ID and the App client secret. These will be needed to configure Alexa skill account linking.
  2. Switch over to your Alexa developer account and open the skill that you are linking to Amazon Cognito. Choose Account Linking.
  3. Select the option to allow users to link accounts. Leave the default option for an Auth Code Grant selected. The Authorization URI will be made up of the following template (see the sketch after this list for how these values fit together):

    https://{Sub-Domain}.auth.{Region}.amazoncognito.com/oauth2/authorize?response_type=code&redirect_uri=https://pitangui.amazon.com/api/skill/link/{Vendor ID}

  4. Replace {Sub-Domain} with the subdomain that you selected when you set up your Amazon Cognito user pool. In my example, it was “mooretom-alexademo”.
  5. Replace {Vendor ID} with your specific vendor ID for your Alexa development account. The easiest way to find this is to scroll down to the bottom of the account linking page. Your Vendor ID will be the final piece of each Redirect URL.
  6. Replace {Region} with the name of the region you are deploying your resources into. In my example, it was us-east-1.
  7. The Access Token URI will be made up of the following template:
    https://{Sub-Domain}.auth.{Region}.amazoncognito.com/oauth2/token

  8. Enter the app client ID and the app client secret that you noted above, or return to the Amazon Cognito tab to copy and paste them.
  9. Choose Save at the top of the page. Make a note of the redirect URLs at the bottom of the page, as these will be required to finish the Amazon Cognito configuration in the next step.
  10. Switch back to your Amazon Cognito user pool. Under App Integration, choose App Client Settings. You will see the integration settings for the Alexa client in the details panel on the right.
  11. Under Enabled Identity Providers, choose Cognito User Pool.
  12. Under Callback URL(s), enter the three callback URLs from your Alexa skill page. For example, here are all three URLs separated by commas:
    https://alexa.amazon.co.jp/api/skill/link/{Vendor ID},
    https://layla.amazon.com/api/skill/link/{Vendor ID},
    https://pitangui.amazon.com/api/skill/link/{Vendor ID}

    The Sign Out URL will follow this template:

    https://{Sub-Domain}.auth.{Region}.amazoncognito.com/logout?response_type=code

  13. Under Allowed OAuth Flows, select Authorization code grant.
  14. Under Allowed OAuth Scopes, select phone, email, and openid.
  15. Choose Save Changes.
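
To see how these values fit together, here is a small Node.js sketch that assembles the same URIs used in steps 3 through 12. The sub-domain and region match my example above; the vendor ID is a placeholder that you would replace with your own.

// Hypothetical values -- replace with your own sub-domain, region, and vendor ID.
const subDomain = 'mooretom-alexademo';
const region = 'us-east-1';
const vendorId = 'YOUR_VENDOR_ID';

const base = `https://${subDomain}.auth.${region}.amazoncognito.com`;

const authorizationUri =
  `${base}/oauth2/authorize?response_type=code` +
  `&redirect_uri=https://pitangui.amazon.com/api/skill/link/${vendorId}`;
const accessTokenUri = `${base}/oauth2/token`;
const callbackUrls = [
  `https://alexa.amazon.co.jp/api/skill/link/${vendorId}`,
  `https://layla.amazon.com/api/skill/link/${vendorId}`,
  `https://pitangui.amazon.com/api/skill/link/${vendorId}`,
];
const signOutUrl = `${base}/logout?response_type=code`;

console.log({ authorizationUri, accessTokenUri, callbackUrls, signOutUrl });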

Testing your Alexa skill

After you have linked Alexa with Amazon Cognito, return to the Alexa developer console and build your model. Then log into the Alexa application on your mobile phone and enable the skill. When the skill is enabled, you will be able to configure access and create a new user with phone number authentication included automatically.

After going through the account creation steps, you can return to your Amazon Cognito user pool and see the new user you created.
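
With account linking in place, Alexa sends the linked account’s access token with each request to your skill. The following is a minimal Node.js sketch, not part of the skill template, of how the skill’s Lambda function could call the user pool’s /oauth2/userInfo endpoint to look up the Amazon Cognito user. The COGNITO_DOMAIN environment variable is an assumption and would hold your user pool domain, for example mooretom-alexademo.auth.us-east-1.amazoncognito.com.

const https = require('https');

// Assumption: set this environment variable on the function.
const COGNITO_DOMAIN = process.env.COGNITO_DOMAIN;

// Call the Amazon Cognito hosted UI's userInfo endpoint with the linked access token.
function getLinkedUser(accessToken) {
  return new Promise((resolve, reject) => {
    https.get({
      host: COGNITO_DOMAIN,
      path: '/oauth2/userInfo',
      headers: { Authorization: `Bearer ${accessToken}` },
    }, (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => resolve(JSON.parse(body))); // { sub, email, phone_number, ... }
    }).on('error', reject);
  });
}

exports.handler = async (event) => {
  const accessToken = event.context.System.user.accessToken;
  if (!accessToken) {
    // The user has not linked their account yet; prompt them to do so in the Alexa app.
    return;
  }
  const profile = await getLinkedUser(accessToken);
  console.log('Amazon Cognito user ID (sub):', profile.sub);
  // Use profile.sub (or profile.email) to personalize the skill's response.
};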


Conclusion

By completing the steps in this post, you have leveraged Amazon Cognito as a source of authentication for your Amazon Alexa skill. Amazon Cognito provides user authentication as well as sign-in and sign-up functionality without requiring you to write any code. You can now use the Amazon Cognito user ID to personalize the user experience for your Alexa skill. You can also use Amazon Cognito to authenticate your users to a companion application or website.

Building serverless apps with components from the AWS Serverless Application Repository

Post Syndicated from Aleksandar Simovic original https://aws.amazon.com/blogs/aws/building-serverless-apps-with-components-from-the-aws-serverless-application-repository/

Guest post by AWS Serverless Hero Aleksandar Simovic. Aleksandar is a Senior Software Engineer at Science Exchange and co-author of “Serverless Applications with Node.js” with Slobodan Stojanovic, published by Manning Publications. He also writes on Medium on both business and technical aspects of serverless.

Many of you have built a user login or an authorization service from scratch a dozen times. And you’ve probably built another dozen services to process payments and another dozen to export PDFs. We’ve all done it, and we’ve often all done it redundantly. Using the AWS Serverless Application Repository, you can now spend more of your time and energy developing business logic to deliver the features that matter to customers, faster.

What is the AWS Serverless Application Repository?

The AWS Serverless Application Repository allows developers to deploy, publish, and share common serverless components among their teams and organizations. Its public library contains community-built, open-source, serverless components that are instantly searchable and deployable with customizable parameters and predefined licensing. They are built and published using the AWS Serverless Application Model (AWS SAM), an infrastructure-as-code, YAML-based specification for templating AWS resources.

How to use AWS Serverless Application Repository in production

I wanted to build an application that enables customers to select a product and pay for it. Sounds like a substantial effort, right? Using AWS Serverless Application Repository, it didn’t actually take me much time.

Broadly speaking, I built:

  • A product page with a Buy button, automatically tied to the Stripe Checkout SDK. When a customer chooses Buy, the page displays the Stripe Checkout payment form.
  • A Stripe payment service with an API endpoint that accepts a callback from Stripe, charges the customer, and sends a notification for successful transactions.

For this post, I created a pre-built sample static page that displays the product details and has the Stripe Checkout JavaScript on the page.

Even with the pre-built page, integrating the payment service is still work. But many other developers have built a payment application at least once, so why should I spend time building identical features? This is where AWS Serverless Application Repository came in handy.

Find and deploy a component

First, I searched for an existing component in the AWS Serverless Application Repository public library. I typed “stripe” and opted in to see applications that created custom IAM roles or resource policies. I saw the following results:

I selected the application titled api-lambda-stripe-charge and chose Deploy on the component’s detail page.

Before I deployed any component, I inspected it to make sure it was safe and production-ready.

Evaluate a component

The recommended approach for evaluating an AWS Serverless Application Repository component is a four-step process:

  1. Check component permissions.
  2. Inspect the component implementation.
  3. Deploy and run the component in a restricted environment.
  4. Monitor the component’s behavior and cost before using in production.

This might appear to negate the quick delivery benefits of AWS Serverless Application Repository, but in reality, you only verify each component one time. Then you can easily reuse and share the component throughout your company.

Here’s how to apply this approach while adding the Stripe component.

1. Check component permissions

There are two types of components: public and private. Public components are open source, while private components do not have to be. In this case, the Stripe component is public. I reviewed the code to make sure that it doesn’t give unnecessary permissions that could potentially compromise security.

In this case, the Stripe component is on GitHub. On the component page, I opened the template.yaml file. There was only one AWS Lambda function there, so I found the Policies attribute and reviewed the policies that it uses.

  CreateStripeCharge:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Timeout: 10
      Policies:
        - SNSCrudPolicy:
            TopicName: !GetAtt SNSTopic.TopicName
        - Statement:
            - Effect: Allow
              Action:
                - ssm:GetParameters
              Resource: !Sub arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/${SSMParameterPrefix}/*

The component was using a predefined AWS SAM policy template and a custom one. These predefined policy templates are sets of AWS permissions that are verified and recommended by the AWS security team. Using these policies to specify resource permissions is one of the recommended practices for serverless components on AWS Serverless Application Repository. The custom IAM statement allows the function to retrieve AWS Systems Manager parameters, which is the best practice for storing secure values, such as the Stripe secret key.
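
As a side note, here is a minimal sketch, using the aws-sdk for Node.js, of how a function with that policy could read the secret key at run time. The parameter name assumes the component’s default lambda-stripe-charge prefix.

const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

// Fetch and decrypt the Stripe secret key stored as a SecureString parameter.
async function getStripeSecretKey() {
  const result = await ssm.getParameters({
    Names: ['lambda-stripe-charge/stripe-secret-key'], // assumption: default prefix
    WithDecryption: true,
  }).promise();
  return result.Parameters[0].Value;
}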

2. Inspect the component implementation

I wanted to ensure that the component’s main business logic did only what it was meant to do, which was to create a Stripe charge. It’s also important to look out for unknown third-party HTTP calls to prevent leaks. Then I reviewed this project’s dependencies. For this inspection, I used PureSec, but tools like those offered by Protego are another option.

The main business logic was in the charge-customer.js file. It revealed straightforward logic to simply invoke the Stripe create charge and then publish a notification with the created charge. I saw this reflected in the following code:

return paymentProcessor.createCharge(token, amount, currency, description)
    .then(chargeResponse => {
      createdCharge = chargeResponse;
      return pubsub.publish(createdCharge, TOPIC_ARN);
    })
    .then(() => createdCharge)
    .catch((err) => {
      console.log(err);
      throw err;
    });

The paymentProcessor and pubsub values are adapters for the communication with Stripe and Amazon SNS, respectively. I always like to look and see how they work.
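
For readers who haven’t seen adapters like these before, here is a minimal sketch of what they typically look like; the component’s own implementations may differ in detail, and the STRIPE_SECRET_KEY environment variable is an assumption.

const AWS = require('aws-sdk');
const sns = new AWS.SNS();
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY); // assumption: key already loaded

// pubsub: thin wrapper around Amazon SNS.
const pubsub = {
  publish: (message, topicArn) =>
    sns.publish({ Message: JSON.stringify(message), TopicArn: topicArn }).promise(),
};

// paymentProcessor: thin wrapper around the Stripe SDK's charge API.
const paymentProcessor = {
  createCharge: (token, amount, currency, description) =>
    stripe.charges.create({ source: token, amount, currency, description }),
};

module.exports = { pubsub, paymentProcessor };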

3. Deploy and run the component in a restricted environment

Maintaining a separate, restricted AWS account in which to test your serverless applications is a best practice for serverless development. I always ensure that my test account has strict AWS Billing and Amazon CloudWatch alarms in place.
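
If you prefer to script that guardrail rather than click through the console, a sketch like the following works; the threshold and SNS topic ARN are placeholders, and billing metrics are published only in us-east-1 once billing alerts are enabled on the account.

const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' });

// Alarm when the account's estimated monthly charges exceed 10 USD.
cloudwatch.putMetricAlarm({
  AlarmName: 'test-account-monthly-spend',
  Namespace: 'AWS/Billing',
  MetricName: 'EstimatedCharges',
  Dimensions: [{ Name: 'Currency', Value: 'USD' }],
  Statistic: 'Maximum',
  Period: 21600, // six hours
  EvaluationPeriods: 1,
  Threshold: 10,
  ComparisonOperator: 'GreaterThanThreshold',
  AlarmActions: ['arn:aws:sns:us-east-1:123456789012:billing-alerts'], // placeholder topic
}).promise()
  .then(() => console.log('Billing alarm in place'));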

I signed in to this separate account, opened the Stripe component page, and manually deployed it. After deployment, I needed to verify how it ran. Because this component only has one Lambda function, I looked for that function in the Lambda console and opened its details page so that I could verify the code.

4. Monitor behavior and cost before using a component in production

When everything works as expected in my test account, I usually add monitoring and performance tools to my component to help diagnose any incidents and evaluate component performance. I often use Epsagon and Lumigo for this, although adding those steps would have made this post too long.

I also wanted to track the component’s cost. To do this, I added a strict Billing alarm that tracked the component cost and the cost of each AWS resource within it.

After the component passed these four tests, I was ready to deploy it by adding it to my existing product-selection application.

Deploy the component to an existing application

To add my Stripe component into my existing application, I re-opened the component Review, Configure, and Deploy page and chose Copy as SAM Resource. That copied the necessary template code to my clipboard. I then added it to my existing serverless application by pasting it into my existing AWS SAM template, under Resources. It looked like the following:

Resources:
  ShowProduct:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Timeout: 10
      Events:
        Api:
          Type: Api
          Properties:
            Path: /product/{productId}
            Method: GET
  apilambdastripecharge:
    Type: AWS::Serverless::Application
    Properties:
      Location:
        ApplicationId: arn:aws:serverlessrepo:us-east-1:375983427419:applications/api-lambda-stripe-charge
        SemanticVersion: 3.0.0
      Parameters:
        # (Optional) Cross-origin resource sharing (CORS) Origin. You can specify a single origin, all origins with "*", or leave it empty and no CORS is applied.
        CorsOrigin: YOUR_VALUE
        # This component assumes that the Stripe secret key needed to use the Stripe Charge API is stored as SecureStrings in Parameter Store under the prefix defined by this parameter. See the component README.
        # SSMParameterPrefix: lambda-stripe-charge # Uncomment to override the default value
Outputs:
  ApiUrl:
    Value: !Sub https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Stage/product/123
    Description: The URL of the sample API Gateway

I copied and pasted an AWS::Serverless::Application AWS SAM resource, which points to the component by ApplicationId and its SemanticVersion. Then, I defined the component’s parameters.

  • I set CorsOrigin to “*” for demonstration purposes.
  • I didn’t have to set the SSMParameterPrefix value, as it picks up a default value. But I did set up my Stripe secret key in the Systems Manager Parameter Store, by running the following command:

aws ssm put-parameter --name lambda-stripe-charge/stripe-secret-key --value <YOUR_STRIPE_SECRET_KEY> --type SecureString --overwrite

In addition to parameters, components also contain outputs. An output is an externalized component resource or value that you can use with other applications or components. For example, the output for the api-lambda-stripe-charge component is SNSTopic, an Amazon SNS topic. This enables me to attach another component or business logic to get a notification when a successful payment occurs. For example, a lambda-send-email-ses component that sends an email upon successful payment could be attached, too.
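
To make that concrete, here is a minimal sketch of the kind of consumer you could subscribe to the SNSTopic output. It assumes the published message is the Stripe charge object shown earlier; the email addresses are placeholders, and a published component such as lambda-send-email-ses would be more complete.

const AWS = require('aws-sdk');
const ses = new AWS.SES();

exports.handler = async (event) => {
  // Each SNS message arrives as a single record.
  const charge = JSON.parse(event.Records[0].Sns.Message);

  await ses.sendEmail({
    Source: 'payments@example.com', // placeholder
    Destination: { ToAddresses: ['owner@example.com'] }, // placeholder
    Message: {
      Subject: { Data: 'Payment received' },
      Body: {
        Text: { Data: `Charge ${charge.id} for ${charge.amount} ${charge.currency} succeeded.` },
      },
    },
  }).promise();
};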

To finish, I ran the following two commands:

aws cloudformation package --template-file template.yaml --output-template-file output.yaml --s3-bucket YOUR_BUCKET_NAME

aws cloudformation deploy --template-file output.yaml --stack-name product-show-n-pay --capabilities CAPABILITY_IAM

For the second command, you could add parameter overrides as needed.

My product-selection and payment application was successfully deployed!

Summary

AWS Serverless Application Repository enables me to share and reuse common components, services, and applications so that I can really focus on building core business value.

In a few steps, I created an application that enables customers to select a product and pay for it. It took a matter of minutes, not hours or days! You can see that it doesn’t take long to cautiously analyze and check a component. That component can now be shared with other teams throughout my company so that they can eliminate their redundancies, too.

Now you’re ready to use AWS Serverless Application Repository to accelerate the way that your teams develop products, deliver features, and build and share production-ready applications.

How to visualize Amazon GuardDuty findings: serverless edition

Post Syndicated from Ben Romano original https://aws.amazon.com/blogs/security/how-to-visualize-amazon-guardduty-findings-serverless-edition/

Note: This blog provides an alternate solution to Visualizing Amazon GuardDuty Findings, in which the authors describe how to build an Amazon Elasticsearch Service-powered Kibana dashboard to ingest and visualize Amazon GuardDuty findings.

Amazon GuardDuty is a managed threat detection service powered by machine learning that can monitor your AWS environment with just a few clicks. GuardDuty can identify threats such as unusual API calls or potentially unauthorized users attempting to access your servers. Many customers also like to visualize their findings in order to generate additional meaningful insights. For example, you might track resources affected by security threats to see how they evolve over time.

In this post, we provide a solution to ingest, process, and visualize your GuardDuty finding logs in a completely serverless fashion. Serverless applications automatically run and scale in response to events you define, rather than requiring you to provision, scale, and manage servers. Our solution covers how to build a pipeline that ingests findings into Amazon Simple Storage Service (Amazon S3), transforms their nested JSON structure into tabular form using Amazon Athena and AWS Glue, and creates visualizations using Amazon QuickSight. We aim to provide both an easy-to-implement and cost-effective solution for consuming and analyzing your GuardDuty findings, and to more generally showcase a repeatable example for processing and visualizing many types of complex JSON logs.

Many customers already maintain centralized logging solutions using Amazon Elasticsearch Service (Amazon ES). If you want to incorporate GuardDuty findings with an existing solution, we recommend referencing this blog post to get started. If you don’t have an existing solution or previous experience with Amazon ES, if you prefer to use serverless technologies, or if you’re familiar with more traditional business intelligence tools, read on!

Before you get started

To follow along with this post, you’ll need to enable GuardDuty in order to start generating findings. See Setting Up Amazon GuardDuty for details if you haven’t already done so. Once enabled, GuardDuty will automatically generate findings as events occur. If you have public-facing compute resources in the same region in which you’ve enabled GuardDuty, you may soon find that they are being scanned quite often. All the more reason to continue reading!

You’ll also need Amazon QuickSight enabled in your account for the visualization sections of this post. You can find instructions in Setting Up Amazon QuickSight.

Architecture from end to end

 

Figure 1: Complete architecture from findings to visualization

Figure 1 highlights the solution architecture, from finding generation all the way through final visualization. The steps are as follows:

  1. Deliver GuardDuty findings to Amazon CloudWatch Events
  2. Push GuardDuty Events to S3 using Amazon Kinesis Data Firehose
  3. Use AWS Lambda to reorganize S3 folder structure
  4. Catalog your GuardDuty findings using AWS Glue
  5. Configure Views with Amazon Athena
  6. Build a GuardDuty findings dashboard in Amazon QuickSight

Below, we’ve included an AWS CloudFormation template to launch a complete ingest pipeline (Steps 1-4) so that we can focus this post on the steps dedicated to building the actual visualizations (Steps 5-6). We cover steps 1-4 briefly in the next section to provide context, and we provide links to the pertinent pages in the documentation for those of you interested in building your own pipeline.
 
Select this image to open a link that starts building the CloudFormation stack

Ingest (Steps 1-4): Get Amazon GuardDuty findings into Amazon S3 and AWS Glue Data Catalog

 

Figure 2: In this section, we'll cover the services highlighted in blue

Step 1: Deliver GuardDuty findings to Amazon CloudWatch Events

GuardDuty integrates with Amazon CloudWatch Events and can deliver findings to it. To perform this step manually, follow the instructions in Creating a CloudWatch Events Rule and Target for GuardDuty.

Step 2: Push GuardDuty events to Amazon S3 using Kinesis Data Firehose

Amazon CloudWatch Events can write to a Kinesis Data Firehose delivery stream to store your GuardDuty events in S3, where you can use AWS Lambda, AWS Glue, and Amazon Athena to build the queries you’ll need to visualize the data. You can create your own delivery stream by following the instructions in Creating a Kinesis Data Firehose Delivery Stream and then adding it as a target for CloudWatch Events.

Step 3: Use AWS Lambda to reorganize Amazon S3 folder structure

Kinesis Data Firehose will automatically create a datetime-based file hierarchy to organize the findings as they come in. Due to the variability of the GuardDuty finding types, we recommend reorganizing the file hierarchy with a folder for each finding type, with separate datetime subfolders for each. This will make it easier to target findings that you want to focus on in your visualization. The provided AWS CloudFormation template utilizes an AWS Lambda function to rewrite the files in a new hierarchy as new files are written to S3. You can use the code provided in it along with Using AWS Lambda with S3 to trigger your own function that reorganizes the data. Once the Lambda function has run, the S3 bucket structure should look similar to the structure we show in figure 3.
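
The template’s function is the reference implementation; the following is only a minimal Node.js sketch of the idea, assuming one finding per object and an S3 trigger filtered so that the copied objects don’t re-invoke the function.

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

    // Read the finding and derive a folder name from its type,
    // e.g. "Recon:EC2/PortProbeUnprotectedPort" -> "Recon_EC2_PortProbeUnprotectedPort".
    const body = (await s3.getObject({ Bucket: bucket, Key: key }).promise()).Body.toString();
    const finding = JSON.parse(body);
    const findingType = finding.detail.type.replace(/[:/.]/g, '_');

    // Keep the Kinesis Data Firehose datetime path, nested under the finding type.
    const newKey = `${findingType}/${key}`;
    await s3.copyObject({ Bucket: bucket, CopySource: `${bucket}/${key}`, Key: newKey }).promise();
    await s3.deleteObject({ Bucket: bucket, Key: key }).promise();
  }
};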
 

Figure 3: Sample S3 bucket structure

Step 4: Catalog the GuardDuty findings using AWS Glue

With the reorganized findings stored in S3, use an AWS Glue crawler to scan and catalog each finding type. The CloudFormation template we provided schedules the crawler to run once a day. You can also run it on demand as needed. To build your own crawler, refer to Cataloging Tables with a Crawler. Assuming GuardDuty has generated findings in your account, you can navigate to the GuardDuty findings database in the AWS Glue Data Catalog. It should look something like figure 4:
 

Figure 4: List of finding type tables in the AWS Glue Catalog

Note: Because AWS Glue crawlers will attempt to combine similar data into one table, you might need to generate sample findings to ensure enough variability for each finding type to have its own table. If you only intend to build your dashboard from a small subset of finding types, you can opt to just edit the crawler to have multiple data sources and specify the folder path for each desired finding type.

Explore the table structure

Before moving on to the next step, take some time to explore the schema structure of the tables. Selecting one of the tables will bring you to a page that looks like what’s shown in figure 5.
 

Figure 5: Schema information for a single finding table

You should see that most of the columns contain basic information about each finding, but there’s a column named detail that is of type struct. Select it to expand, as shown in figure 6.
 

Figure 6: The "detail" column expanded

Ah, this is where the interesting information is tucked away! The tables for each finding may differ slightly, but in all cases the detail column will hold the bulk of the information you’ll want to visualize. See GuardDuty Active Finding Types for information on what you should expect to find in the logs for each finding type. In the next step, we’ll focus on unpacking detail to prepare it for visualization!

Process (Step 5): Unpack nested JSON and configure views with Amazon Athena

 

Figure 7: In this section, we'll cover the services highlighted in blue

Note: This step picks up where the CloudFormation template finishes.

Explore the table structure (again) in the Amazon Athena console

Begin by navigating to Athena from the AWS Management Console. Once there, you should see a drop-down menu with a list of databases. These are the same databases that are available in the AWS Glue Data Catalog. Choose the database with your GuardDuty findings and expand a table.
 

Figure 8: Expanded table in the Athena console

This should look very familiar to the table information you explored in step 4, including the detail struct!

You’ll need a method to unpack the struct in order to effectively visualize the data. There are many methods and tools to approach this problem. One that we recommend (and will show) is to use SQL queries within Athena to construct tabular views. This approach will allow you to push the bulk of the processing work to Athena. It will also allow you to simplify building visualizations when using Amazon QuickSight by providing a more conventional tabular format.

Extract details for use in visualization using SQL

The following examples contain SQL statements that extract the fields needed from the detail struct of the Recon:EC2/PortProbeUnprotectedPort finding to build the Amazon QuickSight dashboard we showcase in the next section. The examples also cover most of the operations you’ll need to work with the elements found in GuardDuty findings (such as deeply nested data with lists), and they serve as a good starting point for constructing your own custom queries. In general, you’ll want to traverse the nested layers (for example, root.detail.service.count) and use the UNNEST function to create new records for each item in an embedded list that you want to target. See this blog for even more examples of constructing queries on complex JSON data using Amazon Athena.

Simply copy the SQL statements that you want into the Athena query field to build the port_probe_geo and affected_instances views.

Note: If your account has yet to generate Recon:EC2/PortProbeUnprotectedPort findings, you can generate sample findings to follow along.


CREATE OR REPLACE VIEW "port_probe_geo" AS

WITH getportdetails AS (
    SELECT id, portdetails
    FROM by_finding_type
    CROSS JOIN UNNEST(detail.service.action.portProbeAction.portProbeDetails) WITH ORDINALITY AS p (portdetails, portdetailsindex)
)

SELECT 
    root.id AS id,
    root.region AS region,
    root.time AS time,
    root.detail.type AS type,
    root.detail.service.count AS count, 
    portdetails.localportdetails.port AS localport, 
    portdetails.localportdetails.portname AS localportname, 
    portdetails.remoteipdetails.geolocation.lon AS longitude, 
    portdetails.remoteipdetails.geolocation.lat AS latitude, 
    portdetails.remoteipdetails.country.countryname AS country, 
    portdetails.remoteipdetails.city.cityname AS city 

FROM 
    by_finding_type  as root, getPortDetails
    
WHERE 
    root.id = getportdetails.id

CREATE OR REPLACE VIEW "affected_instances" AS

SELECT 
    max(root.detail.service.count) AS count,
    date_parse(root.time,'%Y-%m-%dT%H:%i:%sZ') as time,
    root.detail.resource.instancedetails.instanceid

FROM 
    recon_ec2_portprobeunprotectedport  AS root

GROUP BY  
    root.detail.resource.instancedetails.instanceid, 
    time

Visualize (Step 6): Build a GuardDuty findings dashboard in Amazon QuickSight

 

Figure 9: In this section we will cover the services highlighted in blue

Now that you’ve created tabular views using Athena, you can jump into Amazon QuickSight from the AWS Management Console and begin visualizing! If you haven’t already done so, enable Amazon QuickSight in your account by following the instructions for Setting Up Amazon QuickSight.

For this example, we’ll leverage the port_probe_geo view to build a geographic visualization and see the locations from which nefarious actors are launching port probes.

Creating an analysis

In the upper left-hand corner of the Amazon QuickSight console select New analysis and then New data set.
 

Figure 10: Create a new analysis

To utilize the views you built in the previous step, select Athena as the data source. Give your data source a name (in our example, we use “port probe geo”), and select the database that contains the views you created in the previous section. Then select Visualize.
 

Figure 11: Available data sources in Amazon QuickSight. Be sure to choose Athena!

 

Figure 12: Select the "port_probe_geo" view you created in step 5

Viz time!

From the Visual types menu in the bottom left corner, select the globe icon to create a map. Then select the latitude and longitude geospatial coordinates. Choose count (with a max aggregation) for size. Finally, select localportname to break the data down by color.
 

Figure 13: A visual containing a map of port probe scans in Amazon QuickSight

Voila! A detailed map of your environment’s attackers!

Build out a dashboard

Once you like how everything looks, you can move on to adding more visuals to create a full monitoring dashboard.

To add another visual to the analysis, select Add and then Add visual.
 

Figure 14: Add another visual using the 'Add' option from the Amazon QuickSight menu bar

If the new visual will use the same dataset, then you can immediately start selecting fields to build it. If you want to create a visual from a different data set (our example dashboard below adds the affected_instances view), follow the Creating Data Sets guide to add a new data set. Then return to the current analysis and associate the data set with the analysis by selecting the pencil icon shown below and selecting Add data set.
 

Figure 15: Adding a new data set to your Amazon QuickSight analysis

Repeat this process until you’ve built out everything you need in your monitoring dashboard. Once it’s completed, you can publish the dashboard by selecting Share and then Publish dashboard.
 

Figure 16: Publish your dashboard using the "Share" option of the Amazon QuickSight menu

Here’s an example of a dashboard we created using the port_probe_geo and affected_instances views:
 

Figure 17: An example dashboard created using the "port_probe_geo" and "affected_instances" views

What does something like this cost?

To get an idea of the scale of the cost, we’ve provided a small pricing example (accurate as of the writing of this blog) that assumes 10,000 GuardDuty findings per month with an average payload size of 5KB.

Service | Pricing structure | Amount consumed | Total cost
Amazon CloudWatch Events | $1 per million events | 10,000 events | $0.01
Amazon Kinesis Data Firehose | $0.029 per GB ingested | 0.05 GB ingested | $0.00145
Amazon S3 | $0.023 per GB stored per month | 0.1 GB stored | $0.00230
AWS Lambda | First million invocations free | ~200 invocations | $0
Amazon Athena | $5 per TB scanned | 0.003 TB scanned (assume 2 full data scans per day to refresh views) | $0.015
AWS Glue | $0.44 per DPU hour (2 DPU minimum and 10 minute minimum) = $0.15 per crawler run | 30 crawler runs | $4.50
Total processing cost | | | $4.53

Oh, the joys of a consumption-based model: Less than five dollars per month for all of that processing!

From here, all that remains are your visualization costs using Amazon QuickSight. This pricing is highly dependent upon your number of users and their respective usage patterns. See the Amazon QuickSight pricing page for more specific details.

Summary

In this post, we demonstrated how you can ingest your GuardDuty findings into S3, process them with AWS Glue and Amazon Athena, and visualize with Amazon QuickSight. All serverless! Each portion of what we showed can be used in tandem or on its own for this or many other data sets. Go launch the template and get started monitoring your AWS environment!

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ben Romano

Ben is a Solutions Architect in AWS supporting customers in their journey to the cloud with a focus on big data solutions. Ben loves to delight customers by diving deep on AWS technologies and helping them achieve their business and technology objectives.

Author

Jimmy Boyle

Jimmy is a Solutions Architect in AWS with a background in software development. He enjoys working with all things serverless because he doesn’t have to maintain infrastructure. Jimmy enjoys delighting customers to drive their business forward and design solutions that will scale as their business grows.

Deploying a personalized API Gateway serverless developer portal

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/deploying-a-personalized-api-gateway-serverless-developer-portal/

This post is courtesy of Drew Dresser, Application Architect – AWS Professional Services

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. Customers of these APIs often want a website to learn and discover APIs that are available to them. These customers might include front-end developers, third-party customers, or internal system engineers. To produce such a website, we have created the API Gateway serverless developer portal.

The API Gateway serverless developer portal (developer portal or portal, for short) is an application that you use to make your API Gateway APIs available to your customers by enabling self-service discovery of those APIs. Your customers can use the developer portal to browse API documentation, register for, and immediately receive their own API key that they can use to build applications, test published APIs, and monitor their own API usage.

Over the past few months, the team has been hard at work contributing to the open source project, available on GitHub. The developer portal was relaunched on October 29, 2018, and the team will continue to push features and take customer feedback from the open source community. Today, we’re happy to highlight some key functionality of the new portal.

Benefits for API publishers

API publishers use the developer portal to expose the APIs that they manage. As an API publisher, you need to set up, maintain, and enable the developer portal. The new portal has the following benefits:

Benefits for API consumers

API consumers use the developer portal as traditional application users. An API consumer needs to understand the APIs being published. API consumers might be front-end developers, distributed system engineers, or third-party customers. The new developer portal comes with the following benefits for API consumers:

  • Explore – API consumers can quickly page through lists of APIs. When they find one they’re interested in, they can immediately see documentation on that API.
  • Learn – API consumers might need to drill down deeper into an API to learn its details. They want to learn how to form requests and what they can expect as a response.
  • Test – Through the developer portal, API consumers can get an API key and invoke the APIs directly. This enables developers to develop faster and with more confidence.

Architecture

The developer portal is a completely serverless application. It leverages Amazon API Gateway, Amazon Cognito User Pools, AWS Lambda, Amazon DynamoDB, and Amazon S3. Serverless architectures enable you to build and run applications without needing to provision, scale, and manage any servers. The developer portal is broken down into multiple microservices, each with a distinct responsibility, as shown in the following image.

Identity management for the developer portal is performed by Amazon Cognito and a Lambda function in the Login & Registration microservice. An Amazon Cognito User Pool is configured out of the box to enable users to register and login. Additionally, you can deploy the developer portal to use a UI hosted by Amazon Cognito, which you can customize to match your style and branding.

Requests are routed to static content served from Amazon S3 and built using React. The React app communicates with the Lambda backend via API Gateway. The Lambda function is built using the aws-serverless-express library and contains the business logic behind the APIs. The business logic of the web application queries and adds data to the API Key Creation and Catalog Update microservices.
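
As an illustration, the aws-serverless-express pattern looks roughly like the following sketch; the portal’s actual routes and business logic live in the open source project.

const express = require('express');
const awsServerlessExpress = require('aws-serverless-express');

const app = express();

// Example route only -- the real application serves the API catalog, API keys, and more.
app.get('/catalog', (req, res) => {
  res.json({ apis: [] });
});

// Wrap the Express app so API Gateway events can be proxied to it.
const server = awsServerlessExpress.createServer(app);
exports.handler = (event, context) => awsServerlessExpress.proxy(server, event, context);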

To maintain the API catalog, the Catalog Update microservice uses an S3 bucket and a Lambda function. When an API’s Swagger file is added or removed from the bucket, the Lambda function triggers and maintains the API catalog by updating the catalog.json file in the root of the S3 bucket.
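
The project’s function is more involved, but the idea can be sketched as follows: on every change under the catalog/ prefix, rebuild catalog.json from the Swagger files currently in the bucket.

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const bucket = event.Records[0].s3.bucket.name;

  // Collect every Swagger file currently under the catalog/ prefix.
  const listing = await s3.listObjectsV2({ Bucket: bucket, Prefix: 'catalog/' }).promise();
  const apis = [];
  for (const object of listing.Contents) {
    if (!object.Key.endsWith('.json')) continue;
    const file = await s3.getObject({ Bucket: bucket, Key: object.Key }).promise();
    apis.push(JSON.parse(file.Body.toString()));
  }

  // Rewrite the catalog in the root of the bucket.
  await s3.putObject({
    Bucket: bucket,
    Key: 'catalog.json',
    Body: JSON.stringify({ apis }),
    ContentType: 'application/json',
  }).promise();
};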

To manage the mapping between API keys and customers, the application uses the API Key Creation microservice. The service updates API Gateway with API key creations or deletions and then stores the results in a DynamoDB table that maps customers to API keys.

Deploying the developer portal

You can deploy the developer portal using AWS SAM, the AWS SAM CLI, or the AWS Serverless Application Repository. To deploy with AWS SAM, you can simply clone the repository and then deploy the application using two commands from your CLI. For detailed instructions for getting started with the portal, see Use the Serverless Developer Portal to Catalog Your API Gateway APIs in the Amazon API Gateway Developer Guide.

Alternatively, you can deploy using the AWS Serverless Application Repository as follows:

  1. Navigate to the api-gateway-dev-portal application and choose Deploy in the top right.
  2. On the Review page, for ArtifactsS3BucketName and DevPortalSiteS3BucketName, enter globally unique names. Both buckets are created for you.
  3. To deploy the application with these settings, choose Deploy.
  4. After the stack is complete, get the developer portal URL by choosing View CloudFormation Stack. Under Outputs, choose the URL in Value.

The URL opens in your browser.

You now have your own serverless developer portal application that is deployed and ready to use.

Publishing a new API

With the developer portal application deployed, you can publish your own API to the portal.

To get started:

  1. Create the PetStore API, which is available as a sample API in Amazon API Gateway. The API must be created and deployed and include a stage.
  2. Create a Usage Plan, which is required so that API consumers can create API keys in the developer portal. The API key is used to test actual API calls.
  3. On the API Gateway console, navigate to the Stages section of your API.
  4. Choose Export.
  5. For Export as Swagger + API Gateway Extensions, choose JSON. Save the file with the following format: apiId_stageName.json.
  6. Upload the file to the S3 bucket dedicated for artifacts in the catalog path. In this post, the bucket is named apigw-dev-portal-artifacts. To perform the upload, run the following command.
    aws s3 cp apiId_stageName.json s3://yourBucketName/catalog/apiId_stageName.json

Uploading the file to the artifacts bucket with a catalog/ key prefix automatically makes it appear in the developer portal.

This might be familiar. It’s your PetStore API documentation displayed in the OpenAPI format.

With an API deployed, you’re ready to customize the portal’s look and feel.

Customizing the developer portal

Adding a customer’s own look and feel to the developer portal is easy, and it creates a great user experience. You can customize the domain name, text, logo, and styling. For a more thorough walkthrough of customizable components, see Customization in the GitHub project.

Let’s walk through a few customizations to make your developer portal more familiar to your API consumers.

Customizing the logo and images

To customize logos, images, or content, you need to modify the contents of the your-prefix-portal-static-assets S3 bucket. You can edit files using the CLI or the AWS Management Console.

Start customizing the portal by using the console to upload a new logo in the navigation bar.

  1. Upload the new logo to your bucket with a key named custom-content/nav-logo.png.
    aws s3 cp {myLogo}.png s3://yourPrefix-portal-static-assets/custom-content/nav-logo.png
  2. Modify object permissions so that the file is readable by everyone because it’s a publicly available image. The new navigation bar looks something like this:

Another neat customization that you can make is to a particular API and stage image. Maybe you want your PetStore API to have a dog picture to represent the friendliness of the API. To add an image:

  1. Use the command line to copy the image directly to the S3 bucket location.
    aws s3 cp my-image.png s3://yourPrefix-portal-static-assets/custom-content/api-logos/apiId-stageName.png
  2. Modify object permissions so that the file is readable by everyone.

Customizing the text

Next, make sure that the text of the developer portal welcomes your pet-friendly customer base. The YAML files in the static assets bucket under /custom-content/content-fragments/ determine the portal’s text content.

To edit the text:

  1. On the AWS Management Console, navigate to the website content S3 bucket and then navigate to /custom-content/content-fragments/.
  2. Home.md is the content displayed on the home page, APIs.md controls the tab text on the navigation bar, and GettingStarted.md contains the content of the Getting Started tab. All three files are written in markdown. Download one of them to your local machine so that you can edit the contents. The following image shows Home.md edited to contain custom text:
  3. After editing and saving the file, upload it back to S3, which results in a customized home page. The following image reflects the configuration changes in Home.md from the previous step:

Customizing the domain name

Finally, many customers want to give the portal a domain name that they own and control.

To customize the domain name:

  1. Use AWS Certificate Manager to request and verify a managed certificate for your custom domain name. For more information, see Request a Public Certificate in the AWS Certificate Manager User Guide.
  2. Copy the Amazon Resource Name (ARN) so that you can pass it to the developer portal deployment process. That process now includes the certificate ARN and a property named UseRoute53Nameservers. If the property is set to true, the template creates a hosted zone and record set in Amazon Route 53 for you. If the property is set to false, the template expects you to use your own name server hosting.
  3. If you deployed using the AWS Serverless Application Repository, navigate to the Application page and deploy the application along with the certificate ARN.

After the developer portal is deployed and your CNAME record has been added, the website is accessible from the custom domain name as well as the new Amazon CloudFront URL.

Customizing the logo, text content, and domain name are great tools to make the developer portal feel like an internally developed application. In this walkthrough, you completely changed the portal’s appearance to enable developers and API consumers to discover and browse APIs.

Conclusion

The developer portal is available to use right away. Support and feature enhancements are tracked in the public GitHub. You can contribute to the project by following the Code of Conduct and Contributing guides. The project is open-sourced under the Amazon Open Source Code of Conduct. We plan to continue to add functionality and listen to customer feedback. We can’t wait to see what customers build with API Gateway and the API Gateway serverless developer portal.

Working with AWS Lambda and Lambda Layers in AWS SAM

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/working-with-aws-lambda-and-lambda-layers-in-aws-sam/

The introduction of serverless technology has enabled developers to shed the burden of managing infrastructure and concentrate on their application code. AWS Lambda has taken on that management by providing isolated, event-driven compute environments for the execution of application code. To use a Lambda function, a developer just needs to package their code and any dependencies into a zip file and upload that file to AWS. However, as serverless applications get larger and more functions are required for those applications, there is a need for the ability to share code across multiple functions within the application.

To meet this need, AWS released Lambda layers, providing a mechanism to externally package dependencies that can be shared across multiple Lambda functions. Lambda layers reduce lines of code and the size of application artifacts, and simplify dependency management. Along with the release of Lambda layers, AWS also released support for layers in the AWS Serverless Application Model (SAM) and the AWS SAM command line interface (CLI). SAM is a template specification that enables developers to define a serverless application in clean and simple syntax. The SAM CLI is a command line tool that operates on SAM templates and application code. SAM can now define Lambda layers with the AWS::Serverless::LayerVersion type. The SAM CLI can build and test your layers locally as well as package, deploy, and publish your layers for public consumption.

How layers work

To understand how SAM CLI supports layers, you need to understand how layers work on AWS. When a Lambda function configured with a Lambda layer is executed, AWS downloads any specified layers and extracts them to the /opt directory on the function execution environment. Each runtime then looks for a language-specific folder under the /opt directory.

Lambda layers can come from multiple sources. You can create and upload your own layers for sharing, you can implement an AWS managed layer such as SciPy, or you can grab a third-party layer from an APN Partner or another trusted developer. The following image shows how layers work with multiple sources.
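
For the Node.js runtimes, for example, anything a layer places under nodejs/node_modules is extracted to /opt/nodejs/node_modules and resolved by require without any path changes in your function code. A minimal sketch:

// Assuming the layer zip contains nodejs/node_modules/temp-units-conv,
// it is extracted to /opt/nodejs/node_modules/temp-units-conv at invoke time,
// so this require works even though the function package ships no node_modules.
const tuc = require('temp-units-conv');

exports.lambdaHandler = async () => tuc.c2f(100); // 212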

How layers work in the AWS SAM CLI

To support Lambda layers, SAM CLI replicates the AWS layer process locally by downloading all associated layers and caching them on your development machine. This happens the first time you run sam local invoke or the first time you execute your Lambda functions using sam local start-lambda or sam local start-api.

Two specific flags in SAM CLI are helpful when you’re working with Lambda layers locally. To specify where the layer cache should be located, pass the --layer-cache-basedir flag, followed by your desired cache directory. To force SAM CLI to rebuild the layer cache, pass the --force-image-build flag.

Time for some code

Now you’re going to create a simple application that does some temperature conversions using a simple library named temp-units-conv. After the app is running, you move the dependencies to Lambda layers using SAM. Finally, you add a layer managed by an AWS Partner Network Partner, Epsagon, to enhance the monitoring of the Lambda function.

Creating a serverless application

To create a serverless application, use the SAM CLI. If you don’t have SAM CLI installed, see Installing the AWS SAM CLI in the AWS Serverless Application Model Developer Guide.

  1. To initialize a new application, run the following command.
    $ sam init -r nodejs8.10

    This creates a simple node application under the directory sam-app that looks like this.

    $ tree sam-app
    sam-app
    ├── README.md
    ├── hello-world
    │   ├── app.js
    │   ├── package.json
    │   └── tests
    └── template.yaml

    The template.yaml file is a SAM template describing the infrastructure for the application, and the app.js file contains the application code.

  2. To install the dependencies for the application, run the following command from within the sam-app/hello-world directory.
    $ npm install temp-units-conv
  3. The application is going to perform temperature scale conversions for Celsius, Fahrenheit, and Kelvin using the following code. In a code editor, open the file sam-app/hello-world/app.js and replace its contents with the following.
    const tuc = require('temp-units-conv');
    let response;
    
    const scales = {
        c: "celsius",
        f: "fahrenheit",
        k: "kelvin"
    }
    
    exports.lambdaHandler = async (event) => {
        let conversion = event.pathParameters.conversion
        let originalValue = event.pathParameters.value
        let answer = tuc[conversion](originalValue)
        try {
            response = {
                'statusCode': 200,
                'body': JSON.stringify({
                    source: scales[conversion[0]],
                    target: scales[conversion[2]],
                    original: originalValue,
                    answer: answer
                })
            }
        } catch (err) {
            console.log(err);
            return err;
        }
    
        return response
    };
  4. Update the SAM template. Open the sam-app/template.yaml file. Replace the contents with the following. This is a YAML file, so spacing and indentation is important.
    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31
    Description: sam app
    Globals:
        Function:
            Timeout: 3
            Runtime: nodejs8.10
    
    Resources:
        TempConversionFunction:
            Type: AWS::Serverless::Function 
            Properties:
                CodeUri: hello-world/
                Handler: app.lambdaHandler
                Events:
                    HelloWorld:
                        Type: Api
                        Properties:
                            Path: /{conversion}/{value}
                            Method: get

    This change drops some comments and output parameters and renames the function resource to TempConversionFunction. The primary change is the Path: /{conversion}/{value} line. This enables you to use path mapping for your conversion type and value.

  5. Okay, now you have a simple app that does temperature conversions. Time to spin it up and make sure that it works. In the sam-app directory, run the following command.
    $ sam local start-api

  6. Using curl or your browser, navigate to the address output by the previous command with a conversion and value attached. For reference, c = Celsius, f = Fahrenheit, and k = Kelvin. Use the pattern c2f/ followed by the temperature that you want to convert.
    $ curl http://127.0.0.1:3000/f2c/45
    {"source":"fahrenheit","target":"celsius","original":"45","answer":7.222222222222222}

Deploying the application

Now that you have a working application that you have tested locally, deploy it to AWS. This enables the application to run on Lambda, providing a public endpoint that you can share with others.

  1. Create a resource bucket. The resource bucket gives you a place to upload the application so that AWS CloudFormation can access it when you run the deploy process. Run the following command to create a resource bucket.
    $ aws s3api create-bucket --bucket <your unique bucket name>
  2. Use SAM to package the application. From the sam-app directory, run the following command.
    $ sam package --template-file template.yaml --s3-bucket <your bucket> --output-template-file out.yaml
  3. Now you can use SAM to deploy the application. Run the following command from the sam-app folder.
    $ sam deploy --template-file ./out.yaml --stack-name <your stack name> --capabilities CAPABILITY_IAM

Sign in to the AWS Management Console. Navigate to the Lambda console to find your function.

Lambda Console

Now that the application is deployed, you can access it via the API endpoint. In the Lambda console, click on the API Gateway option and scroll down. You will find a link to your API Gateway endpoint.

API Gateway Endpoint

Using that value, you can test the live application. Your endpoint will be different from the one in the following image.

Live Demo

Let’s take a moment to talk through the structure of our new application. Because you installed temp-units-conv, there is a dependency folder named sam-app/hello-world/node_modules that you need to include when you upload the application.

$ tree sam-app
sam-app
├── README.md
├── hello-world
│   ├── app.js
│   ├── node_modules
│   ├── package-lock.json
│   ├── package.json
│   └── tests
└── template.yaml

As a Node.js developer, you could use something like webpack to minimize your uploads. However, this requires a processing step to pack your code, and it still forces you to upload unchanging, static code on every update. To simplify this, create a layer to separate the dependencies from the application code.

Creating a layer

To create and manage the dependency layer, you need to update your directory structure a bit.

$ tree sam-app
sam-app
├── README.md
├── dependencies
│   └── nodejs
│       └── package.json
├── hello-world
│   ├── app.js
│   └── tests
│       └── unit
└── template.yaml

In the root, create a new directory named dependencies. Under that directory, create a second directory named nodejs. This is the structure required for layers to be injected into a Lambda function. Next, move the package.json file from the hello-world directory to the dependencies/nodejs directory. Finally, clean up the hello-world directory by deleting the node_modules folder and the package-lock.json file.

Before you gather your dependencies, edit the sam-app/dependencies/nodejs/package.json file. Replace the entire contents with the following.

{
  "dependencies": {
    "temp-units-conv": "^1.0.2"
  }
}

Now that you have the package file cleaned up, install the required packages into the dependencies directory. From the sam-app/dependencies/nodejs directory, run the following command.

$ npm install

You now have a node_modules directory under the nodejs directory. With this in place, you have everything you need to create your first layer using SAM.

The next step is to update the AWS SAM template. Replace the contents of your sam-app/template.yaml file with the following.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: sam app
Globals:
    Function:
        Timeout: 3
        Runtime: nodejs8.10

Resources:
    TempConversionFunction:
        Type: AWS::Serverless::Function 
        Properties:
            CodeUri: hello-world/
            Handler: app.lambdaHandler
            Layers:
              - !Ref TempConversionDepLayer
            Events:
                HelloWorld:
                    Type: Api
                    Properties:
                        Path: /{conversion}/{value}
                        Method: get

    TempConversionDepLayer:
        Type: AWS::Serverless::LayerVersion
        Properties:
            LayerName: sam-app-dependencies
            Description: Dependencies for sam app [temp-units-conv]
            ContentUri: dependencies/
            CompatibleRuntimes:
              - nodejs6.10
              - nodejs8.10
            LicenseInfo: 'MIT'
            RetentionPolicy: Retain

There are two changes to the template. The first is a new resource named TempConversionDepLayer, which defines the new layer and points to the dependencies folder as the code source for the layer. The second is the addition of the Layers parameter in the TempConversionFunction resource. The single layer entry references the layer that is being created in the template file.

With that final change, you have separated the application code from the dependencies. Try out the application and see if it still works. From the sam-app directory, run the following command.

$ sam local start-api

If all went well, you can open your browser back up and try another conversion.

One other thing to note here is that you didn’t have to change your application code at all. As far as the application is concerned, nothing has changed. However, under the hood, SAM CLI is doing a bit of magic. SAM CLI is creating an image of the layer and caching it locally. It then makes that layer available in the /opt directory on the container being used to execute the Lambda function locally.
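If you want to see this for yourself, one quick, optional check (not part of the demo app) is to log the contents of /opt from inside the handler while running sam local start-api. For a Node.js layer built from a nodejs/node_modules folder, the dependencies appear under /opt/nodejs/node_modules:

const fs = require('fs');

// Temporary debugging lines: list what the layer placed under /opt
console.log(fs.readdirSync('/opt'));                     // expected to include 'nodejs'
console.log(fs.readdirSync('/opt/nodejs/node_modules')); // expected to include 'temp-units-conv'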

Using layers from APN Partners and third parties

So far, you have used a layer of your own creation. Now you’re going to branch out and add a layer managed by an APN Partner, Epsagon, which provides a tool to help with monitoring and troubleshooting your serverless applications. If you want to try this demo, you can sign up for a free trial on their website. After you create an account, you need to get the Epsagon token from the Settings page of your dashboard.

Epsagon Settings

  1. Add Epsagon layer reference. Edit the sam-app/template.yaml file. Update the Layers section of the TempConversionFunction resource to the following.
    Layers:
      - !Ref TempConversionDepLayer
      - arn:aws:lambda:us-east-1:066549572091:layer:epsagon-node-layer:1
    

    Note: This demo uses us-east-1 for the AWS Region. If you plan to deploy your Lambda function to a different Region, update the Epsagon LayerVersion Amazon Resource Name (ARN) accordingly. For more information, see the Epsagon blog post on layers.

  2. To use the Epsagon library in our code, you need to add or modify nine lines of code. You reference and initialize the library, wrap the handler with the Epsagon library, and modify the output. Open the sam-app/hello-world/app.js file and replace the entire contents with the following. Be sure to update 1122334455 with your token from Epsagon.
    const tuc = require('temp-units-conv');
    const epsagon = require('epsagon');
    epsagon.init({
        token: '1122334455',
        appName: 'layer-demo-app',
        metadataOnly: false, // Optional, send more trace data
    });
    
    let response;
    
    const scales = {
        c: "celsius",
        f: "fahrenheit",
        k: "kelvin"
    }
    
    exports.lambdaHandler = epsagon.lambdaWrapper((event, context, callback) => {
        try {
            // Pull the conversion type (for example, f2c) and the value from the request path
            let conversion = event.pathParameters.conversion
            let originalValue = event.pathParameters.value
            let answer = tuc[conversion](originalValue)
            response = {
                'statusCode': 200,
                'body': JSON.stringify({
                    source: scales[conversion[0]],
                    target: scales[conversion[2]],
                    original: originalValue,
                    answer: answer
                })
            }
        } catch (err) {
            console.log(err);
            return err;
        }
    
        callback(null, response)
    });

Test the change to make sure that everything still works. From the sam-app directory, run the following command.

$ sam local start-api

Use curl to test your code.

$ curl http://127.0.0.1:3000/k2c/100

Your answer should be the following.

{"source":"kelvin","target":"celsius","original":"100","answer":-173.14999999999998}

Your Epsagon dashboard should display traces from your Lambda function, as shown in the following image.

Epsagon Dashboard

Deploying the application with layers

Now that you have a functioning application that uses Lambda layers, you can package and deploy it.

  1. To package the application, run the following command.
    $ sam package --template-file template.yaml --s3-bucket <your bucket> --output-template-file out.yaml
  2. To deploy the application, run the following command.
    $ sam deploy --template-file ./out.yaml --stack-name <your stack name> --capabilities CAPABILITY_IAM

The Lambda console for your function has updated, as shown in the following image.

Lambda Console

Notice also that the dependency code is no longer part of the deployment package shown in the Lambda console code editor for your function.

Lambda Console Code

There you have it! You just deployed your Lambda function and your dependencies layer for that function. It’s important to note that you did not publish the Epsagon layer. You just told AWS to grab their layer and extract it into your function’s execution environment. The following image shows the flow of this process.

Epsagon Layer

Options for managing layers

You have several options for managing your layers through AWS SAM.

First, the pattern you just walked through releases a new version of your layer each time you deploy your application. If you remember, one of the advantages of using layers is not having to upload the dependencies each time. One way to avoid this is to keep your dependencies in a separate template and deploy them only when the dependencies have changed.
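A minimal sketch of that approach (the resource, output, and export names here are illustrative, not part of the demo) is a standalone template that contains only the layer and exports its ARN:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: dependencies only

Resources:
    TempConversionDepLayer:
        Type: AWS::Serverless::LayerVersion
        Properties:
            LayerName: sam-app-dependencies
            ContentUri: dependencies/
            CompatibleRuntimes:
              - nodejs8.10
            RetentionPolicy: Retain

Outputs:
    DependencyLayerArn:
        Value: !Ref TempConversionDepLayer
        Export:
            Name: sam-app-dependency-layer-arn

The application template can then reference the layer with !ImportValue sam-app-dependency-layer-arn in its Layers list, and you redeploy the dependencies stack only when package.json changes.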

Second, this pattern always uses the latest build of dependencies. If for some reason you want to get a specific version of your dependencies, you can. After you deploy the application for the first time, you can run the following command.

$ aws lambda list-layer-versions --layer-name sam-app-dependencies

You should see a response like the following.

{
    "LayerVersions": [
        {
            "LayerVersionArn": "arn:aws:lambda:us-east-1:5555555:layer:sam-app-dependencies:1",
            "Version": 1,
            "Description": "Dependencies for sam app",
            "CreatedDate": "2019-01-08T18:04:51.833+0000",
            "CompatibleRuntimes": [
                "nodejs6.10",
                "nodejs8.10"
            ],
            "LicenseInfo": "MIT"
        }
    ]
}

The critical information here is the LayerVersionArn. Returning to the sam-app/template.yaml file, you can change the Layers section of the TempConversionFunction resource to use this version.

Layers:
  - arn:aws:lambda:us-east-1:5555555:layer:sam-app-dependencies:1
  - arn:aws:lambda:us-east-1:066549572091:layer:epsagon-node-layer:1

Conclusion

This blog post demonstrates how AWS SAM can manage Lambda layers via AWS SAM templates. It also demonstrates how the AWS SAM CLI creates a local development environment that provides layer support without any changes in the application.

Developers are often taught to think of code in an object-oriented manner and to code in a DRY way (don’t repeat yourself). For developers of serverless applications, these practices remain true. As serverless applications grow in size and require more Lambda functions, using Lambda layers provides an efficient mechanism to reuse code and libraries. Layers also reduce the size of your upload packages, making iterations faster.

We’re excited to see what you do with layers and to hear how AWS SAM is helping you. As always, we welcome your feedback.

Now go code!

Learn about AWS Services & Solutions – February 2019 AWS Online Tech Talks

Post Syndicated from Robin Park original https://aws.amazon.com/blogs/aws/learn-about-aws-services-solutions-february-2019-aws-online-tech-talks/

AWS Tech Talks

Join us this February to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Application Integration

February 20, 2019 | 11:00 AM – 12:00 PM PT – Customer Showcase: Migration & Messaging for Mission Critical Apps with S&P Global Ratings – Learn how S&P Global Ratings meets the high availability and fault tolerance requirements of their mission critical applications using Amazon MQ.

AR/VR

February 28, 2019 | 1:00 PM – 2:00 PM PT – Build AR/VR Apps with AWS: Creating a Multiplayer Game with Amazon Sumerian – Learn how to build real-world augmented reality, virtual reality and 3D applications with Amazon Sumerian.

Blockchain

February 18, 2019 | 11:00 AM – 12:00 PM PT – Deep Dive on Amazon Managed Blockchain – Explore the components of blockchain technology, discuss use cases, and do a deep dive into capabilities, performance, and key innovations in Amazon Managed Blockchain.

Compute

February 25, 2019 | 9:00 AM – 10:00 AM PT – What’s New in Amazon EC2 – Learn about the latest innovations in Amazon EC2, including new instance types, related technologies, and consumption options that help you optimize running your workloads for performance and cost.

February 27, 2019 | 1:00 PM – 2:00 PM PT – Deploy and Scale Your First Cloud Application with Amazon Lightsail – Learn how to quickly deploy and scale your first multi-tier cloud application using Amazon Lightsail.

Containers

February 19, 2019 | 9:00 AM – 10:00 AM PT – Securing Container Workloads on AWS Fargate – Explore the security controls and best practices for securing containers running on AWS Fargate.

Data Lakes & Analytics

February 18, 2019 | 1:00 PM – 2:00 PM PT – Amazon Redshift Tips & Tricks: Scaling Storage and Compute Resources – Learn about the tools and best practices Amazon Redshift customers can use to scale storage and compute resources on-demand and automatically to handle growing data volume and analytical demand.

Databases

February 18, 2019 | 9:00 AM – 10:00 AM PT – Building Real-Time Applications with Redis – Learn about Amazon’s fully managed Redis service and how it makes it easier, simpler, and faster to build real-time applications.

February 21, 2019 | 1:00 PM – 2:00 PM PT – Introduction to Amazon DocumentDB (with MongoDB Compatibility) – Get an introduction to Amazon DocumentDB (with MongoDB compatibility), a fast, scalable, and highly available document database that makes it easy to run, manage & scale MongoDB workloads.

DevOps

February 20, 2019 | 1:00 PM – 2:00 PM PT – Fireside Chat: DevOps at Amazon with Ken Exner, GM of AWS Developer Tools – Join our fireside chat with Ken Exner, GM of Developer Tools, to learn about Amazon’s DevOps transformation journey and latest practices and tools that support the current DevOps model.

End-User Computing

February 28, 2019 | 9:00 AM – 10:00 AM PT – Enable Your Remote and Mobile Workforce with Amazon WorkLink – Learn about Amazon WorkLink, a new, fully-managed service that provides your employees secure, one-click access to internal corporate websites and web apps using their mobile phones.

Enterprise & Hybrid

February 26, 2019 | 1:00 PM – 2:00 PM PT – The Amazon S3 Storage Classes – For cloud ops professionals, by cloud ops professionals. Wallace and Orion will tackle your toughest AWS hybrid cloud operations questions in this live Office Hours tech talk.

IoT

February 26, 2019 | 9:00 AM – 10:00 AM PT – Bring IoT and AI Together – Learn how to bring intelligence to your devices with the intersection of IoT and AI.

Machine Learning

February 19, 2019 | 1:00 PM – 2:00 PM PT – Getting Started with AWS DeepRacer – Learn about the basics of reinforcement learning, what’s under the hood and opportunities to get hands on with AWS DeepRacer and how to participate in the AWS DeepRacer League.

February 20, 2019 | 9:00 AM – 10:00 AM PT – Build and Train Reinforcement Models with Amazon SageMaker RL – Learn how to use Amazon SageMaker RL to apply reinforcement learning and build intelligent applications for your business.

February 21, 2019 | 11:00 AM – 12:00 PM PT – Train ML Models Once, Run Anywhere in the Cloud & at the Edge with Amazon SageMaker Neo – Learn about Amazon SageMaker Neo where you can train ML models once and run them anywhere in the cloud and at the edge.

February 28, 2019 | 11:00 AM – 12:00 PM PT – Build your Machine Learning Datasets with Amazon SageMaker Ground Truth – Learn how customers are using Amazon SageMaker Ground Truth to build highly accurate training datasets for machine learning quickly and reduce data labeling costs by up to 70%.

Migration

February 27, 2019 | 11:00 AM – 12:00 PM PT – Maximize the Benefits of Migrating to the Cloud – Learn how to group and rationalize applications and plan migration waves in order to realize the full set of benefits that cloud migration offers.

Networking

February 27, 2019 | 9:00 AM – 10:00 AM PT – Simplifying DNS for Hybrid Cloud with Route 53 Resolver – Learn how to enable DNS resolution in hybrid cloud environments using Amazon Route 53 Resolver.

Productivity & Business Solutions

February 26, 2019 | 11:00 AM – 12:00 PM PT – Transform the Modern Contact Center Using Machine Learning and Analytics – Learn how to integrate Amazon Connect and AWS machine learning services, such as Amazon Lex, Amazon Transcribe, and Amazon Comprehend, to quickly process and analyze thousands of customer conversations and gain valuable insights.

Serverless

February 19, 2019 | 11:00 AM – 12:00 PM PT – Best Practices for Serverless Queue Processing – Learn the best practices of serverless queue processing, using Amazon SQS as an event source for AWS Lambda.

Storage

February 25, 2019 | 11:00 AM – 12:00 PM PT – Introducing AWS Backup: Automate and Centralize Data Protection in the AWS Cloud – Learn about this new, fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services in the cloud as well as on-premises.

ICYMI: Serverless Q4 2018

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/icymi-serverless-q4-2018/

This post is courtesy of Eric Johnson, Senior Developer Advocate – AWS Serverless

Welcome to the fourth edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

This edition of ICYMI includes all announcements from AWS re:Invent 2018!

If you didn’t see them, check our Q1 ICYMI, Q2 ICYMI, and Q3 ICYMI posts for what happened then.

So, what might you have missed this past quarter? Here’s the recap.

New features

AWS Lambda introduced the Lambda runtime API and Lambda layers, which enable developers to bring their own runtime and share common code across Lambda functions. With the release of the runtime API, we can now support runtimes from AWS partners such as the PHP runtime from Stackery and the Erlang and Elixir runtimes from Alert Logic. Using layers, partners such as Datadog and Twistlock have also simplified the process of using their Lambda code libraries.

To meet the demand of larger Lambda payloads, Lambda doubled the payload size of asynchronous calls to 256 KB.

In early October, Lambda also lengthened the runtime limit by enabling Lambda functions that can run up to 15 minutes.

Lambda also rolled out native support for Ruby 2.5 and Python 3.7.

You can now process Amazon Kinesis streams up to 68% faster with AWS Lambda support for Kinesis Data Streams enhanced fan-out and HTTP/2 for faster streaming.

Lambda also released a new Application view in the console. It’s a high-level view of all of the resources in your application. It also gives you a quick view of deployment status with the ability to view service metrics and custom dashboards.

Application Load Balancers added support for targeting Lambda functions. ALBs can provide a simple HTTP/S front end to Lambda functions. ALB features such as host- and path-based routing are supported to allow flexibility in triggering Lambda functions.

Amazon API Gateway added support for AWS WAF. You can use AWS WAF for your Amazon API Gateway APIs to protect from attacks such as SQL injection and cross-site scripting (XSS).

API Gateway has also improved parameter support by adding support for multi-value parameters. You can now pass multiple values for the same key in the header and query string when calling the API. Returning multiple headers with the same name in the API response is also supported. For example, you can send multiple Set-Cookie headers.

In October, API Gateway relaunched the Serverless Developer Portal. It provides a catalog of published APIs and associated documentation that enable self-service discovery and onboarding. You can customize it for branding through either custom domain names or logo/styling updates. In November, we made it easier to launch the developer portal from the Serverless Application Repository.

In the continuous effort to decrease customer costs, API Gateway introduced tiered pricing. The tiered pricing model reduces the cost of API Gateway at scale, with an API request price as low as $1.51 per million requests at the highest tier.

Last but definitely not least, API Gateway released support for WebSocket APIs in mid-December as a final holiday gift. With this new feature, developers can build bidirectional communication applications without having to provision and manage any servers. This has been a long-awaited and highly anticipated announcement for the serverless community.

AWS Step Functions added eight new service integrations. With this release, the steps of your workflow can exist on Amazon ECS, AWS Fargate, Amazon DynamoDB, Amazon SNS, Amazon SQS, AWS Batch, AWS Glue, and Amazon SageMaker. This is in addition to the services that Step Functions already supports: AWS Lambda and Amazon EC2.

Step Functions expanded by announcing availability in the EU (Paris) and South America (São Paulo) Regions.

The AWS Serverless Application Repository increased its functionality by supporting more resources in the repository. The Serverless Application Repository now supports Application Auto Scaling, Amazon Athena, AWS AppSync, AWS Certificate Manager, Amazon CloudFront, AWS CodeBuild, AWS CodePipeline, AWS Glue, AWS Identity and Access Management, Amazon SNS, Amazon SQS, AWS Systems Manager, and AWS Step Functions.

The Serverless Application Repository also released support for nested applications. Nested applications enable you to build highly sophisticated serverless architectures by reusing services that are independently authored and maintained but easily composed using AWS SAM and the Serverless Application Repository.

AWS SAM made authorization simpler by introducing SAM support for authorizers. Enabling authorization for your APIs is as simple as defining an Amazon Cognito user pool or an API Gateway Lambda authorizer as a property of your API in your SAM template.
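As a rough illustration (the resource names here are hypothetical), attaching an Amazon Cognito user pool authorizer to an API in a SAM template looks something like this:

MyApi:
    Type: AWS::Serverless::Api
    Properties:
        StageName: prod
        Auth:
            DefaultAuthorizer: MyCognitoAuthorizer
            Authorizers:
                MyCognitoAuthorizer:
                    UserPoolArn: !GetAtt MyCognitoUserPool.Arn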

AWS SAM CLI introduced two new commands. First, you can now build locally with the sam build command. This functionality allows you to compile deployment artifacts for Lambda functions written in Python. Second, the sam publish command allows you to publish your SAM application to the Serverless Application Repository.
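As a sketch of how these two commands slot into the package-and-deploy flow (the bucket name and Region are placeholders):

$ sam build
$ sam package --template-file .aws-sam/build/template.yaml --s3-bucket <your bucket> --output-template-file packaged.yaml
$ sam publish --template packaged.yaml --region us-east-1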

Our SAM tooling team also released the AWS Toolkit for PyCharm, which provides an integrated experience for developing serverless applications in Python.

The AWS Toolkits for Visual Studio Code (Developer Preview) and IntelliJ (Developer Preview) are still in active development and will include similar features when they become generally available.

AWS SAM and the AWS SAM CLI implemented support for Lambda layers. Using a SAM template, you can manage your layers, and using the AWS SAM CLI, you can develop and debug Lambda functions that are dependent on layers.

Amazon DynamoDB added support for transactions, allowing developers to enforce all-or-nothing operations. In addition to transaction support, Amazon DynamoDB Accelerator also added support for DynamoDB transactions.

Amazon DynamoDB also announced Amazon DynamoDB on-demand, a flexible new billing option for DynamoDB capable of serving thousands of requests per second without capacity planning. DynamoDB on-demand offers simple pay-per-request pricing for read and write requests so that you only pay for what you use, making it easy to balance costs and performance.

AWS Amplify released the Amplify Console, which is a continuous deployment and hosting service for modern web applications with serverless backends. Modern web applications include single-page app frameworks such as React, Angular, and Vue and static-site generators such as Jekyll, Hugo, and Gatsby.

Amazon SQS announced support for Amazon VPC Endpoints using PrivateLink.

Serverless blogs

October

November

December

Tech talks

We hold several Serverless tech talks throughout the year, so look out for them in the Serverless section of the AWS Online Tech Talks page. Here are the three tech talks that we delivered in Q4:

Twitch

We’ve been so busy livestreaming on Twitch that you’re most certainly missing out if you aren’t following along!

For information about upcoming broadcasts and recent livestreams, keep an eye on AWS on Twitch for more Serverless videos and on the Join us on Twitch AWS page.

New Home for SAM Docs

This quarter, we moved all SAM docs to https://docs.aws.amazon.com/serverless-application-model. Everything you need to know about SAM is there. If you don’t find what you’re looking for, let us know!

In other news

The schedule is out for 2019 AWS Global Summits in cities around the world. AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Summits are held in major cities around the world. They attract technologists from all industries and skill levels who want to discover how AWS can help them innovate quickly and deliver flexible, reliable solutions at scale. Get notified when to register and learn more at the AWS Global Summits website.

Still looking for more?

The Serverless landing page has lots of information. The resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials. Check it out!

Learn about AWS Services & Solutions – January AWS Online Tech Talks

Post Syndicated from Robin Park original https://aws.amazon.com/blogs/aws/learn-about-aws-services-solutions-january-aws-online-tech-talks/

AWS Tech Talks

Happy New Year! Join us this January to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Containers

January 22, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive Into AWS Cloud Map: Service Discovery for All Your Cloud Resources – Learn how to increase your application availability with AWS Cloud Map, a new service that lets you discover all your cloud resources.

Data Lakes & Analytics

January 22, 2019 | 1:00 PM – 2:00 PM PT – Increase Your Data Engineering Productivity Using Amazon EMR Notebooks – Learn how to develop analytics and data processing applications faster with Amazon EMR Notebooks.

Enterprise & Hybrid

January 29, 2019 | 1:00 PM – 2:00 PM PT – Build Better Workloads with the AWS Well-Architected Framework and Tool – Learn how to apply architectural best practices to guide your cloud migration.

IoT

January 29, 2019 | 9:00 AM – 10:00 AM PT – How To Visually Develop IoT Applications with AWS IoT Things Graph – See how easy it is to build IoT applications by visually connecting devices & web services.

Mobile

January 21, 2019 | 11:00 AM – 12:00 PM PT – Build Secure, Offline, and Real Time Enabled Mobile Apps Using AWS AppSync and AWS Amplify – Learn how to easily build secure, cloud-connected data-driven mobile apps using AWS Amplify, GraphQL, and mobile-optimized AWS services.

Networking

January 30, 2019 | 9:00 AM – 10:00 AM PT – Improve Your Application’s Availability and Performance with AWS Global Accelerator – Learn how to accelerate your global latency-sensitive applications by routing traffic across AWS Regions.

Robotics

January 29, 2019 | 11:00 AM – 12:00 PM PT – Using AWS RoboMaker Simulation for Real World Applications – Learn how AWS RoboMaker simulation works and how you can get started with your own projects.

Security, Identity & Compliance

January 23, 2019 | 1:00 PM – 2:00 PM PT – Customer Showcase: How Dow Jones Uses AWS to Create a Secure Perimeter Around Its Web Properties – Learn tips and tricks from a real-life example on how to be in control of your cloud security and automate it on AWS.

January 30, 2019 | 11:00 AM – 12:00 PM PT – Introducing AWS Key Management Service Custom Key Store – Learn how you can generate, store, and use your KMS keys in hardware security modules (HSMs) that you control.

Serverless

January 31, 2019 | 9:00 AM – 10:00 AM PT – Nested Applications: Accelerate Serverless Development Using AWS SAM and the AWS Serverless Application Repository – Learn how to compose nested applications using the AWS Serverless Application Model (SAM), SAM CLI, and the AWS Serverless Application Repository.

January 31, 2019 | 11:00 AM – 12:00 PM PT – Deep Dive Into Lambda Layers and the Lambda Runtime API – Learn how to use Lambda Layers to enable re-use and sharing of code, and how you can build and test Layers locally using the AWS Serverless Application Model (SAM).

Storage

January 28, 2019 | 11:00 AM – 12:00 PM PT – The Amazon S3 Storage Classes – Learn about the Amazon S3 Storage Classes and how to use them to optimize your storage resources.

January 30, 2019 | 1:00 PM – 2:00 PM PT – Deep Dive on Amazon FSx for Windows File Server: Running Windows on AWS – Learn how to deploy Amazon FSx for Windows File Server in some of the most common use cases.

Serverless and startups, the beginning of a beautiful friendship

Post Syndicated from Slobodan Stojanović original https://aws.amazon.com/blogs/aws/serverless-and-startups/

Guest post by AWS Serverless Hero Slobodan Stojanović. Slobodan is the co-author of the book Serverless Applications with Node.js; CTO of Cloud Horizon, a software development studio; and CTO of Vacation Tracker, a Slack-based, leave management app. Slobodan is excited with serverless because it allows him to build software faster and cheaper. He often writes about serverless and talks about it at conferences.

Serverless seems to be perfect for startups. The pay-per-use pricing model and infrastructure that costs you nothing if no one is using your app makes it cheap for early-stage startups.

On the other side, it’s fully managed and scales automatically, so you don’t have to be afraid of large marketing campaigns or unexpected traffic. That’s why we decided to use serverless when we started working on our first product: Vacation Tracker.

Vacation Tracker is a Slack-based app that helps you to track and manage your team’s vacations and days off. Both our Slack app and web-based dashboard needed an API, so an AWS Lambda function with an Amazon API Gateway trigger was a logical starting point. API Gateway provides a public API. Each time that the API receives the request, the Lambda function is triggered to answer that request.

Our app is focused on small and medium teams and is the kind of app that you use only a few times per day. This periodic usage makes the pay-per-use, serverless pricing model a big win for us because both API Gateway and Lambda cost us $0 initially.

Start small, grow tall

We decided to start small, with a simple prototype. Our prototype was a real Slack app with a few hardcoded actions and a little calendar. We used Claudia.js and Bot Builder to build it. Claudia.js is a simple tool for the deployment of Node.js serverless functions to Lambda and API Gateway.

After we finished our prototype, we published a landing page. But we continued building our product even as users signed up for the closed beta access.

Just a few months later, we had a bunch of serverless functions in production: chatbot, dashboard API, Slack notifications, a few tasks for Stripe billing, etc. Each of these functions had their own triggers and roles. While our app was working without issues, it was harder and harder to deploy a new stage.

It was clear that we had to organize our app better. So, our Vacation Tracker koala met the AWS SAM squirrel.

Herding Lambda functions

We started by mapping all our services. Then, we tried to group them into flows. We decided to migrate piece by piece to AWS Serverless Application Model (AWS SAM), an open-source framework for building serverless apps on AWS. As AWS SAM is language-agnostic, it still doesn’t know how to handle Node.js dependencies. We used Claudia’s pack command as a build step.

Grouping services in serverless apps brought back easier deployments. With just a single command, we had a new environment ready for our tester.

Soon after, we had AWS CloudFormation templates for a Slack chatbot, an API, a Stripe-billing based payment flow, notifications flow, and a few other flows for our app.

Like other startups, Vacation Tracker’s goal is to be able to evolve fast and adapt to user needs. To do so, you often need to run experiments and change things on the fly. With that in mind, our goal was to extract some common functionalities from the app flow to reusable components.

For example, as we used multiple Slack slash commands in some of the experiments, we extracted it from the Slack chatbot flow into a reusable component.

Our Slack slash command component consists of a few elements:

  • An API Gateway API with routes for the Slack slash command and message action webhooks
  • A Lambda function that handles slash commands
  • A Lambda function that handles message actions
  • An Amazon SNS topic for parsed Slack data

With this component, we can run our slash command experiments faster. Adding a new Slack slash command to our app now requires deploying this component and a few Lambda functions that are triggered by the SNS topic and handle the business logic.
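To give a feel for how such a consumer can be wired up (the names here are illustrative, not taken from our actual templates), an AWS SAM function subscribed to the component’s SNS topic might look like this:

SlashCommandHandler:
    Type: AWS::Serverless::Function
    Properties:
        CodeUri: slash-command-handler/
        Handler: lambda.handler
        Runtime: nodejs8.10
        Events:
            ParsedSlackData:
                Type: SNS
                Properties:
                    Topic: !Ref SlackDataTopic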

While working on Vacation Tracker, we realized the potential value of reusable components. Imagine how fast you would be able to assemble your MVP if you could use standard components that someone else built. Building an app would then require only writing the glue between the reused parts and focusing on the business logic that makes your app unique.

This dream can become a reality with AWS Serverless Application Repository, a repository for open source serverless components.

Instead of dreaming, we decided to publish a few reusable components to the Serverless Application Repository, starting with the Slack slash command app. But to do so, we had to have a well-tested app, which led us to the next challenge: how to architect and test a reusable serverless app?

Hexagonal architecture to the rescue

Our answer was simple: Hexagonal architecture, or ports and adapters. It is a pattern that allows an app to be equally driven by users, programs, automated tests, or batch scripts. The app can be developed and tested in isolation from its eventual runtime devices and databases. This makes hexagonal architecture a perfect fit for microservices and serverless apps.

Applying this to Vacation Tracker, we ended up with a setup similar to the following diagram. It consists of the following:

  • lambda.js and main.js files. lambda.js has no tests, as it simply wires the dependencies, such as sns-notification-repository.js, and invokes main.js.
  • main.js has its own unit and integration tests. Integration tests are using local integrations.
  • Each repository has its own unit and integration tests. In their integration tests, repositories connect to AWS services. For example, sns-notification-repository.js integration tests connect to Amazon SNS.

Each of our functions has at least two files: lambda.js and main.js. The first file is small and just invokes main.js with all the dependencies (adapters). This file doesn’t have automated tests, and it looks similar to the following code snippet:

const {
 httpResponse,
 SnsNotificationRepository
} = require('@serverless-slack-command/common')
const main = require('./main')
async function handler(event) {
  const notification = new SnsNotificationRepository(process.env.notificationTopic)
  await main(event.body, event.headers, event.requestContext, notification)
  return httpResponse()
}

exports.handler = handler

The second, and more critical file of each function is main.js. This file contains the function’s business logic, and it must be well tested. In our case, this file has its own unit and integration tests. But the business logic often relies on external integrations, for example sending an SNS notification. Instead of testing all external notifications, we test this file with other adapters, such as a local notification repository.

This file looks similar to the following code snippet:

const qs = require('querystring')

async function slashCommand(slackEvent, headers, requestContext, notification) {
  const eventData = qs.parse(slackEvent);
  return await notification.send({
    type: 'SLASH_COMMAND',
    payload: eventData,
    metadata: {
      headers,
      requestContext
    }
  })
}

module.exports = slashCommand

Adapters for external integrations have their own unit and integration tests, including tests that check the integration with the AWS service. This way we minimized the number of tests that rely on AWS services but still kept our app covered with all necessary tests.
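To make this concrete, here is a minimal sketch of a local notification adapter that can be passed to main.js in unit tests instead of the SNS-backed repository (the class and file names are illustrative, not taken from our codebase):

// local-notification-repository.js - an in-memory stand-in for the SNS repository
class LocalNotificationRepository {
  constructor () {
    // Collects everything "sent" so that tests can assert on it
    this.sent = []
  }

  // Same interface as the real repository: accept a notification and resolve
  async send (notification) {
    this.sent.push(notification)
    return notification
  }
}

module.exports = LocalNotificationRepository

A unit test can then invoke slashCommand with an instance of this class and assert on the contents of sent, without ever touching Amazon SNS.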

And they lived happily ever after…

Migration to AWS SAM simplified and improved our deployment process. Setting up a new environment now takes minutes, and that time can be further reduced in the future by nesting AWS CloudFormation stacks. Development and testing for our components are easy using hexagonal architecture. Reusable components and Serverless Application Repository put the cherry on top of our serverless cake.

This could be the start of a beautiful friendship between serverless and startups. With serverless, your startup infrastructure is fully managed, and you pay for it only if someone is using your app. The serverless pricing model allows you to start cheap. With Serverless Application Repository, you can build your MVPs faster, as you can reuse existing components. These combined benefits give you superpowers and enough velocity to be able to compete with other products with larger teams and budgets.

We are happy to see what startups can build (and outsource) using Serverless Application Repository.

In the meantime, you can see the source of our first open source serverless component on GitHub: https://github.com/vacationtracker/serverless-slack-slash-command-app.

And if you want to try Vacation Tracker, visit https://vacationtracker.io, and you can double your free trial period using the AWS_IS_AWESOME promo code.

Announcing WebSocket APIs in Amazon API Gateway

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/announcing-websocket-apis-in-amazon-api-gateway/

This post is courtesy of Diego Magalhaes, AWS Senior Solutions Architect – World Wide Public Sector-Canada & JT Thompson, AWS Principal Software Development Engineer – Amazon API Gateway

Starting today, you can build bidirectional communication applications using WebSocket APIs in Amazon API Gateway without having to provision and manage any servers.

HTTP-based APIs use a request/response model with a client sending a request to a service and the service responding synchronously back to the client. WebSocket-based APIs are bidirectional in nature. This means that a client can send messages to a service and services can independently send messages to their clients.

This bidirectional behavior allows for richer types of client/service interactions because services can push data to clients without a client needing to make an explicit request. WebSocket APIs are often used in real-time applications such as chat applications, collaboration platforms, multiplayer games, and financial trading platforms.

This post explains how to build a serverless, real-time chat application using WebSocket API and API Gateway.

Overview

Historically, building WebSocket APIs required setting up fleets of hosts that were responsible for managing the persistent connections that underlie the WebSocket protocol. Now, with API Gateway, this is no longer necessary. API Gateway handles the connections between the client and service. It lets you build your business logic using HTTP-based backends such as AWS Lambda, Amazon Kinesis, or any other HTTP endpoint.

Before you begin, here are a couple of the concepts of a WebSocket API in API Gateway. The first is a new resource type called a route. A route describes how API Gateway should handle a particular type of client request, and includes a routeKey parameter, a value that you provide to identify the route.

A WebSocket API is composed of one or more routes. To determine which route a particular inbound request should use, you provide a route selection expression. The expression is evaluated against an inbound request to produce a value that corresponds to one of your route’s routeKey values.

There are three special routeKey values that API Gateway allows you to use for a route:

  • $default—Used when the route selection expression produces a value that does not match any of the other route keys in your API routes. This can be used, for example, to implement a generic error handling mechanism.
  • $connect—The associated route is used when a client first connects to your WebSocket API.
  • $disconnect—The associated route is used when a client disconnects from your API. This call is made on a best-effort basis.

You are not required to provide routes for any of these special routeKey values.

Build a serverless, real-time chat application

To understand how you can use the new WebSocket API feature of API Gateway, I show you how to build a real-time chat app. To simplify things, assume that there is only a single chat room in this app. Here are the capabilities that the app includes:

  • Clients join the chat room as they connect to the WebSocket API.
  • The backend can send messages to specific users via a callback URL that is provided after the user is connected to the WebSocket API.
  • Users can send messages to the room.
  • Disconnected clients are removed from the chat room.

Here’s an overview of the real-time chat application:

A serverless real-time chat application using WebSocket API on Amazon API Gateway

The application is composed of the WebSocket API in API Gateway that handles the connectivity between the client and servers (1). Two AWS Lambda functions react when clients connect (2) or disconnect (5) from the API. The sendMessage function (3) is invoked when the clients send messages to the server. The server sends the message to all connected clients (4) using the new API Gateway Management API. To track each of the connected clients, use a DynamoDB table to persist the connection identifiers.
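As a rough sketch (the real definition lives in the AWS SAM application mentioned below), the connections table can be declared in a SAM template as a simple table keyed on the connection identifier:

ConnectionsTable:
    Type: AWS::Serverless::SimpleTable
    Properties:
        PrimaryKey:
            Name: connectionId
            Type: String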

To help you see this in action more quickly, I’ve provided an AWS SAM application that includes the Lambda functions, DynamoDB table definition, and IAM roles. I placed it into the AWS Serverless Application Repository. For more information, see the simple-websockets-chat-app repo on GitHub.

Create a new WebSocket API

  1. In the API Gateway console, choose Create API, New API.
  2. Under Choose the protocol, choose WebSocket.
  3. For API name, enter My Chat API.
  4. For Route Selection Expression, enter $request.body.action.
  5. Enter a description if you’d like and click Create API.

The attribute described by the Route Selection Expression value should be present in every message that a client sends to the API. Here’s one example of a request message:

{
    "action": "sendmessage",
    "data": "Hello, I am using WebSocket APIs in API Gateway."
}

To route messages based on the message content, the WebSocket API expects JSON-formatted messages with a property serving as a routing key.

Manage routes

Now, configure your new API to respond to incoming requests. For this app, create three routes as described earlier. First, configure the sendmessage route with a Lambda integration.

  1. In the API Gateway console, under My Chat API, choose Routes.
  2. For New Route Key, enter sendmessage and confirm it.

Each route includes route-specific information such as its model schemas, as well as a reference to its target integration. With the sendmessage route created, use a Lambda function as an integration that is responsible for sending messages.

At this point, you’ve either deployed the AWS Serverless Application Repository backend or deployed it yourself using AWS SAM. In the Lambda console, look at the sendMessage function. This function receives the data sent by one of the clients, looks up all the currently connected clients, and sends the provided data to each of them. Here’s a snippet from the Lambda function:

DDB.scan(scanParams, function (err, data) {
  // some code omitted for brevity

  var apigwManagementApi = new AWS.ApiGatewayManagementApi({
    apiVersion: "2018-11-29",
    endpoint: event.requestContext.domainName + "/" + event.requestContext.stage
  });
  var postParams = {
    Data: JSON.parse(event.body).data
  };

  data.Items.forEach(function (element) {
    postParams.ConnectionId = element.connectionId.S;
    apigwManagementApi.postToConnection(postParams, function (err, data) {
      // some code omitted for brevity
    });
  });

When one of the clients sends a message with an action of “sendmessage,” this function scans the DynamoDB table to find all of the currently connected clients. For each of them, it makes a postToConnection call with the provided data.

To send messages properly, you must know the API to which your clients are connected. This means explicitly setting the SDK endpoint to point to your API.

What’s left is to properly track the clients that are connected to the chat room. For this, take advantage of two of the special routeKey values and implement routes for $connect and $disconnect.

The onConnect function inserts the connectionId value from requestContext to the DynamoDB table.

exports.handler = function(event, context, callback) {
  var putParams = {
    TableName: process.env.TABLE_NAME,
    Item: {
      connectionId: { S: event.requestContext.connectionId }
    }
  };

  DDB.putItem(putParams, function(err, data) {
    callback(null, {
      statusCode: err ? 500 : 200,
      body: err ? "Failed to connect: " + JSON.stringify(err) : "Connected"
    });
  });
};

The onDisconnect function removes the record corresponding with the specified connectionId value.

exports.handler = function(event, context, callback) {
  var deleteParams = {
    TableName: process.env.TABLE_NAME,
    Key: {
      connectionId: { S: event.requestContext.connectionId }
    }
  };

  DDB.deleteItem(deleteParams, function(err) {
    callback(null, {
      statusCode: err ? 500 : 200,
      body: err ? "Failed to disconnect: " + JSON.stringify(err) : "Disconnected."
    });
  });
};

As I mentioned earlier, the onDisconnect function may not be called. To keep from retrying connections that are never going to be available again, add some additional logic in case the postToConnection call does not succeed:

if (err.statusCode === 410) {
  console.log("Found stale connection, deleting " + postParams.ConnectionId);
  // Pass a callback so that the delete request is actually sent
  DDB.deleteItem({ TableName: process.env.TABLE_NAME,
                   Key: { connectionId: { S: postParams.ConnectionId } } },
                 function (err) {
                   if (err) console.log("Failed to delete: " + JSON.stringify(err));
                 });
} else {
  console.log("Failed to post. Error: " + JSON.stringify(err));
}

API Gateway returns a status of 410 GONE when the connection is no longer available. If this happens, delete the identifier from the DynamoDB table.

To make calls to your connected clients, your application needs a new permission: “execute-api:ManageConnections”. This is handled for you in the AWS SAM template.
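If you were writing the template yourself, one way to grant that permission is an inline policy statement on the sendMessage function; the following is only a sketch, and the resource pattern is deliberately broad:

Policies:
  - Statement:
      - Effect: Allow
        Action:
          - execute-api:ManageConnections
        Resource:
          - !Sub arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:*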

Deploy the WebSocket API

The next step is to deploy the WebSocket API. Because it’s the first time that you’re deploying the API, create a stage, such as “dev,” and give a sample description.

The Stage editor screen displays all the information for managing your WebSocket API, such as Settings, Logs/Tracing, Stage Variables, and Deployment History. Also, make sure that you check your limits and read more about API Gateway throttling.

Make sure that you monitor your tests using the dashboard available for your newly created WebSocket API.

Test the chat API

To test the WebSocket API, you can use wscat, an open-source, command line tool.

  1. Install NPM.
  2. Install wscat:
    $ npm install -g wscat
  3. On the console, connect to your published API endpoint by executing the following command:
    $ wscat -c wss://{YOUR-API-ID}.execute-api.{YOUR-REGION}.amazonaws.com/{STAGE}
  4. To test the sendMessage function, send a JSON message like the following example. The Lambda function sends it back using the callback URL:
    $ wscat -c wss://{YOUR-API-ID}.execute-api.{YOUR-REGION}.amazonaws.com/dev
    connected (press CTRL+C to quit)
    > {"action":"sendmessage", "data":"hello world"}
    < hello world

If you run wscat in multiple consoles, each will receive the “hello world”.

You’re now ready to send messages and get responses back and forth from your WebSocket API!

Conclusion

In this post, I showed you how to use WebSocket APIs in API Gateway, including a message exchange between client and server using routes and the route selection expression. You used Lambda and DynamoDB to create a serverless real-time chat application. While this post only scratched the surface of what is possible, you did all of this without needing to manage any servers at all.

You can get started today by creating your own WebSocket APIs in API Gateway. To learn more, see the Amazon API Gateway Developer Guide. You can also watch the Building Real-Time Applications using WebSocket APIs Supported by Amazon API Gateway webinar hosted by George Mao.

I’m excited to see what new applications come up with your use of the new WebSocket API. I welcome any feedback here, in the AWS forums, or on Twitter.

Happy coding!

Announcing nested applications for AWS SAM and the AWS Serverless Application Repository

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/announcing-nested-applications-for-aws-sam-and-the-aws-serverless-application-repository/

Serverless application architectures enable you to break large projects into smaller, more manageable services that are highly reusable and independently scalable, secured, and evolved over time. As serverless architectures grow, we have seen common patterns that get reimplemented across companies, teams, and projects, hurting development velocity and leading to wasted effort. We have made it easier to develop new serverless architectures that meet organizational best practices by supporting nested applications in AWS SAM and the AWS Serverless Application Repository.

How it works

Nested applications build off a concept in AWS CloudFormation called nested stacks. With nested applications, serverless applications are deployed as stacks, or collections of resources, that contain one or more other serverless application stacks. You can reference resources created in these nested templates from the parent stack or from other nested stacks, making these collections of resources easier to manage.

This enables you to build sophisticated serverless architectures by reusing services that are authored and maintained independently but easily composed via AWS SAM and the AWS Serverless Application Repository. These applications can be either publicly or privately available in the AWS Serverless Application Repository. It’s just as easy to create your own serverless applications that you then consume again later in other nested applications. You can access the application code, configure parameters exposed via the nested application’s template, and later manage its configuration completely.

Building a nested application

Suppose that I want to build an API powered by a serverless application architecture made up of AWS Lambda and Amazon API Gateway. I can use AWS SAM to create Lambda functions, configure API Gateway, and deploy and manage them both. To start building, I can use the sam init command.

$ sam init -r python2.7
[+] Initializing project structure...
[SUCCESS] - Read sam-app/README.md for further instructions on how to proceed
[*] Project initialization is now complete

The sam-app directory has everything that I need to start building a serverless application.

$ tree sam-app/
sam-app/
├── hello_world
│   ├── app.py
│   ├── app.pyc
│   ├── __init__.py
│   ├── __init__.pyc
│   └── requirements.txt
├── README.md
├── template.yaml
└── tests
    └── unit
        ├── __init__.py
        ├── __init__.pyc
        ├── test_handler.py
        └── test_handler.pyc

3 directories, 11 files

The README in the sam-app directory points me to using the new sam build command to install any requirements of my application.

$ cd sam-app/
$ sam build
2018-11-21 20:41:23 Building resource 'HelloWorldFunction'
2018-11-21 20:41:23 Running PythonPipBuilder:ResolveDependencies
2018-11-21 20:41:24 Running PythonPipBuilder:CopySource

Build Succeeded

Built Artifacts  : .aws-sam/build
Built Template   : .aws-sam/build/template.yaml
...

At this point, I have a fully functioning serverless application based on Lambda that I can test and debug using the AWS SAM CLI locally. For example, I can invoke my Lambda function directly, as in the following code.

$ cd .aws-sam/build/
$ sam local invoke --no-event
2018-11-21 20:43:52 Invoking app.lambda_handler (python2.7)

....trimmed output....

{"body": "{\"message\": \"hello world\", \"location\": \"34.239.158.3\"}", "statusCode": 200}

Or I can use the actual API Gateway interface for it with the sam local start-api command. I can also now use the sam package and sam deploy commands to get this application running on Lambda and begin letting my customers consume it. What I really want to do, though, is expand on this API by adding an authorization mechanism to add some security. API Gateway supports several methods for doing this, but in this case, I want to leverage a basic form of HTTP Basic Auth. Although not the best way to secure an API, this example highlights the power of nested applications.

To start, I search for an existing serverless application that meets my needs. I can access the AWS Serverless Application Repository either directly or via the AWS Lambda console. Then I search for “http basic auth,” as shown in the following image.

There is already an application for HTTP Basic Auth that someone else made.

I can review the AWS SAM template, license, and permissions created in the AWS Serverless Application Repository. Reading through the linked GitHub repository and README, I find that this application meets my needs, and I can review its code. The application enables me to store a user’s username and password in Amazon DynamoDB and use them to provide authorization to an API.

I could deploy this application directly from the console and then plug the launched Lambda function into my API Gateway configuration manually. This would work, but it doesn’t give me a way to relate the two applications more directly as a single application, which is how I think of them. If I remove the authorizer application, my API will break. If I decide to shut down my API, I might forget about the authorizer. Logically thinking about them as a single application is difficult. This is where nested applications come in.

The launch of nested applications comes with a new AWS SAM resource, AWS::Serverless::Application, which you can use to refer to serverless applications that you want to nest inside another application. The specification for this resource is straightforward and at a minimum resembles the following template.

application-alias-name:
  Type: AWS::Serverless::Application
  Properties:
    Location:
    Parameters:

For the full resource specification, see AWS::Serverless::Application on the GitHub website.

Currently, the location can be one of two places. It can be in the AWS Serverless Application Repository.

Location:
  ApplicationId: arn:aws:serverlessrepo:region:account-id:applications/application-name
  SemanticVersion: 1.0.0

Or it can be in Amazon S3.

Location: https://s3.region.amazonaws.com/bucket-name/sam-template-object

The sam-template-object is the name of a packaged AWS SAM template.

Parameters holds the parameters that the nested application requires at launch, as defined by its template. You can discover them via the template that the nested application references or in the console under Application Settings. For this Basic HTTP Auth application, there are no parameters, so there is nothing for me to do here.

We have included a feature to help you figure out what you need in your template. First, navigate to the Review, configure and deploy page by choosing Deploy from the Application details page, or when you choose an application via the Lambda console’s view into the app repository. Then choose Copy as SAM Resource.

This button copies to your clipboard exactly what you need to start nesting this application. For this application, choosing Copy as SAM Resource resulted in the following template.

lambdaauthorizerbasicauth:
  Type: AWS::Serverless::Application
  Properties:
    Location:
      ApplicationId: arn:aws:serverlessrepo:us-east-1:560348900601:applications/lambda-authorizer-basic-auth
      SemanticVersion: 0.2.0

Note: The name of this resource defaults to lambdaauthorizerbasicauth, but best practice is to give it a name that is more descriptive for your use case.

If I stopped here and launched the application, this second application’s stack would launch as a nested application off my application. Before I do that, I want to use the function that this application creates as the authorizer for the API Gateway endpoint created in my existing AWS SAM template. To reference this Lambda function, I can check the template for this application to see what (if any) outputs are created.

I can get the Amazon Resource Name (ARN) from this function by referencing the output named LambdaAuthorizerBasicAuthFunction directly as an attribute of the nested application.

!GetAtt lambdaauthorizerbasicauth.Outputs.LambdaAuthorizerBasicAuthFunction

To configure the authorizer for my existing function, I create an AWS::Serverless::Api resource and then configure the authorizer with the FunctionArn attribute set to this value. The following template shows the entire syntax.

MyApi:
  Type: AWS::Serverless::Api
  Properties:
    StageName: Prod      
    Auth:
      DefaultAuthorizer: MyLambdaRequestAuthorizer
      Authorizers:
        MyLambdaRequestAuthorizer:
          FunctionPayloadType: REQUEST
          FunctionArn: !GetAtt lambdaauthorizerbasicauth.Outputs.LambdaAuthorizerBasicAuthFunction
          Identity:
            Headers:
              - Authorization

This creates a new API Gateway stage, configures the authorizer to point to my nested stack’s Lambda function, and specifies what information is required from clients of the API. In my actual function definition, I refer to the new API stage definition and the authorizer. The last three lines of the following template are new.

Events:
   HelloWorld:
      Type: Api
      Properties:
        Method: get
        Path: /hello
        RestApiId: !Ref MyApi
        Auth:
          Authorizers: MyLambdaRequestAuthorizer

Finally, because of the code created by the sam init command that I ran earlier, I need to update the HelloWorldApi output to use my new API resource.

Outputs:
  HelloWorldApi:
    Description: API Gateway endpoint URL for Prod stage for Hello World function
    Value:
      Fn::Sub: https://${MyApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/

At this point, my combined template file is 70 lines of YAML that define the following:

  • My Lambda function
  • The API Gateway endpoint configuration that I’ll use to interface with my business logic
  • The nested application that represents my authorizer
  • Other bits such as the outputs, globals, and parameters of my template

I can use the sam package and sam deploy commands to launch this whole stack of resources.

sam package --template-file template.yaml --output-template-file packaged-template.yaml --s3-bucket my-bucket

Successfully packaged artifacts and wrote output template to file packaged-template.yaml.
Execute the following command to deploy the packaged template
aws cloudformation deploy --template-file packaged-template.yaml --stack-name <YOUR STACK NAME>

To deploy applications that have nested applications that come from the app repository, I use a new capability in AWS CloudFormation called auto-expand. I pass it in by adding CAPABILITY_AUTO_EXPAND to the --capabilities flag of the deploy command.

sam deploy --template-file packaged-template.yaml --stack-name SimpleAuthExample --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - SimpleAuthExample

With the successful creation of my application stack, I can find what I created. The stack can be in one of two places: the AWS CloudFormation console or the Lambda console’s Application view. Because I launched this application via AWS SAM, I can use the Application view to manage serverless applications. That page displays two application stacks named SimpleAuthExample.

The second application stack has the appended name of the nested application that I launched. Because it was also a serverless application launched with AWS SAM, the Application view enables me to manage them independently if I want. Selecting the SimpleAuthExample application shows me its details page, where I can get more information, including the various launched resources of this application.

There are four top-level resources or resource groupings:

  • My Lambda function, which expands to show the related permissions and role for the function
  • The nested application stack for my authorizer
  • The API Gateway resource and related deployment and stage
  • The Lambda permission that AWS SAM created to allow API Gateway to use my nested application’s Lambda function as an authorizer

Selecting the logical ID of my nested application opens the AWS CloudFormation console, where I note its relation to the root stack.

Testing the authorizer

With my application and authorizer set up, I want to confirm its functionality. The Application view for the authorizer application shows the DynamoDB table that stores my user credentials.

Selecting the logical ID of the table opens the DynamoDB console. In the Item view, I can create a user and its password.
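
If I prefer to script this step instead of using the console, a quick sketch with the AWS SDK for Python (boto3) could look like the following. The table and attribute names below are assumptions for illustration only; the actual table name and key schema come from the nested authorizer application’s template and README.

import boto3

# Hypothetical table and attribute names -- confirm them against the nested
# authorizer application's template and README before running this.
table = boto3.resource("dynamodb").Table("lambda-authorizer-users")
table.put_item(Item={"username": "foo", "password": "bar"})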

Back in the Application view for my main application, expanding the API Gateway resource displays the physical ID link to the API Gateway stage for this application. It represents the API endpoint that I would point my clients at.

I can copy that destination URL to my clipboard, and with a tool such as curl, I can test my application.

curl -u foo:bar https://dw0wr24jwg.execute-api.us-east-1.amazonaws.com/Prod/hello
{"message": "hello world", "location": "34.239.119.48"}

To confirm that my authorizer is working, I test with bad credentials.

curl -u bar:foo https://dw0wr24jwg.execute-api.us-east-1.amazonaws.com/Prod/hello
{"Message":"User is not authorized to access this resource with an explicit deny"}

Everything works!

To delete this entire setup, I delete the main stack of this application, which deletes all of its resources and the nested application as well.
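
If I prefer to script the cleanup, a minimal sketch with the AWS SDK for Python (boto3) would be:

import boto3

# Deleting the root stack also removes the nested authorizer application's stack.
boto3.client("cloudformation").delete_stack(StackName="SimpleAuthExample")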

Conclusion

With the growth of serverless applications, we’re finding that developers are reusing and sharing common patterns, both publicly and privately. This led to the release of the AWS Serverless Application Repository in early 2018. Now nested applications bring the additional benefit of simplifying the consumption of these reusable application components alongside your own applications. This blog post demonstrates how to find and discover applications in the AWS Serverless Application Repository and nest them in your own applications. We have shown how you can modify an AWS SAM template to refer to these other components, launch your new nested applications, and get their improved management and organizational capabilities.

We’re excited to see what this enables you to do, and we welcome any feedback here, in the AWS forums, and on Twitter.

Happy coding!

Learn about New AWS re:Invent Launches – December AWS Online Tech Talks

Post Syndicated from Robin Park original https://aws.amazon.com/blogs/aws/learn-about-new-aws-reinvent-launches-december-aws-online-tech-talks/

AWS Tech Talks

Join us in the next couple of weeks to learn about some of the new service and feature launches from re:Invent 2018. Learn about features and benefits, watch live demos, and ask questions! We’ll have AWS experts online to answer any questions you may have. Register today!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Compute

December 19, 2018 | 01:00 PM – 02:00 PM PT – Developing Deep Learning Models for Computer Vision with Amazon EC2 P3 Instances – Learn about the different steps required to build, train, and deploy a machine learning model for computer vision.

Containers

December 11, 2018 | 01:00 PM – 02:00 PM PT – Introduction to AWS App Mesh – Learn about using AWS App Mesh to monitor and control microservices on AWS.

Data Lakes & Analytics

December 10, 2018 | 11:00 AM – 12:00 PM PT – Introduction to AWS Lake Formation – Build a Secure Data Lake in Days – AWS Lake Formation (coming soon) will make it easy to set up a secure data lake in days. With AWS Lake Formation, you will be able to ingest, catalog, clean, transform, and secure your data, and make it available for analysis and machine learning.

December 12, 2018 | 11:00 AM – 12:00 PM PT – Introduction to Amazon Managed Streaming for Kafka (MSK) – Learn about features and benefits, use cases and how to get started with Amazon MSK.

Databases

December 10, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon RDS on VMware – Learn how Amazon RDS on VMware can be used to automate on-premises database administration, enable hybrid cloud backups and read scaling for on-premises databases, and simplify database migration to AWS.

December 13, 2018 | 09:00 AM – 10:00 AM PT – Serverless Databases with Amazon Aurora and Amazon DynamoDB – Learn about the new serverless features and benefits in Amazon Aurora and DynamoDB, use cases and how to get started.

Enterprise & Hybrid

December 19, 2018 | 11:00 AM – 12:00 PM PT – How to Use “Minimum Viable Refactoring” to Achieve Post-Migration Operational Excellence – Learn how to improve the security and compliance of your applications in two weeks with “minimum viable refactoring”.

IoT

December 17, 2018 | 11:00 AM – 12:00 PM PT – Introduction to New AWS IoT Services – Dive deep into the AWS IoT service announcements from re:Invent 2018, including AWS IoT Things Graph, AWS IoT Events, and AWS IoT SiteWise.

Machine Learning

December 10, 2018 | 09:00 AM – 10:00 AM PT – Introducing Amazon SageMaker Ground Truth – Learn how to build highly accurate training datasets with machine learning and reduce data labeling costs by up to 70%.

December 11, 2018 | 09:00 AM – 10:00 AM PT – Introduction to AWS DeepRacer – AWS DeepRacer is the fastest way to get rolling with machine learning, literally. Get hands-on with a fully autonomous 1/18th scale race car driven by reinforcement learning, 3D racing simulator, and a global racing league.

December 12, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon Forecast and Amazon Personalize – Learn about Amazon Forecast and Amazon Personalize – what are the key features and benefits of these managed ML services, common use cases and how you can get started.

December 13, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon Textract: Now in Preview – Learn how Amazon Textract, now in preview, enables companies to easily extract text and data from virtually any document.

Networking

December 17, 2018 | 01:00 PM – 02:00 PM PT – Introduction to AWS Transit Gateway – Learn how AWS Transit Gateway significantly simplifies management and reduces operational costs with a hub and spoke architecture.

Robotics

December 18, 2018 | 11:00 AM – 12:00 PM PT – Introduction to AWS RoboMaker, a New Cloud Robotics Service – Learn about AWS RoboMaker, a service that makes it easy to develop, test, and deploy intelligent robotics applications at scale.

Security, Identity & Compliance

December 17, 2018 | 09:00 AM – 10:00 AM PT – Introduction to AWS Security Hub – Learn about AWS Security Hub, and how it gives you a comprehensive view of high-priority security alerts and your compliance status across AWS accounts.

Serverless

December 11, 2018 | 11:00 AM – 12:00 PM PT – What’s New with Serverless at AWS – In this tech talk, we’ll catch you up on our ever-growing collection of natively supported languages, console updates, and re:Invent launches.

December 13, 2018 | 11:00 AM – 12:00 PM PT – Building Real Time Applications using WebSocket APIs Supported by Amazon API Gateway – Learn how to build, deploy and manage APIs with API Gateway.

Storage

December 12, 2018 | 09:00 AM – 10:00 AM PT – Introduction to Amazon FSx for Windows File Server – Learn about Amazon FSx for Windows File Server, a new fully managed native Windows file system that makes it easy to move Windows-based applications that require file storage to AWS.

December 14, 2018 | 01:00 PM – 02:00 PM PT – What’s New with AWS Storage – A Recap of re:Invent 2018 Announcements – Learn about the key AWS storage announcements that occurred prior to and at re:Invent 2018. With 15+ new service, feature, and device launches in object, file, block, and data transfer storage services, you will be able to start designing the foundation of your cloud IT environment for any application and easily migrate data to AWS.

December 18, 2018 | 09:00 AM – 10:00 AM PT – Introduction to Amazon FSx for Lustre – Learn about Amazon FSx for Lustre, a fully managed file system for compute-intensive workloads. Process files from S3 or data stores, with throughput up to hundreds of GBps and sub-millisecond latencies.

December 18, 2018 | 01:00 PM – 02:00 PM PT – Introduction to New AWS Services for Data Transfer – Learn about new AWS data transfer services, and which might best fit your requirements for data migration or ongoing hybrid workloads.

Introducing the C++ Lambda Runtime

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/introducing-the-c-lambda-runtime/

This post is courtesy of Marco Magdy, AWS Software Development Engineer – AWS SDKs and Tools

Today, AWS Lambda announced the availability of the Runtime API. The Runtime API allows you to write your Lambda functions in any language, provided that you bundle it with your application artifact or as a Lambda layer that your application uses.

As an example of using this API and based on customer demand, AWS is releasing a reference implementation of a C++ runtime for Lambda. This C++ runtime brings the simplicity and expressiveness of interpreted languages while retaining the performance and low memory footprint of C++. These benefits align well with the event-driven, function-based development model of Lambda applications.

Hello World

Start by writing a Hello World Lambda function in C++ using this runtime.

Prerequisites

You need a Linux-based environment (I recommend Amazon Linux), with the following packages installed:

  • A C++11 compiler, either GCC 5.x or later or Clang 3.3 or later. On Amazon Linux, run the following commands:
    $ yum install gcc64-c++ libcurl-devel
    $ export CC=gcc64
    $ export CXX=g++64
  • CMake v.3.5 or later. On Amazon Linux, run the following command:
    $ yum install cmake3
  • Git

Download and compile the runtime

The first step is to download and compile the runtime:

$ cd ~ 
$ git clone https://github.com/awslabs/aws-lambda-cpp.git
$ cd aws-lambda-cpp
$ mkdir build
$ cd build
$ cmake3 .. -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=OFF \
   -DCMAKE_INSTALL_PREFIX=~/out
$ make && make install

This builds and installs the runtime as a static library under the directory ~/out.

Create your C++ function

The next step is to build the Lambda C++ function.

  1. Create a new directory for this project:
    $ mkdir hello-cpp-world
    $ cd hello-cpp-world
  2. In that directory, create a file named main.cpp with the following content:
    // main.cpp
    #include <aws/lambda-runtime/runtime.h>
    
    using namespace aws::lambda_runtime;
    
    invocation_response my_handler(invocation_request const& request)
    {
       return invocation_response::success("Hello, World!", "application/json");
    }
    
    int main()
    {
       run_handler(my_handler);
       return 0;
    }
  3. Create a file named CMakeLists.txt in the same directory, with the following content:
    cmake_minimum_required(VERSION 3.5)
    set(CMAKE_CXX_STANDARD 11)
    project(hello LANGUAGES CXX)
    
    find_package(aws-lambda-runtime REQUIRED)
    add_executable(${PROJECT_NAME} "main.cpp")
    target_link_libraries(${PROJECT_NAME} PUBLIC AWS::aws-lambda-runtime)
    aws_lambda_package_target(${PROJECT_NAME})
  4. To build this executable, create a build directory and run CMake from there:
    $ mkdir build
    $ cd build
    $ cmake3 .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=~/out
    $ make

    This compiles and links the executable in release mode.

  5. To package this executable along with all its dependencies, run the following command:
    $ make aws-lambda-package-hello

    This creates a zip file in the same directory named after your project, in this case hello.zip.

Create the Lambda function

Using the AWS CLI, you create the Lambda function. First, create a role for the Lambda function to execute under.

  1. Create the following JSON file for the trust policy and name it trust-policy.json.
    {
     "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": ["lambda.amazonaws.com"]
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
  2. Using the AWS CLI, run the following command:
    $ aws iam create-role \
    --role-name lambda-cpp-demo \
    --assume-role-policy-document file://trust-policy.json

    This should output JSON that contains the newly created IAM role information. Make sure to note down the "Arn" value from that JSON. You need it later. The Arn looks like the following:

    "Arn": "arn:aws:iam::<account_id>:role/lambda-cpp-demo"

  3. Create the Lambda function:
    $ aws lambda create-function \
    --function-name hello-world \
    --role <specify the role arn from the previous step> \
    --runtime provided \
    --timeout 15 \
    --memory-size 128 \
    --handler hello \
    --zip-file fileb://hello.zip
  4. Invoke the function using the AWS CLI:

    $ aws lambda invoke --function-name hello-world --payload '{ }' output.txt

    You should see the following output:

    {
      "StatusCode": 200
    }

    A file named output.txt containing the words “Hello, World!” should be in the current directory.

Beyond Hello

OK, well that was exciting, but how about doing something slightly more interesting?

The following example shows you how to download a file from Amazon S3 and do some basic processing of its contents. To interact with AWS, you need the AWS SDK for C++.

Prerequisites

If you don’t have them already, install the following libraries:

  • zlib-devel
  • openssl-devel
  1. Build the AWS SDK for C++:
    $ cd ~
    $ git clone https://github.com/aws/aws-sdk-cpp.git
    $ cd aws-sdk-cpp
    $ mkdir build
    $ cd build
    $ cmake3 .. -DBUILD_ONLY=s3 \
     -DBUILD_SHARED_LIBS=OFF \
     -DENABLE_UNITY_BUILD=ON \
     -DCMAKE_BUILD_TYPE=Release \
     -DCMAKE_INSTALL_PREFIX=~/out
    
    $ make && make install

    This builds the S3 SDK as a static library and installs it in ~/out.

  2. Create a directory for the new application’s logic:
    $ cd ~
    $ mkdir cpp-encoder-example
    $ cd cpp-encoder-example
  3. Now, create the following main.cpp:
    // main.cpp
    #include <aws/core/Aws.h>
    #include <aws/core/utils/logging/LogLevel.h>
    #include <aws/core/utils/logging/ConsoleLogSystem.h>
    #include <aws/core/utils/logging/LogMacros.h>
    #include <aws/core/utils/json/JsonSerializer.h>
    #include <aws/core/utils/HashingUtils.h>
    #include <aws/core/platform/Environment.h>
    #include <aws/core/client/ClientConfiguration.h>
    #include <aws/core/auth/AWSCredentialsProvider.h>
    #include <aws/s3/S3Client.h>
    #include <aws/s3/model/GetObjectRequest.h>
    #include <aws/lambda-runtime/runtime.h>
    #include <iostream>
    #include <memory>
    
    using namespace aws::lambda_runtime;
    
    std::string download_and_encode_file(
        Aws::S3::S3Client const& client,
        Aws::String const& bucket,
        Aws::String const& key,
        Aws::String& encoded_output);
    
    std::string encode(Aws::IOStream& stream, Aws::String& output);
    char const TAG[] = "LAMBDA_ALLOC";
    
    static invocation_response my_handler(invocation_request const& req, Aws::S3::S3Client const& client)
    {
        using namespace Aws::Utils::Json;
        JsonValue json(req.payload);
        if (!json.WasParseSuccessful()) {
            return invocation_response::failure("Failed to parse input JSON", "InvalidJSON");
        }
    
        auto v = json.View();
    
        if (!v.ValueExists("s3bucket") || !v.ValueExists("s3key") || !v.GetObject("s3bucket").IsString() ||
            !v.GetObject("s3key").IsString()) {
            return invocation_response::failure("Missing input value s3bucket or s3key", "InvalidJSON");
        }
    
        auto bucket = v.GetString("s3bucket");
        auto key = v.GetString("s3key");
    
        AWS_LOGSTREAM_INFO(TAG, "Attempting to download file from s3://" << bucket << "/" << key);
    
        Aws::String base64_encoded_file;
        auto err = download_and_encode_file(client, bucket, key, base64_encoded_file);
        if (!err.empty()) {
            return invocation_response::failure(err, "DownloadFailure");
        }
    
        return invocation_response::success(base64_encoded_file, "application/base64");
    }
    
    std::function<std::shared_ptr<Aws::Utils::Logging::LogSystemInterface>()> GetConsoleLoggerFactory()
    {
        return [] {
            return Aws::MakeShared<Aws::Utils::Logging::ConsoleLogSystem>(
                "console_logger", Aws::Utils::Logging::LogLevel::Trace);
        };
    }
    
    int main()
    {
        using namespace Aws;
        SDKOptions options;
        options.loggingOptions.logLevel = Aws::Utils::Logging::LogLevel::Trace;
        options.loggingOptions.logger_create_fn = GetConsoleLoggerFactory();
        InitAPI(options);
        {
            Client::ClientConfiguration config;
            config.region = Aws::Environment::GetEnv("AWS_REGION");
            config.caFile = "/etc/pki/tls/certs/ca-bundle.crt";
    
            auto credentialsProvider = Aws::MakeShared<Aws::Auth::EnvironmentAWSCredentialsProvider>(TAG);
            S3::S3Client client(credentialsProvider, config);
            auto handler_fn = [&client](aws::lambda_runtime::invocation_request const& req) {
                return my_handler(req, client);
            };
            run_handler(handler_fn);
        }
        ShutdownAPI(options);
        return 0;
    }
    
    std::string encode(Aws::IOStream& stream, Aws::String& output)
    {
        Aws::Vector<unsigned char> bits;
        bits.reserve(stream.tellp());
        stream.seekg(0, stream.beg);
    
        char streamBuffer[1024 * 4];
        while (stream.good()) {
            stream.read(streamBuffer, sizeof(streamBuffer));
            auto bytesRead = stream.gcount();
    
            if (bytesRead > 0) {
                bits.insert(bits.end(), (unsigned char*)streamBuffer, (unsigned char*)streamBuffer + bytesRead);
            }
        }
        Aws::Utils::ByteBuffer bb(bits.data(), bits.size());
        output = Aws::Utils::HashingUtils::Base64Encode(bb);
        return {};
    }
    
    std::string download_and_encode_file(
        Aws::S3::S3Client const& client,
        Aws::String const& bucket,
        Aws::String const& key,
        Aws::String& encoded_output)
    {
        using namespace Aws;
    
        S3::Model::GetObjectRequest request;
        request.WithBucket(bucket).WithKey(key);
    
        auto outcome = client.GetObject(request);
        if (outcome.IsSuccess()) {
            AWS_LOGSTREAM_INFO(TAG, "Download completed!");
            auto& s = outcome.GetResult().GetBody();
            return encode(s, encoded_output);
        }
        else {
            AWS_LOGSTREAM_ERROR(TAG, "Failed with error: " << outcome.GetError());
            return outcome.GetError().GetMessage();
        }
    }

    This Lambda function expects an input payload to contain an S3 bucket and S3 key. It then downloads that resource from S3, encodes it as base64, and sends it back as the response of the Lambda function. This can be useful to display an image in a webpage, for example.

  4. Next, create the following CMakeLists.txt file in the same directory.
    cmake_minimum_required(VERSION 3.5)
    set(CMAKE_CXX_STANDARD 11)
    project(encoder LANGUAGES CXX)
    
    find_package(aws-lambda-runtime REQUIRED)
    find_package(AWSSDK COMPONENTS s3)
    
    add_executable(${PROJECT_NAME} "main.cpp")
    target_link_libraries(${PROJECT_NAME} PUBLIC
                          AWS::aws-lambda-runtime
                           ${AWSSDK_LINK_LIBRARIES})
    
    aws_lambda_package_target(${PROJECT_NAME})
  5. Follow the same build steps as before:
    $ mkdir build
    $ cd build
    $ cmake3 .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=~/out
    $ make
    $ make aws-lambda-package-encoder

    Notice how the target name for packaging has changed to aws-lambda-package-encoder. The CMake function aws_lambda_package_target() always creates a target based on its input name.

    You should now have a file named “encoder.zip” in your build directory.

  6. Before you create the Lambda function, modify the IAM role that you created earlier to allow it to access S3.
    $ aws iam attach-role-policy \
    --role-name lambda-cpp-demo \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
  7. Using the AWS CLI, create the new Lambda function:
    $ aws lambda create-function \
    --function-name encode-file \
    --role <specify the same role arn used in the prior Lambda> \
    --runtime provided \
    --timeout 15 \
    --memory-size 128 \
    --handler encoder \
    --zip-file fileb://encoder.zip
  8. Using the AWS CLI, run the function. Make sure to use an S3 bucket in the same Region as the Lambda function:
    $ aws lambda invoke --function-name encode-file --payload '{"s3bucket": "your_bucket_name", "s3key":"your_file_key" }' base64_image.txt

    You can use an online base64 image decoder and paste the contents of the output file to verify that everything is working. In a real-world scenario, you would inject the output of this Lambda function in an HTML img tag, for example.

Conclusion

With the new Lambda Runtime API, a new door of possibilities is open. This C++ runtime enables you to do more with Lambda than you ever could have before.

More in-depth details, along with examples, can be found on the GitHub repository. With it, you can start writing Lambda functions with C++ today. AWS will continue evolving the contents of this repository with additional enhancements and samples. I’m so excited to see what you build using this runtime. I appreciate feedback sent via issues in GitHub.

Happy hacking!

New for AWS Lambda – Use Any Programming Language and Share Common Components

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-lambda-use-any-programming-language-and-share-common-components/

I remember the excitement when AWS Lambda was announced in 2014. Four years on, customers are using Lambda functions for many different use cases. For example, iRobot is using AWS Lambda to provide compute services for their Roomba robotic vacuum cleaners, Fannie Mae to run Monte Carlo simulations for millions of mortgages, and Bustle to serve billions of requests for their digital content.

Today, we are introducing two new features that are going to make serverless development even easier:

  • Lambda Layers, a way to centrally manage code and data that is shared across multiple functions.
  • Lambda Runtime API, a simple interface to use any programming language, or a specific language version, for developing your functions.

These two features can be used together: runtimes can be shared as layers so that developers can pick them up and use their favorite programming language when authoring Lambda functions.

Let’s see how they work more in detail.

Lambda Layers

When building serverless applications, it is quite common to have code that is shared across Lambda functions. It can be your own custom code that is used by more than one function, or a standard library that you add to simplify the implementation of your business logic.

Previously, you would have to package and deploy this shared code together with all the functions using it. Now, you can put common components in a ZIP file and upload it as a Lambda Layer. Your function code doesn’t need to be changed and can reference the libraries in the layer as it would normally do.
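
As a sketch of how this could be scripted rather than done in the console, the following uses the AWS SDK for Python (boto3) to publish a ZIP file as a layer version and attach it to an existing function. The file path, layer name, and function name are placeholders, not values from this post.

import boto3

lambda_client = boto3.client("lambda")

# Publish the ZIP file as a new layer version (names and paths are placeholders).
with open("common-libs.zip", "rb") as f:
    layer = lambda_client.publish_layer_version(
        LayerName="my-common-libs",
        Description="Shared libraries used by several functions",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.7"],
    )

# Reference the new layer version from an existing function.
lambda_client.update_function_configuration(
    FunctionName="my-existing-function",
    Layers=[layer["LayerVersionArn"]],
)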

Layers can be versioned to manage updates; each version is immutable. When a version is deleted or permissions to use it are revoked, functions that used it previously will continue to work, but you won’t be able to create new ones.

In the configuration of a function, you can reference up to five layers, one of which can optionally be a runtime. When the function is invoked, layers are installed in /opt in the order you provided. Order is important because layers are all extracted under the same path, so each layer can potentially overwrite the previous one. This approach can be used to customize the environment. For example, the first layer can be a runtime and the second layer adds specific versions of the libraries you need.
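
One quick way to see this behavior is a small diagnostic handler, sketched here in Python, that lists what the configured layers placed under /opt:

import os
import sys

def lambda_handler(event, context):
    # Layers are extracted under /opt in the configured order; for Python functions,
    # /opt/python is added to the import path automatically.
    opt_contents = sorted(os.listdir("/opt")) if os.path.isdir("/opt") else []
    return {"opt_contents": opt_contents, "sys_path": sys.path}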

The overall, uncompressed size of function and layers is subject to the usual unzipped deployment package size limit.

Layers can be used within an AWS account, shared between accounts, or shared publicly with the broad developer community.

There are many advantages when using layers. For example, you can use Lambda Layers to:

  • Enforce separation of concerns, between dependencies and your custom business logic.
  • Make your function code smaller and more focused on what you want to build.
  • Speed up deployments, because less code must be packaged and uploaded, and dependencies can be reused.

Based on our customer feedback, and to provide an example of how to use Lambda Layers, we are publishing a public layer which includes NumPy and SciPy, two popular scientific libraries for Python. This prebuilt and optimized layer can help you start very quickly with data processing and machine learning applications.

In addition to that, you can find layers for application monitoring, security, and management from partners such as Datadog, Epsagon, IOpipe, NodeSource, Thundra, Protego, PureSec, Twistlock, Serverless, and Stackery.

Using Lambda Layers

In the Lambda console I can now manage my own layers:

I don’t want to create a new layer now but use an existing one in a function. I create a new Python function and, in the function configuration, I can see that there are no referenced layers. I choose to add a layer:

From the list of layers compatible with the runtime of my function, I select the one with NumPy and SciPy, using the latest available version:

After I add the layer, I click Save to update the function configuration. If you’re using more than one layer, you can adjust the order in which they are merged with the function code here.

To use the layer in my function, I just have to import the features I need from NumPy and SciPy:

import numpy as np
from scipy.spatial import ConvexHull

def lambda_handler(event, context):

    print("\nUsing NumPy\n")

    print("random matrix_a =")
    matrix_a = np.random.randint(10, size=(4, 4))
    print(matrix_a)

    print("random matrix_b =")
    matrix_b = np.random.randint(10, size=(4, 4))
    print(matrix_b)

    print("matrix_a * matrix_b = ")
    print(matrix_a.dot(matrix_b))

    print("\nUsing SciPy\n")

    num_points = 10
    print(num_points, "random points:")
    points = np.random.rand(num_points, 2)
    for i, point in enumerate(points):
        print(i, '->', point)

    hull = ConvexHull(points)
    print("The smallest convex set containing all",
        num_points, "points has", len(hull.simplices),
        "sides,\nconnecting points:")
    for simplex in hull.simplices:
        print(simplex[0], '<->', simplex[1])

I run the function, and looking at the logs, I can see some interesting results.

First, I am using NumPy to perform matrix multiplication (matrices and vectors are often used to represent the inputs, outputs, and weights of neural networks):

random matrix_a =
[[8 4 3 8]
[1 7 3 0]
[2 5 9 3]
[6 6 8 9]]
random matrix_b =
[[2 4 7 7]
[7 0 0 6]
[5 0 1 0]
[4 9 8 6]]
matrix_a * matrix_b = 
[[ 91 104 123 128]
[ 66 4 10 49]
[ 96 35 47 62]
[130 105 122 132]]

Then, I use SciPy advanced spatial algorithms to compute something quite hard to build by myself: finding the smallest “convex set” containing a list of points on a plane. For example, this can be used in a Lambda function receiving events from multiple geographic locations (corresponding to buildings, customer locations, or devices) to visually “group” similar events together in an efficient way:

10 random points:
0 -> [0.07854072 0.91912467]
1 -> [0.11845307 0.20851106]
2 -> [0.3774705 0.62954561]
3 -> [0.09845837 0.74598477]
4 -> [0.32892855 0.4151341 ]
5 -> [0.00170082 0.44584693]
6 -> [0.34196204 0.3541194 ]
7 -> [0.84802508 0.98776034]
8 -> [0.7234202 0.81249389]
9 -> [0.52648981 0.8835746 ]
The smallest convex set containing all 10 points has 6 sides,
connecting points:
1 <-> 5
0 <-> 5
0 <-> 7
6 <-> 1
8 <-> 7
8 <-> 6

When I was building this example, there was no need to install or package dependencies. I could quickly iterate on the code of the function. Deployments were very fast because I didn’t have to include large libraries or modules.

To visualize the output of SciPy, it was easy for me to create an additional layer to import matplotlib, a plotting library. By adding a few lines of code at the end of the previous function, I can now upload an image to Amazon Simple Storage Service (S3) that shows how the “convex set” wraps all the points:

    # Assumes module-level additions not shown here: import io, boto3, and
    # matplotlib.pyplot as plt (with a non-interactive backend such as 'Agg'),
    # plus S3_BUCKET_NAME and S3_KEY constants for the destination object.
    plt.plot(points[:,0], points[:,1], 'o')
    for simplex in hull.simplices:
        plt.plot(points[simplex, 0], points[simplex, 1], 'k-')
        
    img_data = io.BytesIO()
    plt.savefig(img_data, format='png')
    img_data.seek(0)

    s3 = boto3.resource('s3')
    bucket = s3.Bucket(S3_BUCKET_NAME)
    bucket.put_object(Body=img_data, ContentType='image/png', Key=S3_KEY)
    
    plt.close()

Lambda Runtime API

You can now select a custom runtime when creating or updating a function:

With this selection, the function must include (in its code or in a layer) an executable file called bootstrap, responsible for the communication between your code (that can use any programming language) and the Lambda environment.

The runtime bootstrap uses a simple HTTP based interface to get the event payload for a new invocation and return back the response from the function. Information on the interface endpoint and the function handler are shared as environment variables.
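
To make that exchange concrete, here is a minimal sketch of the loop, written in Python purely to illustrate the HTTP calls against the endpoint in the AWS_LAMBDA_RUNTIME_API environment variable. A real bootstrap would dispatch to the handler named in the _HANDLER variable and report errors back to the API as well.

# bootstrap sketch -- illustrates the Runtime API HTTP loop
import os
import urllib.request

API = "http://{}/2018-06-01/runtime".format(os.environ["AWS_LAMBDA_RUNTIME_API"])

def handle(event_body):
    # Placeholder for dispatching to the handler named in the _HANDLER variable.
    return '{"message": "hello from a custom runtime"}'

while True:
    # Block until the next invocation is available and read its payload.
    with urllib.request.urlopen(API + "/invocation/next") as invocation:
        request_id = invocation.headers["Lambda-Runtime-Aws-Request-Id"]
        event_body = invocation.read().decode("utf-8")

    # Post the function's result back for this specific invocation.
    post = urllib.request.Request(
        "{}/invocation/{}/response".format(API, request_id),
        data=handle(event_body).encode("utf-8"),
        method="POST",
    )
    urllib.request.urlopen(post).read()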

For the execution of your code, you can use anything that can run in the Lambda execution environment. For example, you can bring an interpreter for the programming language of your choice.

You only need to know how the Runtime API works if you want to manage or publish your own runtimes. As a developer, you can quickly use runtimes that are shared with you as layers.

We are making these open source runtimes available today:

We are also working with our partners to provide more open source runtimes:

  • Erlang (Alert Logic)
  • Elixir (Alert Logic)
  • Cobol (Blu Age)
  • N|Solid (NodeSource)
  • PHP (Stackery)

The Runtime API is the future of how we’ll support new languages in Lambda. For example, this is how we built support for the Ruby language.

Available Now

You can use runtimes and layers in all regions where Lambda is available, via the console or the AWS Command Line Interface (CLI). You can also use the AWS Serverless Application Model (SAM) and the SAM CLI to test, deploy and manage serverless applications using these new features.

There is no additional cost for using runtimes and layers. The storage used by your layers counts toward the per-region AWS Lambda function storage limit.

To learn more about using the Runtime API and Lambda Layers, don’t miss our webinar on December 11, hosted by Principal Developer Advocate Chris Munns.

I am so excited by these new features. Please let me know what you are going to build next!

Announcing Ruby Support for AWS Lambda

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/announcing-ruby-support-for-aws-lambda/

This post is courtesy of Xiang Shen Senior – AWS Solutions Architect and Alex Wood Software Development Engineer – AWS SDKs and Tools

Ruby remains a popular programming language for AWS customers. In the summer of 2011, AWS introduced the initial release of AWS SDK for Ruby, which has helped Ruby developers to better integrate and use AWS resources. The SDK is now in its third major version and it continues to improve and deliver AWS API updates.

Today, AWS is excited to announce Ruby as a supported language for AWS Lambda.

Now it’s possible to write Lambda functions as idiomatic Ruby code, and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default. That makes it easy to interact with the AWS resources directly from your functions. In this post, we walk you through how it works, using examples:

  • Creating a Hello World example
  • Including dependencies
  • Migrating a Sinatra application

Creating a Hello World example

If you are new to Lambda, it’s simple to create a function using the console.

  1. Open the Lambda console.
  2. Choose Create function.
  3. Select Author from scratch.
  4. Name your function something like hello_ruby.
  5. For Runtime, choose Ruby 2.5.
  6. For Role, choose Create a new role from one or more templates.
  7. Name your role something like hello_ruby_role.
  8. Choose Create function.

Your function is created and you are directed to your function’s console page.

You can modify all aspects of your function, such as editing the function’s code, assigning one or more triggering services, or configuring additional services that your function can interact with. From the Monitoring tab, you can view various metrics about your function’s usage as well as a link to CloudWatch Logs.

As you can see in the code editor, the Ruby code for this Hello World example is basic. It has a single handler function named lambda_handler and returns an HTTP status code of 200 and the text “Hello from Lambda!” in a JSON structure. You can learn more about the programming model for Lambda functions.

Next, test this Lambda function and confirm that it is working.

  1. On your function console page, choose Test.
  2. Name the test HelloRubyTest and clear out the data in the brackets. This function takes no input.
  3. Choose Save.
  4. Choose Test.

You should now see the results of a successful invocation of your Ruby Lambda function.

Including dependencies

When developing Lambda functions with Ruby, you probably need to include other dependencies in your code. To achieve this, use Bundler (the bundle tool) to download the needed RubyGems to a local directory and create a deployable application package. All dependencies need to be included in either this package or in a Lambda layer.

Do this with a Lambda function that uses the aws-record gem to save data into an Amazon DynamoDB table.

  1. Create a directory for your new Ruby application in your development environment:
    mkdir hello_ruby
    cd hello_ruby
  2. Inside of this directory, create a file Gemfile and add aws-record to it:
    source 'https://rubygems.org'
    gem 'aws-record', '~> 2'
  3. Create a hello_ruby_record.rb file with the following code. In the code, put_item is the handler method, which expects an event object with a body attribute. After it’s invoked, it saves the value of the body attribute along with a UUID to the table.
    # hello_ruby_record.rb
    require 'aws-record'
    
    class DemoTable
      include Aws::Record
      set_table_name ENV['DDB_TABLE']
      string_attr :id, hash_key: true
      string_attr :body
    end
    
    def put_item(event:,context:)
      body = event["body"]
      item = DemoTable.new(id: SecureRandom.uuid, body: body)
      item.save! # raise an exception if save fails
      item.to_h
    end 
  4. Next, bring in the dependencies for this application. Bundler is a tool used to manage RubyGems. From your application directory, run the following two commands. They create the Gemfile.lock file and download the gems to the local directory instead of to the local system's Ruby directory. This way, they ensure that all your dependencies are included in the function deployment package.
    bundle install
    bundle install --deployment
  5. AWS SAM is a templating tool that you can use to create and manage serverless applications. With it, you can define the structure of your Lambda application, define security policies and invocation sources, and manage or create almost any AWS resource. Use it now to help define the function and its policy, create your DynamoDB table and then deploy the application.
    Create a new file in your hello_ruby directory named template.yaml with the following contents:

    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31
    Description: 'sample ruby application'
    
    Resources:
      HelloRubyRecordFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: hello_ruby_record.put_item
          Runtime: ruby2.5
          Policies:
          - DynamoDBCrudPolicy:
              TableName: !Ref RubyExampleDDBTable 
          Environment:
            Variables:
              DDB_TABLE: !Ref RubyExampleDDBTable
    
      RubyExampleDDBTable:
        Type: AWS::Serverless::SimpleTable
        Properties:
          PrimaryKey:
            Name: id
            Type: String
    
    Outputs:
      HelloRubyRecordFunction:
        Description: Hello Ruby Record Lambda Function ARN
        Value:
          Fn::GetAtt:
          - HelloRubyRecordFunction
          - Arn

    In this template file, you define Serverless::Function and Serverless::SimpleTable as resources, which correspond to a Lambda function and DynamoDB table.

    The line Policies in the function and the following line DynamoDBCrudPolicy refer to an AWS SAM policy template, which greatly simplifies granting permissions to Lambda functions. The DynamoDBCrudPolicy allows you to create, read, update, and delete DynamoDB resources and items in tables.

    In this example, you limit permissions by specifying TableName and passing a reference to Serverless::SimpleTable that the template creates. Next in importance is the Environment section of this template, where you create a variable named DDB_TABLE and also pass it a reference to Serverless::SimpleTable. Lastly, the Outputs section of the template allows you to easily find the function that was created.

    The directory structure now should look like the following:

    $ tree -L 2 -a
    .
    ├── .bundle
    │   └── config
    ├── Gemfile
    ├── Gemfile.lock
    ├── hello_ruby_record.rb
    ├── template.yaml
    └── vendor
        └── bundle
  6. Now use the template file to package and deploy your application. An AWS SAM template can be deployed using the AWS CloudFormation console, AWS CLI, or AWS SAM CLI. The AWS SAM CLI is a tool that simplifies serverless development across the lifecycle of your application, from the initial creation of a serverless project, through local testing and debugging, to deployment to AWS. Follow the steps for your platform to get the AWS SAM CLI installed.
  7. Create an Amazon S3 bucket to store your application code. Run the following AWS CLI command to create an S3 bucket with a custom name:
    aws s3 mb s3://<bucketname>
  8. Use the AWS SAM CLI to package your application:
    sam package --template-file template.yaml \
    --output-template-file packaged-template.yaml \
    --s3-bucket <bucketname>

    This creates a new template file named packaged-template.yaml.

  9. Use the AWS SAM CLI to deploy your application. Use any stack-name value:
    sam deploy --template-file packaged-template.yaml \
    --stack-name helloRubyRecord \
    --capabilities CAPABILITY_IAM
    
    Waiting for changeset to be created...
    Waiting for stack create/update to complete
    Successfully created/updated stack - helloRubyRecord

    This can take a few moments to create all of the resources from the template. After you see the output “Successfully created/updated stack,” it has completed.

  10. In the Lambda Serverless Applications console, you should see your application listed:

  11. To see the application dashboard, choose the name of your application stack. Because this application was deployed with either AWS SAM or AWS CloudFormation, the dashboard allows you to manage the resources as a single group with a number of features. You can view the stack’s resources, its template, recent deployments, and metrics, including any custom dashboards you might create.

Now test the Lambda function and confirm that it is working.

  1. Choose Overview. Under Resources, select the Lambda function created in this application:

  2. In the Lambda function console, configure a test as you did earlier. Use the following JSON:
    {"body": "hello lambda"}
  3. Execute the test and you should see a success message:

  4. In the Lambda Serverless Applications console for this stack, select the DynamoDB table created:

  5. Choose Items

The id and body should match the output from the Lambda function test run.

You just created a Ruby-based Lambda application using AWS SAM!

Migrating a Sinatra application

Sinatra is a popular open source framework for Ruby that launched over a decade ago. It allows you to quickly create powerful web applications with minimal effort. Until today, you would still have needed servers to run those applications. Now, you can just deploy a Sinatra app to Lambda and move to a serverless world!

Thanks to Rack, a Ruby webserver interface, you only need to create a simple Lambda function to bridge the gap between the HTTP requests and the serverless Sinatra application. You don’t need to make additional changes to other Sinatra files at all. Paired with Amazon API Gateway and DynamoDB, your Sinatra application runs completely serverless!

For this post, take an existing Sinatra application and make it function in Lambda.

  1. Clone the serverless-sinatra-sample GitHub repository into your local environment.
    Under the app directory, find the Sinatra application files. The files enable you to specify routes to return either JSON or HTML that is generated from ERB templates in the server.rb file.

    ├── app
    │   ├── config.ru
    │   ├── server.rb
    │   └── views
    │       ├── feedback.erb
    │       ├── index.erb
    │       └── layout.erb

    In the root of the directory, you also find the template.yaml and lambda.rb files. The template.yaml includes four resources:

    • Serverless::Function
    • Serverless::API
    • Serverless::SimpleTable
    • Lambda::Permission

    In the lambda.rb file, you find the main handler for this function, which calls Rack to interface with the Sinatra application.

  2. This application has several dependencies, so use bundle to install them:
    bundle install
    bundle install --deployment
  3. Package this Lambda function and the related application components using the AWS SAM CLI:
    sam package --template-file template.yaml \
    --output-template-file packaged-template.yaml \
    --s3-bucket <bucketname>
  4. Next, deploy the application:
    sam deploy --template-file packaged-template.yaml \
    --stack-name LambdaSinatra \
    --capabilities CAPABILITY_IAM                                                                                                        
    
    Waiting for changeset to be created..
    Waiting for stack create/update to complete
    Successfully created/updated stack - LambdaSinatra
  5. In the Lambda Serverless Applications console, select your application:

  6. Choose Overview. Under Resources, find the ApiGateway RestApi entry and select the Logical ID. Below it is SinatraAPI:

  7. In the API Gateway console, in the left navigation pane, choose Dashboard. Copy the URL from Invoke this API and paste it in another browser tab:

  8. Append a route from the Sinatra application to the URL, as seen in server.rb.

For example, this is the hello-world GET route:

And this is the /feedback route:

Congratulations, you’ve just successfully deployed a Sinatra-based Ruby application inside of a Lambda function!

Conclusion

As you’ve seen in this post, getting started with Ruby on Lambda is made easy via either the AWS Management Console or the AWS SAM CLI.

You might even be able to easily port existing applications to Lambda without needing to change your code base. The new support for Ruby allows you to benefit from the greatly reduced operational overhead, scalability, availability, and pay-per-use pricing of Lambda.

If you are excited about this feature as well, there is even more information on writing Lambda functions in Ruby in the AWS Lambda Developer Guide.

Happy coding!

New – AWS Toolkits for PyCharm, IntelliJ (Preview), and Visual Studio Code (Preview)

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-aws-toolkits-for-pycharm-intellij-preview-and-visual-studio-code-preview/

Software developers have their own preferred tools. Some use powerful editors, others Integrated Development Environments (IDEs) that are tailored for specific languages and platforms. In 2014 I created my first AWS Lambda function using the editor in the Lambda console. Now, you can choose from a rich set of tools to build and deploy serverless applications. For example, the editor in the Lambda console was greatly enhanced last year when AWS Cloud9 was released. For .NET applications, you can use the AWS Toolkit for Visual Studio and AWS Tools for Visual Studio Team Services.

AWS Toolkits for PyCharm, IntelliJ, and Visual Studio Code

Today, we are announcing the general availability of the AWS Toolkit for PyCharm. We are also announcing the developer preview of the AWS Toolkits for IntelliJ and Visual Studio Code, which are under active development in GitHub. These open source toolkits will enable you to easily develop serverless applications, including a full create, step-through debug, and deploy experience in the IDE and language of your choice, be it Python, Java, Node.js, or .NET.

For example, using the AWS Toolkit for PyCharm you can:

These toolkits are distributed under the open source Apache License, Version 2.0.

Installation

Some features use the AWS Serverless Application Model (SAM) CLI. You can find installation instructions for your system here.

The AWS Toolkit for PyCharm is available via the IDEA Plugin Repository. To install it, in the Settings/Preferences dialog, click Plugins, search for “AWS Toolkit”, use the checkbox to enable it, and click the Install button. You will need to restart your IDE for the changes to take effect.

The AWS Toolkits for IntelliJ and Visual Studio Code are currently in developer preview and under active development. You are welcome to build and install them from the GitHub repositories:

Building a Serverless application with PyCharm

After installing AWS SAM CLI and AWS Toolkit, I create a new project in PyCharm and choose SAM on the left to create a serverless application using the AWS Serverless Application Model. I call my project hello-world in the Location field. Expanding More Settings, I choose which SAM template to use as the starting point for my project. For this walkthrough, I select the “AWS SAM Hello World”.

In PyCharm you can use credentials and profiles from your AWS Command Line Interface (CLI) configuration. You can change the AWS Region quickly if you have multiple environments.

The AWS Explorer shows Lambda functions and AWS CloudFormation stacks in the selected AWS Region. Starting from a CloudFormation stack, you can see which Lambda functions are part of it.

The function handler is in the app.py file. After I open the file, I click on the Lambda icon on the left of the function declaration to have the option to run the function locally or start a local step-by-step debugging session.

First, I run the function locally. I can configure the payload of the event that is provided as input for the local invocation, starting from the event templates provided for most services, such as Amazon API Gateway, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), and so on. You can use a file for the payload, or select the share checkbox to make it available to other team members. The function is executed locally, but here you can choose the credentials and the region to be used if the function is calling other AWS services, such as Amazon Simple Storage Service (S3) or Amazon DynamoDB.

A local container is used to emulate the Lambda execution environment. This function is implementing a basic web API, and I can check that the result is in the format expected by the API Gateway.

After that, I want to get more information on what my code is doing. I set a breakpoint and start a local debugging session. I use the same input event as before. Again, you can choose the credentials and region for the AWS services used by the function.

I step over the HTTP request in the code to inspect the response in the Variables tab. Here you have access to all local variables, including the event and the context provided as input to the function.

After that, I resume the program to reach the end of the debugging session.

Now I am confident enough to deploy the serverless application by right-clicking on the project (or the SAM template file). I can create a new CloudFormation stack, or update an existing one. For now, I create a new stack called hello-world-prod. For example, you can have a stack for production, and one for testing. I select an S3 bucket in the region to store the package used for the deployment. If your template has parameters, here you can set up the values used by this deployment.

After a few minutes, the stack creation is complete and I can run the function in the cloud with a right-click in the AWS Explorer. There is also an option to jump to the source code of the function.

As expected, the result of the remote invocation is the same as the local execution. My serverless application is in production!

Using these toolkits, developers can test locally to find problems before deployment, change the code of their application or the resources they need in the SAM template, and update an existing stack, quickly iterating until they reach their goal. For example, they can add an S3 bucket to store images or documents, or a DynamoDB table to store their users, or change the permissions used by their functions.

I am really excited by how much faster and easier it is to build your ideas on AWS. Now you can use your preferred environment to accelerate even further. I look forward to seeing what you will do with these new tools!

Python 3.7 runtime now available in AWS Lambda

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/python-3-7-runtime-now-available-in-aws-lambda/

This post is courtesy of Shivansh Singh, Partner Solutions Architect – AWS

We are excited to announce that you can now develop your AWS Lambda functions using the Python 3.7 runtime. Start using this new version today by specifying a runtime parameter value of "python3.7" when creating or updating functions. AWS continues to support creating new Lambda functions on Python 3.6 and Python 2.7.

Here’s a quick primer on some of the major new features in Python 3.7:

  • Data classes
  • Customization of access to module attributes
  • Typing enhancements
  • Time functions with nanosecond resolution

Data classes

In object-oriented programming, if you have to create a class, it looks like the following in Python 3.6:

class Employee:
    def __init__(self, name: str, dept: int) -> None:
        self.name = name
        self.dept = dept
        
    def is_fulltime(self) -> bool:
        """Return True if employee is a Full-time employee, else False"""
        return self.dept > 25

The __init__ method receives multiple arguments to initialize the class instance. These arguments are set as class instance attributes.

With Python 3.7, you have data classes, which make class declarations easier and more readable. You can use the @dataclass decorator on the class declaration and the self-assignment in __init__ is taken care of automatically. The decorator generates __init__, __repr__, __eq__, and other special methods for you. In Python 3.7, the Employee class defined earlier looks like the following:

from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    dept: int

    def is_fulltime(self) -> bool:
        """Return True if employee is a Full-time employee, else False"""
        return self.dept > 25
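
For example, the generated methods give you a readable representation and value-based equality without writing any extra code:

e1 = Employee(name="Martha", dept=30)
e2 = Employee(name="Martha", dept=30)

print(e1)                # Employee(name='Martha', dept=30), from the generated __repr__
print(e1 == e2)          # True, from the generated __eq__
print(e1.is_fulltime())  # True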

Customization of access to module attributes

Attributes are widely used in Python. Most commonly, they are used in classes. However, attributes can be put on functions and modules as well. Attributes are retrieved using the dot notation: something.attribute. You can also get attributes whose names are only known at runtime by using the getattr() function.

For classes, something.attr first looks for attr defined on something. If it's not found, the special method something.__getattr__("attr") is called. The __getattr__() method can be used to customize access to attributes on objects. This customization was not easily available for module attributes until Python 3.7: PEP 562 adds support for __getattr__() on modules, along with a corresponding __dir__() function.
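
For example, a module can use __getattr__() to keep old attribute names working while steering callers toward new ones. The following is a minimal sketch (the module and helper names are illustrative, not from the original post):

# mymodule.py (Python 3.7+, PEP 562)
import warnings

def new_helper():
    return "hello from new_helper"

_RENAMED = {"old_helper": new_helper}

def __getattr__(name):
    # Called only when normal module attribute lookup fails
    if name in _RENAMED:
        warnings.warn(f"{name} is deprecated, use {_RENAMED[name].__name__}()",
                      DeprecationWarning)
        return _RENAMED[name]
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

def __dir__():
    return sorted(set(globals()) | set(_RENAMED))

With this in place, importing mymodule and calling mymodule.old_helper() still works, but it emits a DeprecationWarning pointing to new_helper().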

Typing enhancements

Type annotations are commonly used for code hints. However, there were two common issues with using type hints extensively in the code:

  • Annotations could only use names that were already available in the current scope. In other words, they didn’t support forward references.
  • Annotating source code had adverse effects on the startup time of Python programs.

Both of these issues are fixed in Python 3.7, by postponing the evaluation of annotations. Instead of compiling code that executes expressions in annotations at their definition time, the compiler stores the annotation in a string form equivalent to the AST of the expression in question.

For example, the following code fails, as spouse cannot be defined as type Employee, given that Employee is not defined yet.

class Employee:
    def __init__(self, name: str, spouse: Employee) -> None:
        pass

In Python 3.7, the evaluation of annotations is postponed. Each annotation is stored as a string and only evaluated when needed. You do need to import annotations from __future__, which makes the code incompatible with versions of Python that don't support this behavior.

from __future__ import annotations

class Employee:
    def __init__(self, name: str, spouse: Employee) -> None:
        pass
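
If you later need the evaluated types rather than the stored strings, typing.get_type_hints() resolves them for you. A minimal sketch:

from __future__ import annotations
import typing

class Employee:
    def __init__(self, name: str, spouse: Employee) -> None:
        pass

# With postponed evaluation, the raw annotations are stored as strings
print(Employee.__init__.__annotations__)
# {'name': 'str', 'spouse': 'Employee', 'return': 'None'}

# get_type_hints() evaluates the strings back into actual types on demand
print(typing.get_type_hints(Employee.__init__))
# {'name': <class 'str'>, 'spouse': <class '__main__.Employee'>, 'return': <class 'NoneType'>}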

Time functions with nanosecond resolution

The time module gains new functions in Python 3.7. The clock resolution can exceed the limited precision of the floating-point number returned by the time.time() function and its variants, so the following new functions have been added:

  • clock_gettime_ns()
  • clock_settime_ns()
  • monotonic_ns()
  • perf_counter_ns()
  • process_time_ns()
  • time_ns()

These functions are similar to the existing functions without the _ns suffix. The difference is that the _ns variants return the time as an integer number of nanoseconds instead of a floating-point number of seconds.
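
Here is a quick comparison of the two flavors:

import time

print(time.time())     # float seconds, for example 1538658590.316
print(time.time_ns())  # int nanoseconds, for example 1538658590316000000

# Timing a short operation without floating-point rounding
start = time.perf_counter_ns()
total = sum(range(100_000))
elapsed_ns = time.perf_counter_ns() - start
print(f"sum computed in {elapsed_ns} ns")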

For more information, see the AWS Lambda Developer Guide.

Hope you enjoy… go build with Python 3.7!

Amazon API Gateway adds support for AWS WAF

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/amazon-api-gateway-adds-support-for-aws-waf/

This post courtesy of Heitor Lessa, AWS Specialist Solutions Architect – Serverless

Today, I’m excited to tell you about the Amazon API Gateway native integration with AWS WAF. Previously, if you wanted to secure your API in Amazon API Gateway with AWS WAF, you had to deploy a Regional API endpoint and use your own Amazon CloudFront distribution. This feature now enables you to provision any API Gateway endpoint and secure it with AWS WAF, without having to configure your own CloudFront distribution to add that capability.

In Part 1 of this series, I described how to protect your API provided by API Gateway using AWS WAF.

In Part 2 of this series, I described how to use API keys as a shared secret between a CloudFront distribution and API Gateway to secure public access to your API in API Gateway. This new AWS WAF integration means that the method described in Part 2 is no longer necessary.

The following image describes methods to secure your API in API Gateway before and after this feature was made available.

Where:

  1. AWS WAF securing CloudFront endpoint only.
  2. AWS WAF securing both CloudFront and API Gateway endpoints natively.
  3. AWS WAF securing Amazon API Gateway endpoints natively where global presence is not needed.

Enabling AWS WAF for an API managed by Amazon API Gateway

For this walkthrough, you can use an existing Pet Store API or any API in API Gateway that you may already have deployed. You create a new AWS WAF web ACL that is later associated with your API Gateway stage.

Follow these steps to create a web ACL:

  1. Open the AWS WAF console.
  2. Choose Create web ACL.
  3. For Web ACL Name, enter ApiGateway-HTTP-Flood-Sample.
  4. For Region, choose US East (N. Virginia).
  5. Choose Next until you reach Step 3: Create rules.
  6. Choose Create rule and enter HTTP Flood Sample.
  7. For Rule type, choose Rate-based rule.
  8. For Rate limit, enter 2000 and choose Create.
  9. For Default action, choose Allow all requests that don’t match any rules.
  10. Choose Review and create.
  11. Confirm that your options look similar to the following image, and then choose Confirm and create.

You can now follow the steps to enable the AWS WAF web ACL for an existing API in API Gateway:

  1. Open the Amazon API Gateway console.
  2. Choose Stages, prod.
  3. Under Web Application Firewall (WAF), choose ApiGateway-HTTP-Flood-Sample (or the web ACL that you just created).
  4. Choose Save Changes.

Testing your API in API Gateway now secured by AWS WAF

AWS WAF provides HTTP flood protection through a rate-based rule. The rate-based rule is automatically triggered when web requests from a client exceed a configurable threshold. The threshold is defined by the maximum number of incoming requests allowed from a single IP address within a five-minute period.

After this threshold is breached, additional requests from the IP address are blocked until the request rate falls below the threshold. For this example, you defined 2000 requests as a threshold for the HTTP flood rate–based rule.

Artillery, an open source modern load testing toolkit, is used to send a large number of requests directly to the API Gateway Invoke URL to test whether your AWS WAF native integration is working correctly.

Firstly, follow these steps to retrieve the correct Invoke URL of your Pet Store API:

  1. Open the API Gateway console.
  2. In the left navigation pane, open the PetStore API.
  3. Choose Stages, select prod, and copy the Invoke URL value.

Secondly, use cURL to query your distribution and see the API output before the rate limit rule is triggered:

$ curl -s INVOKE_URL/pets

[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
] 

Then, use Artillery to send a large number of requests in a short period of time to trigger your rate limit rule:

$ artillery quick -n 2000 --count 10 INVOKE_URL/pets

With this command, Artillery sends 2000 requests to your PetStore API from 10 concurrent users, breaching the rate limit well within the five-minute window. For brevity, I am not posting the Artillery output here.

After Artillery finishes its execution, try re-running the cURL command. You should no longer see a list of pets:

{"message":"Forbidden"}

As you can see from the output, the request was blocked by AWS WAF. Your IP address is removed from the blocked list after its request rate falls back below the threshold.
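
If you want to confirm when your IP address is allowed again, the following minimal Python sketch (not part of the original walkthrough) polls the endpoint until AWS WAF stops returning 403 Forbidden. The INVOKE_URL value is a placeholder for your own stage's Invoke URL:

import time
import urllib.error
import urllib.request

INVOKE_URL = "https://xxxx.execute-api.us-east-1.amazonaws.com/prod"  # placeholder

def blocked():
    """Return True while AWS WAF is still answering 403 Forbidden."""
    try:
        urllib.request.urlopen(f"{INVOKE_URL}/pets")
        return False
    except urllib.error.HTTPError as err:
        return err.code == 403

# Poll once a minute until the rate-based rule stops blocking this IP address
while blocked():
    print("Still blocked by AWS WAF, waiting...")
    time.sleep(60)
print("Unblocked: the request rate has fallen back below the threshold.")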

Conclusion

With the AWS WAF native integration with Amazon API Gateway, you no longer have to manage your own Amazon CloudFront distribution in order to secure your API with AWS WAF. The native integration makes this process seamless.

I hope that you found the information in this post helpful. Remember that you can use this integration today with all Amazon API Gateway endpoints (Edge, Regional, and Private). It is available in the following Regions:

  • US East (N. Virginia)
  • US East (Ohio)
  • US West (Oregon)
  • US West (N. California)
  • EU (Ireland)
  • EU (Frankfurt)
  • Asia Pacific (Sydney)
  • Asia Pacific (Tokyo)

Support for multi-value parameters in Amazon API Gateway

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/support-for-multi-value-parameters-in-amazon-api-gateway/

This post is courtesy of Akash Jain, Partner Solutions Architect – AWS

The new multi-value parameter support feature for Amazon API Gateway allows you to pass multiple values for the same key in the header and query string as part of your API request. It also allows you to pass multi-value headers in the API response to implement things like sending multiple Set-Cookie headers.

As part of this feature, AWS added two new keys:

  • multiValueQueryStringParameters—Used in the API Gateway request to support a multi-valued parameter in the query string.
  • multiValueHeaders—Used in both the request and response to support multi-valued headers.

In this post, I walk you through examples for using this feature by extending the PetStore API. You add pet search functionality to the PetStore API and then add personalization by setting the language and UI theme cookies.

The following AWS CloudFormation stack creates all resources for both examples. The stack:

  • Creates the extended PetStore API represented using the OpenAPI 3.0 standard.
  • Creates three AWS Lambda functions, one each for the pet search, get user profile, and set user profile functionality.
  • Creates two IAM roles: the Lambda functions assume one, and API Gateway assumes the other to call the Lambda functions.
  • Deploys the PetStore API to Staging.

Add pet search functionality to the PetStore API

As part of adding the search feature, I demonstrate how you can use multiValueQueryStringParameters for sending and retrieving multi-valued parameters in a query string.

Use the PetStore example API available in the API Gateway console and create a /search resource type under the /pets resource type with GET (read) access. Then, configure the GET method to use AWS Lambda proxy integration.

The CloudFormation stack launched earlier gives you the PetStore API staging endpoint as an output that you can use for testing the search functionality. Assume that the user has an interface to enter the pet types for searching and wants to search for “dog” and “fish.” The pet search API request looks like the following where the petType parameter is multi-valued:

https://xxxx.execute-api.us-east-1.amazonaws.com/staging/pets/search?petType=dog&petType=fish

When you invoke the pet search API action, you get a successful response with both the dog and fish details:

 [
  {
    "id": 11212,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 31231
    "type": "fish",
    "price": 0.99
  }
]

Processing multi-valued query string parameters

Here’s how the multi-valued parameter petType with value “petType=dog&petType=fish” gets processed by API Gateway. To demonstrate, here’s the input event sent by API Gateway to the Lambda function. The log details follow. As it was a long input, a few keys have been removed for brevity.

{ resource: '/pets/search',
path: '/pets/search',
httpMethod: 'GET',
headers: 
{ Accept: 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US,en;q=0.9',
Host: 'xyz.execute-api.us-east-1.amazonaws.com',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
Via: '1.1 382909590d138901660243559bc5e346.cloudfront.net (CloudFront)',
'X-Amz-Cf-Id': 'motXi0bgd4RyV--wvyJnKpJhLdgp9YEo7_9NeS4L6cbgHkWkbn0KuQ==',
'X-Amzn-Trace-Id': 'Root=1-5bab7b8b-f1333fbc610288d200cd6224',
'X-Forwarded-Proto': 'https' },
queryStringParameters: { petType: 'fish' },
multiValueQueryStringParameters: { petType: [ 'dog', 'fish' ] },
pathParameters: null,
stageVariables: null,
requestContext: 
{ resourceId: 'jy2rzf',
resourcePath: '/pets/search',
httpMethod: 'GET',
extendedRequestId: 'N1A9yGxUoAMFWMA=',
requestTime: '26/Sep/2018:12:28:59 +0000',
path: '/staging/pets/search',
protocol: 'HTTP/1.1',
stage: 'staging',
requestTimeEpoch: 1537964939459,
requestId: 'be70816e-c187-11e8-9d99-eb43dd4b0381',
apiId: 'xxxx' },
body: null,
isBase64Encoded: false }

There is a new key, multiValueQueryStringParameters, available in the input event. This key is added as part of the multi-value parameter feature to retain multiple values for the same parameter in the query string.

Before this change, API Gateway used to retain only the last value and drop everything else for a multi-valued parameter. You can see the original behavior in the queryStringParameters parameter in the above input, where only the “fish” value is retained.

Accessing the new multiValueQueryStringParameters key in a Lambda function

Use the new multiValueQueryStringParameters key available in the event of the Lambda function to retrieve the multi-valued query string parameter petType that you passed in the query string of the search API request. You can use those values for searching pets. Retrieve the parameter values by accessing event.multiValueQueryStringParameters.petType.

exports.handler = (event, context, callback) => {
    
    //Log the input event
    console.log(event)
    
    //Extract the multi-valued parameter from the input
    var petTypes = event.multiValueQueryStringParameters.petType;
    
    //Call the search pets functionality (implemented elsewhere)
    var searchResults = searchPets(petTypes)
    
    //Lambda proxy integration expects the body to be a string
    const response = {
        statusCode: 200,
        body: JSON.stringify(searchResults)
    };
    callback(null, response);
};

The multiValueQueryStringParameters key is present in the input request regardless of whether the request contains keys with multiple values. You don’t have to change your APIs to enable this feature, unless you are using a key of the same name as multiValueQueryStringParameters.
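
For reference, here is a minimal Python sketch of the same handler. The search_pets helper and its catalog are hypothetical placeholders, not part of the sample API:

import json

def search_pets(pet_types):
    """Hypothetical placeholder for the actual pet search logic."""
    catalog = [
        {"id": 11212, "type": "dog", "price": 249.99},
        {"id": 31231, "type": "fish", "price": 0.99},
    ]
    return [pet for pet in catalog if pet["type"] in pet_types]

def handler(event, context):
    # multiValueQueryStringParameters keeps every value passed for petType
    pet_types = event["multiValueQueryStringParameters"]["petType"]
    return {
        "statusCode": 200,
        "body": json.dumps(search_pets(pet_types)),
    }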

Add personalization

Here’s how the newly added multiValueHeaders key is useful for sending cookies with multiple Set-Cookie headers in an API response.

Personalize the Pet Store web application by setting user-specific theme and language preferences. Usually, web developers use cookies to store this kind of information. To accomplish this, send cookies that store the theme and language information as part of the API response. The browser then stores them and the web application uses them to get the required information.

To set the cookies, you need the new API resources /users and /profile, with read and write access in the PetStore API. Then, configure the GET and POST methods of the /profile resource to use AWS Lambda proxy integration. The CloudFormation stack that you launched earlier has already created the necessary resources.

Invoke the POST profile API call at the following URL, passing the user preferences for language and theme in the request body. The API returns the Set-Cookie headers:

https://XXXXXX.execute-api.us-east-1.amazonaws.com/staging/users/profile

The request body looks like the following:

{
    "userid" : 123456456,
    "preferences" : {"language":"en-US", "theme":"blue moon"}
}

You get a successful response with the “200 OK” status code and two Set-Cookie headers for setting the language and theme cookies.

Passing multiple Set-Cookie headers in the response

The following code example is the setUserProfile Lambda function code that processes the input request and sends the language and theme cookies:

exports.handler = (event, context, callback) => {

    //Get the request body
    var requestBody = JSON.parse(event.body);
 
    //Retrieve the language and theme values
    var language = requestBody.preferences.language;
    var theme = requestBody.preferences.theme; 
    
    const response = {
        isBase64Encoded: false, // the body is plain text, not base64-encoded
        statusCode: 200,
        multiValueHeaders : {"Set-Cookie": [`language=${language}`, `theme=${theme}`]},
        body: JSON.stringify('User profile set successfully')
    };
    callback(null, response);
};

You can see that the newly added multiValueHeaders key passes multiple cookies as a list in the response. The multiValueHeaders key is translated to multiple Set-Cookie headers by API Gateway and appears to the API client as the following:

Set-Cookie: language=en-US
Set-Cookie: theme=blue moon

You can also pass the header key along with the multiValueHeaders key. In that case, API Gateway merges the multiValueHeaders and headers maps while processing the integration response into a single Map<String, List<String>> value. If the same key-value pair is sent in both, it isn’t duplicated.
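
As a reference, here is a minimal Python sketch (not from the original example) of a proxy response that combines a single-value header with multiValueHeaders; API Gateway merges the two maps as described above:

import json

def handler(event, context):
    return {
        "statusCode": 200,
        # Single-value headers...
        "headers": {"Content-Type": "application/json"},
        # ...and multi-value headers are merged by API Gateway into one response
        "multiValueHeaders": {
            "Set-Cookie": ["language=en-US", "theme=blue moon"],
        },
        "body": json.dumps("User profile set successfully"),
    }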

Retrieving the headers from the request

If you use the Postman tool (or something similar) to invoke the GET profile API call, you can send cookies as part of the request. The theme and language cookies that you set above in the POST /profile API request can now be sent as part of the GET /profile request.

https://xxx.execute-api.us-east-1.amazonaws.com/staging/users/profile

You get the following response with the details of the user preferences:

{
    "userId": 12343,
    "preferences": {
        "language": "en-US",
        "theme": "beige"
    }
}

When you log the input event of the getUserProfile Lambda function, you can see both newly added keys, multiValueQueryStringParameters and multiValueHeaders. These keys are present in the event of the Lambda function regardless of whether they have a value. You can retrieve the cookie values by accessing event.multiValueHeaders.Cookie.

{ resource: '/users/profile',
path: '/users/profile',
httpMethod: 'GET',
headers: 
{ Accept: '*/*',
'Accept-Encoding': 'gzip, deflate',
'cache-control': 'no-cache',
'CloudFront-Forwarded-Proto': 'https',
'CloudFront-Is-Desktop-Viewer': 'true',
'CloudFront-Is-Mobile-Viewer': 'false',
'CloudFront-Is-SmartTV-Viewer': 'false',
'CloudFront-Is-Tablet-Viewer': 'false',
'CloudFront-Viewer-Country': 'IN',
Cookie: 'language=en-US; theme=blue moon',
Host: 'xxxx.execute-api.us-east-1.amazonaws.com',
'Postman-Token': 'acd5b6b3-df97-44a3-8ea8-2d929efadd96',
'User-Agent': 'PostmanRuntime/7.3.0',
Via: '1.1 6bf9df28058e9b2a0034a51c5f555669.cloudfront.net (CloudFront)',
'X-Amz-Cf-Id': 'VRePX_ktEpyTYh4NGyk90D4lMUEL-LBYWNpwZEMoIOS-9KN6zljA7w==',
'X-Amzn-Trace-Id': 'Root=1-5bb6111e-e67c17ea657ed03d5dddf869',
'X-Forwarded-For': 'xx.xx.xx.xx, yy.yy.yy.yy',
'X-Forwarded-Port': '443',
'X-Forwarded-Proto': 'https' },
multiValueHeaders: 
{ Accept: [ '*/*' ],
'Accept-Encoding': [ 'gzip, deflate' ],
'cache-control': [ 'no-cache' ],
'CloudFront-Forwarded-Proto': [ 'https' ],
'CloudFront-Is-Desktop-Viewer': [ 'true' ],
'CloudFront-Is-Mobile-Viewer': [ 'false' ],
'CloudFront-Is-SmartTV-Viewer': [ 'false' ],
'CloudFront-Is-Tablet-Viewer': [ 'false' ],
'CloudFront-Viewer-Country': [ 'IN' ],
Cookie: [ 'language=en-US; theme=blue moon' ],
Host: [ 'xxxx.execute-api.us-east-1.amazonaws.com' ],
'Postman-Token': [ 'acd5b6b3-df97-44a3-8ea8-2d929efadd96' ],
'User-Agent': [ 'PostmanRuntime/7.3.0' ],
Via: [ '1.1 6bf9df28058e9b2a0034a51c5f555669.cloudfront.net (CloudFront)' ],
'X-Amz-Cf-Id': [ 'VRePX_ktEpyTYh4NGyk90D4lMUEL-LBYWNpwZEMoIOS-9KN6zljA7w==' ],
'X-Amzn-Trace-Id': [ 'Root=1-5bb6111e-e67c17ea657ed03d5dddf869' ],
'X-Forwarded-For': [ 'xx.xx.xx.xx, yy.yy.yy.yy' ],
'X-Forwarded-Port': [ '443' ],
'X-Forwarded-Proto': [ 'https' ] },
queryStringParameters: null,
multiValueQueryStringParameters: null,
pathParameters: null,
stageVariables: null,
requestContext: 
{ resourceId: '90cr24',
resourcePath: '/users/profile',
httpMethod: 'GET',
extendedRequestId: 'OPecvEtWoAMFpzg=',
requestTime: '04/Oct/2018:13:09:50 +0000',
path: '/staging/users/profile',
accountId: 'xxxxxx',
protocol: 'HTTP/1.1',
stage: 'staging',
requestTimeEpoch: 1538658590316,
requestId: 'c691746f-c7d6-11e8-83b8-176659b7d74d',
identity: 
{ cognitoIdentityPoolId: null,
accountId: null,
cognitoIdentityId: null,
caller: null,
sourceIp: 'xx.xx.xx.xx',
accessKey: null,
cognitoAuthenticationType: null,
cognitoAuthenticationProvider: null,
userArn: null,
userAgent: 'PostmanRuntime/7.3.0',
user: null },
apiId: 'xxxx' },
body: null,
isBase64Encoded: false }

HTTP proxy integration

For the API Gateway HTTP proxy integration, the headers and query string are proxied to the downstream HTTP call in the same way that they are received. The newly added keys aren't present in the request or response. For example, the petType parameter from the search API goes as a list to the HTTP endpoint.

Similarly, for setting multiple Set-Cookie headers, you can set them the way that you would usually. For Node.js, it looks like the following:

res.setHeader("Set-Cookie", ["theme=beige", "language=en-US"]);

Mapping requests and responses for new keys

For request mappings, you can access the new keys in the method request as method.request.multivaluequerystring.<KeyName> and method.request.multivalueheader.<HeaderName>. For response mappings, you reference the integration response as integration.response.multivalueheaders.<HeaderName>.

For example, the following is the search pet API request example:

curl "https://{hostname}/pets/search?petType=dog&petType=fish" \
-H 'cookie: language=en-US' \
-H 'cookie: theme=beige'

The new request mapping looks like the following:

"requestParameters" : {
    "integration.request.querystring.petType" : "method.request.multivaluequerystring.petType",
    "integration.request.header.cookie" : "method.request.multivalueheader.cookie"
}

The new response mapping looks like the following:

"responseParameters" : { 
 "method.response.header.Set-Cookie" : "integration.response.multivalueheaders.Set-Cookie", 
 ... 
}

For more information, see Set up Lambda Proxy Integrations in API Gateway.

Cleanup

To avoid incurring ongoing charges for the resources that you’ve used, delete the CloudFormation stack that you created earlier.

Conclusion

Multi-value parameter support enables you to pass multi-valued parameters in the query string and headers of your API request, and multi-valued headers as part of your API response. Both the multiValueQueryStringParameters and multiValueHeaders keys are present in the input event to the Lambda function regardless of whether there is a value. These keys are also accessible for mapping requests and responses.

Overriding request/response parameters and response status in Amazon API Gateway

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/overriding-request-response-parameters-and-response-status-in-amazon-api-gateway/

This post is courtesy of Akash Jain, Partner Solutions Architect – AWS

APIs are driving modern architectures. Many customers are moving their traditional architecture to service-oriented or microservices-based architectures.

As part of this transition, you may face a few complex situations. For example, a backend API returns an HTTP status code of 2XX that must be changed to 4XX because of a new API architecture standard in the organization. Or you may want to override the headers or query string of the API to differentiate the client request without doing any API code changes.

Developers are looking for easier ways to handle such situations. With the new override feature of Amazon API Gateway, you can set the required status and parameter mappings and handle these kinds of situations.

As part of the override feature, AWS introduced new context variables for overriding both request and response parameters.

Request body mapping template:

  • $context.requestOverride.header.<header_name>
  • $context.requestOverride.path.<path_name>
  • $context.requestOverride.querystring.<querystring_name>

Response body mapping template:

  • $context.responseOverride.header.<header_name>
  • $context.responseOverride.status

Using these context variables, you can:

  • Create a new header (or overwrite an existing header) as a concatenation of two parameters.
  • Override the response code to a success or failure code based on the contents of the body.
  • Conditionally remap a parameter based on its contents or the contents of some other parameter.
  • Iterate over the contents of a JSON body and remap key-value pairs to headers or query strings.

In this post, I demonstrate an example for overriding the status code without modifying the actual API code. Use the PetStore API, which is available as a sample API under Amazon API Gateway.

Override responses

Invoke the GET method on the /pets/{petId} resource by passing -1 as the petId value. You get the following response.

You can see that the status code is 200 and the error message is "The value is out of range". You may challenge this response by asking why the status code is not "400 – Bad Request", as the petId parameter value passed by the client is incorrect. Both approaches have merit. For this post, use "400 – Bad Request", as 400 is the right code for a client error.

Your requirement is to implement this change without modifying the API code because of the risks involved or because there is no control over the legacy API. To implement this scenario, use the newly introduced override status feature with mapping templates.

To access mapping templates, under the GET request for the /pets/{petId} resource, choose Integration response.

Under Body Mapping templates, for Content-Type, enter application/json. Choose Add mapping template.

Paste the following code in the template area and choose Save.

#set($inputRoot = $input.path('$'))
$input.json("$")
#if($inputRoot.toString().contains("error"))
    #set($context.responseOverride.status = 400)
#end

When you run the test again by passing -1 for petId, you get the following response.

The status code changed from 200 to 400 without modifying the actual API.

In this PetStore API example, you use the $context.responseOverride.status variable to override the response status when you see an error key in the body content. The following code snippet searches for the error string in the response body and overrides the status.

#if($inputRoot.toString().contains("error"))
    #set($context.responseOverride.status = 400)
#end

Override request headers, paths, and query strings

You can override the response status using the new response status override feature. Similarly, you can override request headers, paths, and query strings by using the $context.requestOverride variable.

Consider a situation where you have launched a new version of your API that is highly performant and much improved. However, your existing API clients are not willing to transition because of the changes that would be involved to their existing systems. You want to maintain a single API version for low maintenance, performance, and agility. What choices do you have?

The override request feature can come to the rescue. For example, you have an older version of an API endpoint that looks like the following, with “v1” in the path and a productId parameter in the query string with value=1234.

https://xxxx.execute-api.us-east-1.amazonaws.com/prod/v1/products?productId=1234

The new version of the endpoint looks like the following, with “v2” in the path and a productId parameter in the query string that requires the client to send value=4567. It also accepts a header to check if the request is from a client using the old API. To demonstrate this feature, create a request path parameter appversion that can accept any string as the version path.

https://xxxx.execute-api.us-east-1.amazonaws.com/prod/v2/products?productId=4567

Using the new $context.requestOverride variable in the body mapping template, you can now override the requested path value from “v1” to “v2” and the query string parameter value from “1234” to “4567”. You can also add a new header to pass a flag for requests from an old API version. For example, see the following mapping template.

#set($version = "v2")
#set($newProductKey = "4567")
#set($customerFlag = "1")
## Perform normal body mapping. In this case, just pass the body mapping through **
$input.json("$")
## Override path parameter **
#set($context.requestOverride.path.appversion = $version)
## Override query string parameters **
#set($context.requestOverride.querystring.productId = $newProductKey)
## Add new header parameter **
#set($context.requestOverride.header.flag = $customerFlag)

In the above mapping template, first set the values of the required path, query string, and header for your new backend API. Then, use $input.json("$") to do normal body mapping. Finally, override the API path, query string, and header with the values set earlier.

When you run your API with the updated body template, the API Gateway logs show that overrides have been successfully applied.

Overrides are final. An override may only be applied to each parameter one time. Trying to override the same parameter multiple times results in 5XX responses from API Gateway.

If you must override the same parameter multiple times to build it throughout the template, I recommend creating a variable and applying the override at the end of the template. For more information, see Use a Mapping Template to Override an API’s Request and Response Parameters and Status Codes.

Conclusion

In this post, I showed you how to change the response status code of an API with the new override feature of API Gateway without modifying the API itself. You also saw how to transform requests coming to your new API from existing or legacy clients, without asking your customer to do any code changes.

The override feature enables you to have more control over the request and response parameters and status. You can now integrate legacy APIs to API Gateway, even when you want a different status from the API or to pass a different set of parameters based on the client call.