Tag Archives: lambda

Reducing Your Organization’s Carbon Footprint with Amazon CodeGuru Profiler

Post Syndicated from Isha Dua original https://aws.amazon.com/blogs/devops/reducing-your-organizations-carbon-footprint-with-codeguru-profiler/

As firms reorient their operations toward sustainable practices, it is crucial to examine every functional area. Reducing the environmental impact of an IT stack requires informed decisions when creating, deploying, and maintaining it. To build a sustainable business for our customers and for the world we all share, we have deployed data centers that provide the efficient, resilient service our customers expect while minimizing our environmental footprint—and theirs. While we work to improve the energy efficiency of our data centers, we also work to help our customers improve their operations on the AWS cloud. This two-pronged approach is based on the concept of shared responsibility between AWS and AWS customers. As shown in the diagram below, AWS focuses on the sustainability of the cloud, while customers are responsible for sustainability in the cloud, meaning that AWS customers must optimize the workloads they run on the AWS cloud.

Figure 1. Shared responsibility model for sustainability

Just by migrating to the cloud, AWS customers become significantly more sustainable in their technology operations. On average, AWS customers use 77% fewer servers, 84% less power, and a 28% cleaner power mix, ultimately reducing their carbon emissions by 88% compared to when they ran workloads in their own data centers. These improvements are attributable to the technological advancements and economies of scale that AWS data centers bring. However, there are still significant opportunities for AWS customers to make their cloud operations more sustainable. To uncover these opportunities, we must first understand how emissions are categorized.

The Greenhouse Gas Protocol organizes carbon emissions into the following scopes, along with relevant emission examples within each scope for a cloud provider such as AWS:

  • Scope 1: All direct emissions from the activities of an organization or under its control. For example, fuel combustion by data center backup generators.
  • Scope 2: Indirect emissions from electricity purchased and used to power data centers and other facilities. For example, emissions from commercial power generation.
  • Scope 3: All other indirect emissions from activities of an organization from sources it doesn’t control. AWS examples include emissions related to data center construction, and the manufacture and transportation of IT hardware deployed in data centers.

From an AWS customer perspective, emissions from customer workloads running on AWS are accounted for as indirect emissions, and part of the customer’s Scope 3 emissions. Each workload deployed generates a fraction of the total AWS emissions from each of the previous scopes. The actual amount varies per workload and depends on several factors including the AWS services used, the energy consumed by those services, the carbon intensity of the electric grids serving the AWS data centers where they run, and the AWS procurement of renewable energy.

At a high level, AWS customers approach optimization initiatives at three levels:

  • Application (Architecture and Design): Using efficient software designs and architectures to minimize the average resources required per unit of work.
  • Resource (Provisioning and Utilization): Monitoring workload activity and modifying the capacity of individual resources to prevent idling due to over-provisioning or under-utilization.
  • Code (Code Optimization): Using code profilers and other tools to identify the areas of code that use up the most time or resources as targets for optimization.

In this blog post, we will concentrate on code-level sustainability improvements and how they can be realized using Amazon CodeGuru Profiler.

How CodeGuru Profiler improves code sustainability

Amazon CodeGuru Profiler collects runtime performance data from your live applications and provides recommendations that can help you fine-tune your application performance. Using machine learning algorithms, CodeGuru Profiler can help you find your most CPU-intensive lines of code, which contribute the most to your scope 3 emissions. CodeGuru Profiler then suggests ways to improve the code to make it less CPU demanding. CodeGuru Profiler provides different visualizations of profiling data to help you identify what code is running on the CPU, see how much time is consumed, and find ways to reduce CPU utilization. Optimizing your code with CodeGuru Profiler leads to the following:

  • Improvements in application performance
  • Reduction in cloud cost, and
  • Reduction in the carbon emissions attributable to your cloud workload.

When your code performs the same task with less CPU, your applications run faster, customer experience improves, and your cost reduces alongside your cloud emissions. CodeGuru Profiler generates the recommendations that help you make your code faster by using an agent that continuously samples stack traces from your application. The stack traces indicate how much time the CPU spends on each function or method in your code—information that is then transformed into CPU and latency data that is used to detect anomalies. When anomalies are detected, CodeGuru Profiler generates recommendations that clearly outline what you should do to remediate the situation. Although CodeGuru Profiler has several visualizations that help you understand your code’s runtime behavior, in many cases, customers can implement these recommendations without reviewing the visualizations. Let’s demonstrate this with a simple example.

Demonstration: Using CodeGuru Profiler to optimize a Lambda function

In this demonstration, we will use CodeGuru Profiler to identify inefficiencies in an AWS Lambda function.

Building our Lambda Function (10mins)

To keep this demonstration quick and simple, let’s create a simple Lambda function that displays ‘Hello World’. Before writing the code for this function, let’s review two important concepts. First, when writing Python code that runs on AWS and calls AWS services, two critical steps are required:

  • Import the AWS SDK for Python (Boto3).
  • Create an AWS SDK service client for the AWS service you want to call.

The Python code lines (which will be part of our function) that carry out the two steps listed above are shown below:

import boto3 #this will import AWS SDK library for Python
VariableName = boto3.client('dynamodb') #this will create the AWS SDK service client

Second, AWS Lambda functions consist of two sections:

  • Initialization code
  • Handler code

The first time a function is invoked (i.e., a cold start), Lambda downloads the function code, creates the required runtime environment, runs the initialization code, and then runs the handler code. During subsequent invocations (warm starts), to keep execution time low, Lambda bypasses the initialization code and goes straight to the handler code. AWS Lambda is designed such that the SDK service client created during initialization persists into the handler code execution. For this reason, AWS SDK service clients should be created in the initialization code. If the code lines for creating the AWS SDK service client are placed in the handler code, the AWS SDK service client will be recreated every time the Lambda function is invoked, needlessly increasing the duration of the Lambda function during cold and warm starts. This inadvertently increases CPU demand (and cost), which in turn increases the carbon emissions attributable to the customer’s code. Below, you can see the green and brown versions of the same Lambda function.
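
A minimal, illustrative sketch of the two structures is shown below (the DynamoDB client is assumed purely for demonstration). The green version creates the SDK service client once in the initialization code; the brown version recreates it inside the handler on every invocation.

#green version: client created once in the initialization code and reused on warm starts
import boto3

client = boto3.client('dynamodb') #initialization code, runs only on a cold start

def lambda_handler(event, context):
  #handler code reuses the client created above
  return 'Hello World'

#brown version: client recreated on every invocation, adding CPU time to each request
def inefficient_lambda_handler(event, context):
  client = boto3.client('dynamodb') #recreated on every invocation
  return 'Hello World'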

Now that we understand the importance of structuring our Lambda function code for efficient execution, let’s create a Lambda function that recreates the SDK service client. We will then watch CodeGuru Profiler flag this issue and generate a recommendation.

  1. Open AWS Lambda from the AWS Console and click on Create function.
  2. Select Author from scratch, name the function ‘demo-function’, select Python 3.9 under runtime, select x86_64 under Architecture.
  3. Expand Permissions, then choose whether to create a new execution role or use an existing one.
  4. Expand Advanced settings, and then select Function URL.
  5. For Auth type, choose AWS_IAM or NONE.
  6. Select Configure cross-origin resource sharing (CORS). By selecting this option during function creation, your function URL allows requests from all origins by default. You can edit the CORS settings for your function URL after creating the function.
  7. Choose Create function.
  8. In the code editor tab of the code source window, copy and paste the code below:
#initialization code
import json
import boto3

#handler code
def lambda_handler(event, context):
  client = boto3.client('dynamodb') #create the AWS SDK service client
  #simple code block for demonstration purposes
  output = 'Hello World'
  print(output)
  #handler function return

  return output

Ensure that the handler code is properly indented.

  9. Save the code, Deploy, and then Test.
  10. For the first execution of this Lambda function, a test event configuration dialog will appear. On the Configure test event dialog window, leave the selection as the default (Create new event), enter ‘demo-event’ as the Event name, and leave the hello-world template as the Event template.
  11. When you run the code by clicking on Test, the console should return ‘Hello World’.
  12. To simulate actual traffic, let’s run a curl script that will invoke the Lambda function every 0.06 seconds. On a bash terminal, run the following command:
while true; do curl <Lambda Function URL>; sleep 0.06; done

If you do not have Git Bash installed, you can use AWS Cloud9, which supports curl commands.

Enabling CodeGuru Profiler for our Lambda function

We will now set up CodeGuru Profiler to monitor our Lambda function. For Lambda functions running on Java 8 (Amazon Corretto), Java 11, and Python 3.8 or 3.9 runtimes, CodeGuru Profiler can be enabled through a single click in the configuration tab in the AWS Lambda console. Other runtimes can be enabled by following a series of steps that can be found in the CodeGuru Profiler documentation for Java and Python.
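
As an illustration of what the manual setup can look like for a Python function, the sketch below wraps the handler with the profiler agent’s Lambda decorator. It assumes that the codeguru_profiler_agent package is bundled with the function and that a profiling group named MyProfilingGroup already exists; check the CodeGuru Profiler documentation for the exact steps for your runtime.

#sketch: manual instrumentation with the CodeGuru Profiler agent (assumptions noted above)
from codeguru_profiler_agent import with_lambda_profiler

@with_lambda_profiler(profiling_group_name='MyProfilingGroup') #profiling group assumed to exist
def lambda_handler(event, context):
  return 'Hello World'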

Our demo code is written in Python 3.9, so we will enable Profiler from the configuration tab in the AWS Lambda console.

  1. On the AWS Lambda console, select the demo-function that we created.
  2. Navigate to Configuration > Monitoring and operations tools, and click Edit on the right side of the page.

  3. Scroll down to Amazon CodeGuru Profiler and click the button next to Code profiling to turn it on. After enabling Code profiling, click Save.

Note: CodeGuru Profiler requires 5 minutes of Lambda runtime data to generate results. After your Lambda function provides this runtime data, which may require multiple runs if your Lambda function has a short runtime, it will display within the Profiling group page in the CodeGuru Profiler console. The profiling group will be given a default name (i.e., aws-lambda-<lambda-function-name>), and it will take approximately 15 minutes after CodeGuru Profiler receives the runtime data for this profiling group to appear. Be patient. Although our function duration is ~33ms, our curl script invokes the application once every 0.06 seconds. This should give the profiler sufficient information to profile our function in a couple of hours. After 5 minutes, our profiling group should appear in the list of active profiling groups as shown below.

Depending on how frequently your Lambda function is invoked, it can take up to 15 minutes to aggregate profiles, after which you can see your first visualization in the CodeGuru Profiler console. The granularity of the first visualization depends on how active your function was during those first 5 minutes of profiling—an application that is idle most of the time doesn’t have many data points to plot in the default visualization. However, you can remedy this by looking at a wider time period of profiled data, for example, a day or even up to a week, if your application has very low CPU utilization. For our demo function, a recommendation should appear after about an hour. By this time, the profiling groups list should show that our profiling group now has one recommendation.

Profiler has now flagged the repeated creation of the SDK service client with every invocation.

From the information provided, we can see that our CPU is spending 5x more computing time than expected on the recreation of the SDK service client. The estimated cost impact of this inefficiency is also provided. In production environments, the cost impact of seemingly minor inefficiencies can scale very quickly to several kilograms of CO2 and hundreds of dollars as invocation frequency and the number of Lambda functions increase.

CodeGuru Profiler integrates with Amazon DevOps Guru, a fully managed service that makes it easy for developers and operators to improve the performance and availability of their applications. Amazon DevOps Guru analyzes operational data and application metrics to identify behaviors that deviate from normal operating patterns. Once these operational anomalies are detected, DevOps Guru presents intelligent recommendations that address current and predicted future operational issues. By integrating with CodeGuru Profiler, customers can now view operational anomalies and code optimization recommendations on the DevOps Guru console. The integration, which is enabled by default, is only applicable to Lambda resources that are supported by CodeGuru Profiler and monitored by both DevOps Guru and CodeGuru.

We can now stop the curl loop (Control+C) so that the Lambda function stops running. Next, we delete the profiling group that was created when we enabled profiling in Lambda, and then delete the Lambda function or repurpose it as needed.

Conclusion

Cloud sustainability is a shared responsibility between AWS and our customers. While we work to make our data centers more sustainable, customers also have to work to make their code, resources, and applications more sustainable, and CodeGuru Profiler can help you improve code sustainability, as demonstrated above. To start profiling your code today, visit the CodeGuru Profiler documentation page. To start monitoring your applications, head over to the Amazon DevOps Guru documentation page.

About the authors:

Isha Dua

Isha Dua is a Senior Solutions Architect based in San Francisco Bay Area. She helps AWS Enterprise customers grow by understanding their goals and challenges, and guiding them on how they can architect their applications in a cloud native manner while making sure they are resilient and scalable. She’s passionate about machine learning technologies and Environmental Sustainability.

Christian Tomeldan

Christian Tomeldan is a DevOps Engineer turned Solutions Architect. Operating out of San Francisco, he is passionate about technology and conveys that passion to customers ensuring they grow with the right support and best practices. He focuses his technical depth mostly around Containers, Security, and Environmental Sustainability.

Ifeanyi Okafor

Ifeanyi Okafor is a Product Manager with AWS. He enjoys building products that solve customer problems at scale.

Export historical Security Hub findings to an S3 bucket to enable complex analytics

Post Syndicated from Jonathan Nguyen original https://aws.amazon.com/blogs/security/export-historical-security-hub-findings-to-an-s3-bucket-to-enable-complex-analytics/

AWS Security Hub is a cloud security posture management service that you can use to perform security best practice checks, aggregate alerts, and automate remediation. Security Hub has out-of-the-box integrations with many AWS services and over 60 partner products. Security Hub centralizes findings across your AWS accounts and supported AWS Regions into a single delegated administrator account in your aggregation Region of choice, creating a single pane of glass to consolidate and view individual security findings.

Because there are a large number of possible integrations across accounts and Regions, your delegated administrator account in the aggregation Region might have hundreds of thousands of Security Hub findings. To perform complex analytics or machine learning across the existing (historical) findings that are maintained in Security Hub, you can export findings to an Amazon Simple Storage Service (Amazon S3) bucket. To export new findings that have recently been created, you can implement the solution in the aws-security-hub-findings-export GitHub repository. However, Security Hub has data export API rate quotas, which can make exporting a large number of findings challenging.

In this blog post, we provide an example solution to export your historical Security Hub findings to an S3 bucket in your account, even if you have a large number of findings. We walk you through the components of the solution and show you how to use the solution after deployment.

Prerequisites

To deploy the solution, complete the following prerequisites:

  1. Enable Security Hub.
  2. If you want to export Security Hub findings for multiple accounts, designate a Security Hub administrator account.
  3. If you want to export Security Hub findings across multiple Regions, enable cross-Region aggregation.

Solution overview and architecture

In this solution, you use the following AWS services and features:

  • Security Hub export orchestration
    • AWS Step Functions helps you orchestrate automation and long-running jobs, which are integral to this solution. You need the ability to run a workflow for hours due to the Security Hub API rate limits and number of findings and objects.
    • AWS Lambda functions handle the logic for exporting and storing findings in an efficient and cost-effective manner. You can customize Lambda functions to most use cases.
  • Storage of exported findings
    • Amazon Simple Storage Service (Amazon S3) stores the exported findings as objects in a bucket in your account.
  • Job status tracking
    • Amazon EventBridge tracks changes in the status of the Step Functions workflow. The solution can run for over 100 hours; by using EventBridge, you don’t have to manually check the status.
    • Amazon Simple Notification Service (Amazon SNS) sends you notifications when the long-running jobs are complete or when they might have issues.
    • AWS Systems Manager Parameter Store provides a quick way to track overall status by maintaining a numeric count of successfully exported findings that you can compare with the number of findings shown in the Security Hub dashboard.

Figure 1 shows the architecture for the solution, deployed in the Security Hub delegated administrator account in the aggregation Region. The figure shows multiple Security Hub member accounts to illustrate how you can export findings for an entire AWS Organizations organization from a single delegated administrator account.

Figure 1: High-level overview of process and resources deployed in the Security Hub account

As shown in Figure 1, the workflow after deployment is as follows:

  1. The Step Functions workflow for the Security Hub export is invoked.
  2. The Step Functions workflow invokes a single Lambda function (a minimal sketch of this function follows this list) that does the following:
    1. Retrieves Security Hub findings that have an Active status and puts them in a temporary file.
    2. Pushes the file as an object to Amazon S3.
    3. Adds the global count of exported findings from the Step Functions workflow to a Systems Manager parameter for validation and tracking purposes.
    4. Repeats the preceding steps for about 10 minutes to get the most findings while preventing the Lambda function from timing out.
    5. If a nextToken is present, pushes the nextToken to the output of the Step Functions.

      Note: If the number of items in the output is smaller than the number of items returned by the API call, then the return output includes a nextToken, which can be passed to a subsequent command to retrieve the next set of items.

  3. The Step Functions workflow goes through a Choice state as follows:
    • If a Security Hub nextToken is present, Step Functions invokes the Lambda function again.
    • If a Security Hub nextToken isn’t present, Step Functions ends the workflow successfully.
  4. An EventBridge rule tracks changes in the status of the Step Functions workflow and sends events to an SNS topic. Subscribers to the SNS topic receive a notification when the status of the Step Functions workflow changes.
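
To make the Lambda function in step 2 concrete, here is a minimal sketch (not the repository’s actual implementation) of how the export logic could page through GetFindings, write each batch to Amazon S3, and update the Systems Manager counter. The bucket name and the event shape are assumptions; the /sechubexport/findingcount parameter name matches the one referenced later in this post.

import json
import time
import boto3

securityhub = boto3.client('securityhub')
s3 = boto3.client('s3')
ssm = boto3.client('ssm')

BUCKET = 'example-findings-bucket'          #assumption: replace with the bucket deployed by the solution
COUNT_PARAM = '/sechubexport/findingcount'  #Systems Manager parameter that tracks exported findings

def lambda_handler(event, context):
    next_token = event.get('NextToken')     #assumption: the workflow passes the token in this key
    exported = 0
    deadline = time.time() + 10 * 60        #stop after ~10 minutes to avoid a Lambda timeout

    while time.time() < deadline:
        kwargs = {
            'Filters': {'RecordState': [{'Value': 'ACTIVE', 'Comparison': 'EQUALS'}]},
            'MaxResults': 100,
        }
        if next_token:
            kwargs['NextToken'] = next_token

        response = securityhub.get_findings(**kwargs)
        findings = response.get('Findings', [])
        if findings:
            key = 'findings/{}.json'.format(int(time.time() * 1000))
            s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(findings, default=str))
            exported += len(findings)

        next_token = response.get('NextToken')
        if not next_token:
            break

    #maintain a running total for validation against the Security Hub dashboard
    current = int(ssm.get_parameter(Name=COUNT_PARAM)['Parameter']['Value'])
    ssm.put_parameter(Name=COUNT_PARAM, Value=str(current + exported), Overwrite=True, Type='String')

    #returning NextToken lets the Step Functions Choice state decide whether to invoke again
    return {'NextToken': next_token, 'ExportedCount': exported}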

Deploy the solution

You can deploy the solution through either the AWS Management Console or the AWS Cloud Development Kit (AWS CDK).

To deploy the solution (console)

  • In your delegated administrator Security Hub account, launch the AWS CloudFormation template by choosing the following Launch Stack button. It will take about 10 minutes for the CloudFormation stack to complete.

    Launch Stack

    Note: The stack will launch in the US East (N. Virginia) Region (us-east-1). If you are using cross-Region aggregation, deploy the solution into the Region where Security Hub findings are consolidated. You can download the CloudFormation template for the solution, modify it, and deploy it to your selected Region.

To deploy the solution (AWS CDK)

  1. Download the code from our aws-security-hub-findings-historical-export GitHub repository, where you can also contribute to the sample code. The CDK initializes your environment and uploads the Lambda assets to Amazon S3. Then, you deploy the solution to your account.
  2. While you are authenticated in the security tooling account, run the following commands in your terminal. Make sure to replace <AWS_ACCOUNT> with the account number, and replace <REGION> with the AWS Region where you want to deploy the solution.
    cdk bootstrap aws://<AWS_ACCOUNT>/<REGION>
    cdk deploy SechubHistoricalPullStack

Solution walkthrough and validation

Now that you’ve successfully deployed the solution, you can see each aspect of the automation workflow in action.

Before you start the workflow, you need to subscribe to the SNS topic so that you’re notified of status changes within the Step Functions workflow. For this example, you will use email notification.

To subscribe to the SNS topic

  1. Open the Amazon SNS console.
  2. Go to Topics and choose the Security_Hub_Export_Status topic.
  3. Choose Create subscription.
  4. For Protocol, choose Email.
  5. For Endpoint, enter the email address where you want to receive notifications.
  6. Choose Create subscription.
  7. After you create the subscription, go to your email and confirm the subscription.

You’re now subscribed to the SNS topic, so any time that the Step Functions status changes, you will receive a notification. Let’s walk through how to run the export solution.

To run the export solution

  1. Open the AWS Step Functions console.
  2. In the left navigation pane, choose State machines.
  3. Choose the new state machine named sec_hub_finding_export.
  4. Choose Start execution.
  5. On the Start execution page, for Name – optional and Input – optional, leave the default values and then choose Start execution.
    Figure 2: Example input values for execution of the Step Functions workflow

  6. This will start the Step Functions workflow and redirect you to the Graph view. If successful, you will see that the overall Execution status and each step have a status of Successful.
  7. For long-running jobs, you can view the CloudWatch log group associated with the Lambda function to view the logs.
  8. To track the number of Security Hub findings that have been exported, open the Systems Manager console, choose Parameter Store, and then select the /sechubexport/findingcount parameter. Under Value, you will see the total number of Security Hub findings that have been exported, as shown in Figure 3.
    Figure 3: Systems Manager Parameter Store value for the number of Security Hub findings exported

Depending on the number of Security Hub findings, this process can take some time. This is primarily due to the GetFindings quota of 3 requests per second. Each GetFindings request can return a maximum of 100 findings, so this means that you can get up to 300 findings per second. On average, the solution can export about 1 million findings per hour. If you have a large number of findings, you can start the finding export process and wait for the SNS topic to notify you when the process is complete.
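
As a quick sanity check, the arithmetic behind that estimate looks like this:

requests_per_second = 3                  #GetFindings rate quota
findings_per_request = 100               #GetFindings maximum page size
findings_per_second = requests_per_second * findings_per_request   #300
findings_per_hour = findings_per_second * 60 * 60                  #1,080,000, roughly 1 million
print(findings_per_hour)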

How to customize the solution

The solution provides a general framework to help you export your historical Security Hub findings. There are many ways that you can customize this solution based on your needs. The following are some enhancements that you can consider.

Change the Security Hub finding filter

The solution currently pulls all findings with RecordState: ACTIVE, which pulls the active Security Hub findings in the AWS account. You can update the Lambda function code, specifically the finding_filter JSON value within the create_filter function, to pull findings for your use case. For example, to narrow the export to active findings that have a workflow state of NEW, update the Lambda function code as follows.

{
    "WorkflowState": [
        {
            "Value": "NEW",
            "Comparison": "EQUALS"
        }
    ],
    "RecordState": [
        {
            "Value": "ACTIVE",
            "Comparison": "EQUALS"
        }
    ]
}

Export more than 100 million Security Hub findings

The example solution can export about 100 million Security Hub findings. This number is primarily determined by the speed at which findings can be exported, due to the following factors:

If you want to export more than 100 million Security Hub findings, do one of the following:

Note: If you implement either of these solutions, make sure that the nextToken also gets passed to the new Step Functions execution by updating the Lambda function code to parse and pass the nextToken received in the last request.

Speed up the export

One way to increase the export bandwidth, and reduce the overall execution time, is to run the export job in parallel across the individual Security Hub member accounts rather than from the single delegated administrator account.

You could use CloudFormation StackSets to deploy this solution in each Security Hub member account and send the findings to a centralized S3 bucket. You would need to modify the solution to allow an S3 bucket to be provided as an input, and all the Lambda function Identity and Access Management (IAM) roles would need cross-account access to the S3 bucket and corresponding AWS Key Management Service (AWS KMS) key. You would also need to make updates in each member account to iterate through the various Regions in which the Security Hub findings exist.

Next steps

The solution in this post is designed to assist in the retrieval and export of all existing findings currently in Security Hub. After you successfully run this solution to export historical findings, you can continuously export new Security Hub findings by using the sample solution in the aws-security-hub-findings-export GitHub repository.

Now that you’ve exported the Security Hub findings, you can set up and run custom complex reporting or queries against the S3 bucket by using Amazon Athena and AWS Glue. Additionally, you can run machine learning and analytics capabilities by using services like Amazon SageMaker or Amazon Lookout for Metrics.

Conclusion

In this post, you deployed a solution to export the existing Security Hub findings in your account to a central S3 bucket, so that you can apply complex analytics and machine learning to those findings. We walked you through how to use the solution and apply it to some example use cases after you successfully exported existing findings across your AWS environment. Now your security team can use the data in the S3 bucket for predictive analytics and determine if there are Security Hub findings and specific resources that might need to be prioritized for review due to a deviation from normal behavior. Additionally, you can use this solution to enable more complex analytics on multiple fields by querying large and complex datasets with Amazon Athena.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a thread on AWS Security Hub re:Post.

 
Want more AWS Security news? Follow us on Twitter.

Jonathan Nguyen

Jonathan is a Shared Delivery Team Senior Security Consultant at AWS. His background is in AWS Security with a focus on threat detection and incident response. Today, he helps enterprise customers develop a comprehensive security strategy and deploy security solutions at scale, and he trains customers on AWS Security best practices.

Implement step-up authentication with Amazon Cognito, Part 2: Deploy and test the solution

Post Syndicated from Salman Moghal original https://aws.amazon.com/blogs/security/implement-step-up-authentication-with-amazon-cognito-part-2-deploy-and-test-the-solution/

This solution consists of two parts. In the previous blog post Implement step-up authentication with Amazon Cognito, Part 1: Solution overview, you learned about the architecture and design of a step-up authentication solution that uses AWS services such as Amazon API Gateway, Amazon Cognito, Amazon DynamoDB, and AWS Lambda to protect privileged API operations. In this post, you will use a reference implementation to deploy and test the step-up authentication solution in your AWS account.

Solution deployment

The step-up authentication solution discussed in Part 1 uses a reference implementation that you can use for demonstration and learning purposes. You can also review the implementation code in the step-up-auth GitHub repository. The reference implementation includes a web application that you can use in the following sections to test the step-up implementation. Additionally, the implementation contains a sample privileged API action /transfer, a non-privileged API action /info, and two step-up authentication solution API operations, /initiate-auth and /respond-to-challenge. The web application invokes these API operations to demonstrate how to perform step-up authentication.

Deployment prerequisites

The following are prerequisites for deployment:

  1. The Node.js runtime and the node package manager (npm) are installed on your machine. You can use a package manager for your platform to install these. Note that the reference implementation code was tested using Node.js v16 LTS.
  2. The AWS Cloud Development Kit (AWS CDK) is installed in your environment.
  3. The AWS Command Line Interface (AWS CLI) is installed in your environment.
  4. You must have AWS credentials files that contain a profile with your account secret key and access key to perform the deployment. Make sure that your account has enough privileges to create, update, or delete the following resources:
  5. A two-factor authentication (2FA) mobile application, such as Google Authenticator, is installed on your mobile device.

Deploy the step-up solution

You can deploy the solution by using the AWS CDK, which will create a working reference implementation of the step-up authentication solution.

To deploy the solution

  1. Build the necessary resources by using the build.sh script in the deployment folder. Run the build script from a terminal window, using the following command:
    cd deployment && ./build.sh
  2. Set the environment variables that the deploy.sh script (present in the deployment folder) requires by running the following commands. Be sure to replace the placeholder values with your own.
    export AWS_REGION=<your AWS Region of choice, for example us-east-2>
    export AWS_ACCOUNT=<your account number>
    export AWS_PROFILE=<a valid profile in .aws/credentials that contains the secret/access key to your account>
    export NODE_ENV=development
    export ENV_PREFIX=dev

    The account you specify in the AWS_ACCOUNT environment variable is used to bootstrap the AWS CDK deployment. Set AWS_PROFILE to point to your profile. Make sure that your account has sufficient privileges, as described in the prerequisites.

    The NODE_ENV environment variable can be set to development or production. This variable controls the log output that the Lambda functions generate. The ENV_PREFIX environment variable allows you to prefix all resources with a tag, which enables a multi-tenant deployment of this solution.

  3. Still in the deployment folder, deploy the stack by using the following command:
    ./deploy.sh
  4. Make note of the CloudFront distribution URL that follows Sample Web App URL, as shown in Figure 1. In the next section, you will use this CloudFront distribution URL to load the sample web app in a web browser and test the step-up solution.
    Figure 1: The output of the deployment process

After the deployment script deploy.sh completes successfully, the AWS CDK creates the following resources in your account:

  • An Amazon Cognito user pool that is used as a user registry.
  • An Amazon API Gateway API that contains three resources:
    • A protected resource that requires step-up authentication.
    • An initiate-auth resource to start the step-up challenge response.
    • A respond-to-challenge resource to complete the step-up challenge.
  • An API Gateway Lambda authorizer that is used to protect API actions.
  • The following Amazon DynamoDB tables:
    • A setting table that holds the configuration mapping of the API operations that require elevated privileges.
    • A session table that holds temporary, user-initiated step-up sessions and their current status.
  • A React web UI that demonstrates how to invoke a privileged API action and go through step-up authentication.

Test the step-up solution

In order to test the step-up solution, you’ll use the sample web application that you deployed in the previous section. Here’s an overview of the actions you’ll perform to test the flow:

  1. In the AWS Management Console, create items in the setting DynamoDB table that point to privileged API actions. After the solution deployment, the setting DynamoDB table is called step-up-auth-setting-<ENV_PREFIX>. For more information about ENV_PREFIX variable usage in a multi-tenant environment, see Deploy the step-up solution earlier in this post.

    As discussed in the Data design section in Part 1 of this series, the Lambda authorizer treats all API invocations as non-privileged (that is, they don’t require step-up authentication) unless there is a matching entry for the API action in the setting table. Additionally, you can switch a privileged API action to a non-privileged API action by simply changing the stepUpState attribute in the setting table. Create an item in the DynamoDB table for the sample /transfer API action and for the sample /info API action. The /transfer API action will require step-up authentication, whereas the /info API action will be a non-privileged invocation that does not require step-up authentication. Note that there is no need to define a non-privileged API action in the table; it is there for illustration purposes only.

  2. If you haven’t already, install Google Authenticator or a similar two-factor authentication (2FA) application on your mobile device.
  3. Using the sample web application, register a new user in Amazon Cognito.
  4. Log in to the sample web application by using the registered new user.
  5. Configure the preferred multi-factor authentication (MFA) settings for the logged in user in the application. This step is necessary so that Amazon Cognito can challenge the user with a one-time password (OTP).
  6. Using the sample web application, invoke the sample /transfer privileged API action that requires step-up authentication.
  7. The Lambda authorizer will intercept the API request and return a 401 Unauthorized response status code that the sample web application will handle. The application will perform step-up authentication by prompting you to provide additional security credentials, specifically the OTP. To complete the step-up authentication, enter the OTP, which is sent through short message service (SMS) or generated by an authenticator mobile app.
  8. Invoke the sample /transfer privileged API action again in the sample web application, and verify that the API invocation is successful.

The following instructions assume that you’ve installed a 2FA mobile application, such as Google Authenticator, on your mobile device. You will configure the 2FA application in the following steps and use the OTP from this mobile application when prompted to enter the step-up challenge. You can configure Amazon Cognito to send you an SMS with the OTP. However, you must be aware of the Amazon Cognito throttling limits. See the Additional considerations section in Part 1 of this series. Read these limits carefully, especially if you set the user’s preferred MFA setting to SMS.

To test the step-up authentication solution

  1. Open the Amazon DynamoDB console and log in to your AWS account.
  2. On the left nav pane, under Tables, choose Explore items. In the right pane, choose the table named step-up-auth-setting* and choose Create item, as shown in Figure 2.
    Figure 2: Choose the step-up-auth-setting* table and choose Create item button

  3. In the Edit item screen as shown in Figure 3, ensure that JSON is selected, and the Attributes button for View DynamoDB JSON is off.
    Figure 3: Edit an item in the table – select JSON and turn off View DynamoDB JSON button

  4. To create an entry for the /info API action, copy the following JSON text:
    {
       "id": "/info",
       "lastUpdateTimestamp": "2021-08-23T08:25:29.023Z",
       "stepUpState": "STEP_UP_NOT_REQUIRED",
       "createTimestamp": "2021-08-23T08:25:29.023Z"
    }
  5. Paste the copied JSON text for the /info API action in the Attributes text area, as shown in Figure 4, and choose Create item.
    Figure 4: Create an entry for the /info API action

  6. To create an entry for the /transfer API action, copy the following JSON text:
    {
       "id": "/transfer",
       "lastUpdateTimestamp": "2021-08-23T08:22:12.436Z",
       "stepUpState": "STEP_UP_REQUIRED",
       "createTimestamp": "2021-08-23T08:22:12.436Z"
    }
  7. Paste the copied JSON text for the /transfer API action in the Attributes text area, as shown in Figure 5, and choose Create item.
    Figure 5: Create an entry for the /transfer API action

  8. Open your web browser and load the CloudFront URL that you made note of in step 4 of the Deploy the step-up solution procedure.
  9. On the login screen of the sample web application, enter the information for a new user. Make sure that the email address and phone numbers are valid. Choose Register. You will be prompted to enter a verification code. Check your email for the verification code, and enter it at the sample web application prompt.
  10. You will be sent back to the login screen. Log in as the user that you just registered. You will see the welcome screen, as shown in Figure 6.
    Figure 6: Welcome screen of the sample web application

  11. In the left nav pane choose Setting, and then choose the Configure button to the right of Software Token, as shown in Figure 7. Use the 2FA application on your mobile device (for example, Google Authenticator) to scan the QR code shown on the screen.
    Figure 7: Configure Software Token screen with QR code

  12. Enter the temporary code from the 2FA application into the web application and choose Submit. You will see the message Software Token successfully configured!
  13. Still in the Setting menu, next to Select Preferred MFA, choose Software Token. You will see the message User preferred MFA set to Software Token, as shown in Figure 8.
    Figure 8: Completed Software Token setup

  14. In the left nav pane choose StepUp Auth. In the right pane, choose Invoke Transfer API. You should see Response: 401 authorization challenge, as shown in Figure 9.
    Figure 9: The step-up API invocation returns an authorization challenge

  15. On your mobile device, open the 2FA application, copy the OTP code from the 2FA application, and enter the code into the Enter OTP field, as shown in Figure 9. Choose Submit.
  16. This sends the OTP to the respond-to-challenge endpoint. After the OTP is verified, the endpoint will return a success or failure message. Figure 10 shows a successful OTP verification. You are prompted to invoke the /transfer privileged API action again.
    Figure 10: The OTP prompt during step-up API invocation

  17. Invoke the transfer API action again by choosing Invoke Transfer API. You should see a success message as shown in Figure 11.
    Figure 11: A successful step-up API invocation

    Congratulations! You’ve successfully performed step-up authentication.

Conclusion

In the previous post in this series, Implement step-up authentication with Amazon Cognito, Part 1: Solution overview, you learned about the architecture and implementation details for the step-up authentication solution. In this blog post, you learned how to deploy and test the step-up authentication solution in your AWS account. You deployed the solution by using scripts from the step-up-auth GitHub repository that use the AWS CDK to create resources in your account for Amazon Cognito, Amazon API Gateway, a Lambda authorizer, and Amazon DynamoDB. Finally, you tested the end-to-end solution on a sample web application by invoking a privileged API action that required step-up authentication. Using the 2FA application, you were able to pass in an OTP to complete the step-up authentication and subsequently successfully invoke the privileged API action.

For more information about Amazon Cognito user pools and the new console experience, watch the video Amazon Cognito User Pools New Console Walkthrough on the AWS channel on YouTube. And for more information about how to protect your API actions with fine-grained access controls, see the blog post Building fine-grained authorization using Amazon Cognito, API Gateway, and IAM.

If you have feedback about this post, submit comments in the Comments section below. If you have any questions about this post, start a thread on the Amazon Cognito forum.

Want more AWS Security news? Follow us on Twitter.

Salman Moghal

Salman is a Principal Consultant in AWS Professional Services, based in Toronto, Canada. He helps customers in architecting, developing, and reengineering data-driven applications at scale, with a sharp focus on security.

Thomas Ross

Thomas is a Software Engineering student at Carleton University. He worked at AWS as a Professional Services Intern and a Software Development Engineer Intern in Amazon Aurora. He has an interest in almost anything related to technology, especially systems at high scale, security, distributed systems, and databases.

Ozair Sheikh

Ozair is a senior product leader for Sponsored Display in Amazon ads, based in Toronto, Canada. He helps advertisers and Ad Tech API Partners build campaign management solutions to reach customers across the purchase journey. He has over 10 years of experience in API management and security, with an obsession for delivering highly secure API products.

Mahmoud Matouk

Mahmoud is a Principal Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

Implement step-up authentication with Amazon Cognito, Part 1: Solution overview

Post Syndicated from Salman Moghal original https://aws.amazon.com/blogs/security/implement-step-up-authentication-with-amazon-cognito-part-1-solution-overview/

In this blog post, you’ll learn how to protect privileged business transactions that are exposed as APIs by using multi-factor authentication (MFA) or security challenges. These challenges have two components: what you know (such as passwords), and what you have (such as a one-time password token). By using these multi-factor security controls, you can implement step-up authentication to obtain a higher level of security when you perform critical transactions. In this post, we show you how you can use AWS services such as Amazon API Gateway, Amazon Cognito, Amazon DynamoDB, and AWS Lambda functions to implement step-up authentication by using a simple rule-based security model for your API resources.

Previously, identity and access management solutions have attempted to deliver step-up authentication by retrofitting their runtimes with stateful server-side management, which doesn’t scale in the modern-day stateless cloud-centered application architecture. We’ll show you how to use a pluggable, stateless authentication implementation that integrates into your existing infrastructure without compromising your security or performance. The Amazon API Gateway Lambda authorizer is a pluggable serverless function that acts as an intermediary step before an API action is invoked. This Lambda authorizer, coupled with a small SDK library that runs in the authorizer, will provide step-up authentication.

This solution consists of two blog posts. This is Part 1, where you’ll learn about the step-up authentication solution architecture and design. In the next post, Implement step-up authentication with Amazon Cognito, Part 2: Deploy and test the solution, you’ll learn how to use a reference implementation to test the step-up authentication solution.

Prerequisites

The reference architecture in this post uses a purpose-built step-up authorization workflow engine, which uses a custom SDK. The custom SDK uses the DynamoDB service as a persistent layer. This workflow engine is generic and can be used across any API serving layers, such as API Gateway or Elastic Load Balancing (ELB) Application Load Balancer, as long as the API serving layers can intercept API requests to perform additional actions. The step-up workflow engine also relies on an identity provider that is capable of issuing an OAuth 2.0 access token.

There are three parts to the step-up authentication solution:

  1. An API serving layer with the capability to apply custom logic before applying business logic.
  2. An OAuth 2.0–capable identity provider system.
  3. A purpose-built step-up workflow engine.

The solution in this post uses Amazon Cognito as the identity provider, with an API Gateway Lambda authorizer to invoke the step-up workflow engine, and DynamoDB as a persistent layer used by the step-up workflow engine. You can see a reference implementation of the API Gateway Lambda authorizer in the step-up-auth GitHub repository. Additionally, the purpose-built step-up workflow engine provides two API endpoints (or API actions), /initiate-auth and /respond-to-challenge, which are realized using the API Gateway Lambda authorizer, to drive the API invocation step-up state.

Note: If you decide to use an API serving layer other than API Gateway, or use an OAuth 2.0 identity provider besides Amazon Cognito, you will have to make changes to the accompanying sample code in the step-up-auth GitHub repository.

Solution architecture

Figure 1 shows the high-level reference architecture.

Figure 1: Step-up authentication high-level reference architecture

First, let’s talk about the core components in the step-up authentication reference architecture in Figure 1.

Identity provider

In order for a client application or user to invoke a protected backend API action, they must first obtain a valid OAuth token or JSON web token (JWT) from an identity provider. The step-up authentication solution uses Amazon Cognito as the identity provider. The step-up authentication solution and the accompanying step-up API operations use the access token to make the step-up authorization decision.

Protected backend

The step-up authentication solution uses API Gateway to protect backend resources. API Gateway supports several different API integration types, and you can use any one of the supported API Gateway integration types. For this solution, the accompanying sample code in the step-up-auth GitHub repository uses Lambda proxy integration to simulate a protected backend resource.

Data design

The step-up authentication solution relies on two DynamoDB tables, a session table and a setting table. The session table contains the user’s step-up session information, and the setting table contains an API step-up configuration. The API Gateway Lambda authorizer (described in the next section) checks the setting table to determine whether the API request requires a step-up session. For more information about table structure and sample values, see the Step-up authentication data design section in the accompanying GitHub repository.

The session table has the DynamoDB Time to Live (TTL) feature enabled. An item stays in the session table until the TTL time expires, when DynamoDB automatically deletes the item. The TTL value can be controlled by using the environment variable SESSION_TABLE_ITEM_TTL. Later in this post, we’ll cover where to define this environment variable in the Step-up solution design details section; and we’ll cover how to set the optimal value for this environment variable in the Additional considerations section.
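
As an illustration of how that TTL could be applied when a session item is written, the sketch below computes an expiry timestamp in epoch seconds, which is what the DynamoDB TTL feature expects. The table and attribute names here are assumptions; see the data design documentation in the step-up-auth repository for the actual schema.

import os
import time
import boto3

dynamodb = boto3.client('dynamodb')
TTL_SECONDS = int(os.environ.get('SESSION_TABLE_ITEM_TTL', '300'))  #default value is an assumption

def put_step_up_session(table_name, jti, state):
    #DynamoDB TTL deletes the item after the epoch-seconds value in the TTL attribute
    dynamodb.put_item(
        TableName=table_name,
        Item={
            'id': {'S': jti},          #access token JTI claim as the key (attribute name assumed)
            'state': {'S': state},     #for example STEP_UP_REQUIRED or STEP_UP_COMPLETED
            'ttl': {'N': str(int(time.time()) + TTL_SECONDS)},  #TTL attribute name assumed
        },
    )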

Authorizer

The step-up authentication solution uses a purpose-built request parameter-based Lambda authorizer (also called a REQUEST authorizer). This REQUEST authorizer helps protect privileged API operations that require a step-up session.

The authorizer verifies that the API request contains a valid access token in the HTTP Authorization header. Using the access token’s JSON web token ID (JTI) claim as a key, the authorizer then attempts to retrieve a step-up session from the session table. If a session exists and its state is set to either STEP_UP_COMPLETED or STEP_UP_NOT_REQUIRED, then the authorizer lets the API call through by generating an allow API Gateway Lambda authorizer policy. If the step-up state is set to STEP_UP_REQUIRED, then the authorizer returns a 401 Unauthorized response status code to the caller.

If a step-up session does not exist in the session table for the incoming API request, then the authorizer attempts to create a session. It first looks up the setting table for the API configuration. If an API configuration is found and the configuration status is set to STEP_UP_REQUIRED, it indicates that the user must provide additional authentication in order to call this API action. In this case, the authorizer will create a new session in the session table by using the access token’s JTI claim as a session key, and it will return a 401 Unauthorized response status code to the caller. If the API configuration in the setting table is set to STEP_UP_DENY, then the authorizer will return a deny API Gateway Lambda authorizer policy, therefore blocking the API invocation. The caller will receive a 403 Forbidden response status code.

The authorizer uses the purpose-built auth-sdk library to interface with both the session and setting DynamoDB tables. The auth-sdk library provides convenient methods to create, update, or delete items in tables. Internally, auth-sdk uses the DynamoDB v3 Client SDK.
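
Condensed into a pseudocode-style sketch, the authorizer’s decision flow described above looks roughly like the following. The helper names (validate_access_token, get_session, get_setting, create_session, allow_policy, deny_policy) are hypothetical stand-ins for the auth-sdk and policy-generation code in the reference implementation.

#rough sketch of the REQUEST authorizer decision flow; all helpers are hypothetical
def authorize(event):
    token = validate_access_token(event['headers']['Authorization'])  #verify the JWT signature and claims
    jti = token['jti']
    api_action = event['path']

    session = get_session(jti)                       #look up the session table by JTI
    if session:
        if session['state'] in ('STEP_UP_COMPLETED', 'STEP_UP_NOT_REQUIRED'):
            return allow_policy(token)               #let the API call through
        if session['state'] == 'STEP_UP_REQUIRED':
            raise Exception('Unauthorized')          #API Gateway returns 401 to the caller

    setting = get_setting(api_action)                #look up the setting table for this API action
    if setting and setting['stepUpState'] == 'STEP_UP_REQUIRED':
        create_session(jti, state='STEP_UP_REQUIRED')
        raise Exception('Unauthorized')              #401: the client must step up
    if setting and setting['stepUpState'] == 'STEP_UP_DENY':
        return deny_policy(token)                    #caller receives 403 Forbidden

    return allow_policy(token)                       #non-privileged API action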

Initiate auth endpoint

When you deploy the step-up authentication solution, you will get the following two API endpoints:

  1. The initiate step-up authentication endpoint (described in this section).
  2. The respond to step-up authentication challenge endpoint (described in the next section).

When a client receives a 401 Unauthorized response status code from API Gateway after invoking a privileged API operation, the client can start the step-up authentication flow by invoking the initiate step-up authentication endpoint (/initiate-auth).

The /initiate-auth endpoint does not require any extra parameters; it only requires the Amazon Cognito access_token to be passed in the Authorization header of the request. The /initiate-auth endpoint uses the access token to call the Amazon Cognito API actions GetUser and GetUserAttributeVerificationCode on behalf of the user.
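
For example, a client could start the flow with a request along these lines. The base URL and HTTP method are placeholders and the requests library is used purely for illustration; check the reference implementation for the exact request format.

import requests  #illustration only; any HTTP client works

API_BASE = 'https://example.execute-api.us-east-1.amazonaws.com/prod'  #placeholder URL

def initiate_step_up(access_token):
    #the only input /initiate-auth needs is the Amazon Cognito access token in the Authorization header
    response = requests.post(                       #assumed to be a POST; verify against the sample code
        API_BASE + '/initiate-auth',
        headers={'Authorization': access_token},
    )
    return response.json()  #e.g. SMS_STEP_UP, SOFTWARE_TOKEN_STEP_UP, or MAYBE_SOFTWARE_TOKEN_STEP_UP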

After the /initiate-auth endpoint has determined the proper multi-factor authentication (MFA) method to use, it returns the MFA method to the client. There are three possible values for the MFA methods:

  • MAYBE_SOFTWARE_TOKEN_STEP_UP, which is used when the MFA method cannot be determined.
  • SOFTWARE_TOKEN_STEP_UP, which is used when the user prefers software token MFA.
  • SMS_STEP_UP, which is used when the user prefers short message service (SMS) MFA.

Let’s take a closer look at how the /initiate-auth endpoint determines which MFA method to return to the client. The endpoint calls the Amazon Cognito GetUser API action to check the user’s preferences, and it takes the following actions (a code sketch after the flowchart below mirrors this logic):

  1. Determines what method of MFA the user prefers, either software token or SMS.
  2. If the user’s preferred method is set to software token, the endpoint returns SOFTWARE_TOKEN_STEP_UP code to the client.
  3. If the user’s preferred method is set to SMS, the endpoint sends an SMS message with a code to the user’s mobile device. It uses the Amazon Cognito GetUserAttributeVerificationCode API action to send the SMS message. After the Amazon Cognito API action returns success, the endpoint returns SMS_STEP_UP code to the client.
  4. When the user preferences don’t include either a software token or SMS, the endpoint checks if the response from Amazon Cognito GetUser API action contains UserMFASetting response attribute list with either SOFTWARE_TOKEN_MFA or SMS_MFA keywords. If the UserMFASetting response attribute list contains SOFTWARE_TOKEN_MFA, then the endpoint returns SOFTWARE_TOKEN_STEP_UP code to the client. If it contains SMS_MFA keyword, then the endpoint invokes the Amazon Cognito GetUserAttributeVerificationCode API action to send the SMS message (as in step 3). Upon successful response from the Amazon Cognito API action, the endpoint returns SMS_STEP_UP code to the client.
  5. If the UserMFASetting response attribute list from Amazon Cognito GetUser API action does not contain SOFTWARE_TOKEN_MFA or SMS_MFA keywords, then the endpoint looks for phone_number_verified attribute. If found, then the endpoint sends an SMS message with a code to the user’s mobile device with verified phone number. The endpoint uses the Amazon Cognito GetUserAttributeVerificationCode API action to send the SMS message (as in step 3). Otherwise, when no verified phone is found, the endpoint returns MAYBE_SOFTWARE_TOKEN_STEP_UP code to the client.

The flowchart shown in Figure 2 illustrates the full decision logic.

Figure 2: MFA decision flow chart
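
A compact sketch of that decision logic is shown below. Here, get_user and send_sms_code are hypothetical wrappers around the Amazon Cognito GetUser and GetUserAttributeVerificationCode API actions, and the attribute lookups assume the wrapper flattens the GetUser response.

#sketch of the /initiate-auth MFA-method selection; helpers and response shape are assumptions
def determine_step_up_method(access_token):
    user = get_user(access_token)                    #wraps the Amazon Cognito GetUser API action

    preferred = user.get('PreferredMfaSetting')
    if preferred == 'SOFTWARE_TOKEN_MFA':
        return 'SOFTWARE_TOKEN_STEP_UP'
    if preferred == 'SMS_MFA':
        send_sms_code(access_token)                  #wraps GetUserAttributeVerificationCode (SMS OTP)
        return 'SMS_STEP_UP'

    mfa_settings = user.get('UserMFASettingList', [])
    if 'SOFTWARE_TOKEN_MFA' in mfa_settings:
        return 'SOFTWARE_TOKEN_STEP_UP'
    if 'SMS_MFA' in mfa_settings:
        send_sms_code(access_token)
        return 'SMS_STEP_UP'

    if user.get('phone_number_verified') == 'true':  #user attribute check, flattened by the wrapper
        send_sms_code(access_token)
        return 'SMS_STEP_UP'

    return 'MAYBE_SOFTWARE_TOKEN_STEP_UP'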

Respond to challenge endpoint

The respond to challenge endpoint (/respond-to-challenge) is called by the client after it receives an appropriate MFA method from the /initiate-auth endpoint. The user must respond to the challenge appropriately by invoking /respond-to-challenge with a code and an MFA method.

The /respond-to-challenge endpoint receives two parameters in the POST body, one indicating the MFA method and the other containing the challenge response. Additionally, this endpoint requires the Amazon Cognito access token to be passed in the Authorization header of the request.

If the MFA method is SMS_STEP_UP, the /respond-to-challenge endpoint invokes the Amazon Cognito API action VerifyUserAttribute to verify the user-provided challenge response, which is the code that was sent by using SMS.

If the MFA method is SOFTWARE_TOKEN_STEP_UP or MAYBE_SOFTWARE_TOKEN_STEP_UP, the /respond-to-challenge endpoint invokes the Amazon Cognito API action VerifySoftwareToken to verify the challenge response that was sent in the endpoint payload.

After the user-provided challenge response is verified, the /respond-to-challenge endpoint updates the session table with the step-up session state STEP_UP_COMPLETED by using the access_token JTI. If the challenge response verification step fails, no changes are made to the session table. As explained earlier in the Data design section, the step-up session stays in the session table until the TTL time expires, when DynamoDB will automatically delete the item.
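A minimal Python (Boto3) sketch of this verification step follows; the function name is illustrative and error handling is omitted (for example, VerifyUserAttribute raises CodeMismatchException when the SMS code is wrong).

import boto3

cognito = boto3.client("cognito-idp")

def verify_challenge(access_token, step_up_type, challenge_response):
    """Return True when the user-supplied code is valid for the given step-up type."""
    if step_up_type == "SMS_STEP_UP":
        # Verifies the code that was sent by SMS via GetUserAttributeVerificationCode
        cognito.verify_user_attribute(
            AccessToken=access_token,
            AttributeName="phone_number",
            Code=challenge_response,
        )
        return True
    # SOFTWARE_TOKEN_STEP_UP or MAYBE_SOFTWARE_TOKEN_STEP_UP
    response = cognito.verify_software_token(
        AccessToken=access_token, UserCode=challenge_response
    )
    return response.get("Status") == "SUCCESS"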

Deploy and test the step-up authentication solution

If you want to test the step-up authentication solution at this point, go to the second part of this blog, Implement step-up authentication with Amazon Cognito, Part 2: Deploy and test the solution. That post provides instructions you can use to deploy the solution by using the AWS Cloud Development Kit (AWS CDK) in your AWS account, and test it by using a sample web application.

Otherwise, you can continue reading the rest of this post to review the details and code behind the step-up authentication solution.

Step-up solution design details

Now let’s dig deeper into the step-up authentication solution. Figure 3 expands on the high-level solution design in the previous section and highlights the sequence of events that must take place to perform step-up authentication. In this section, we’ll break down these sequences into smaller parts and discuss each by going over a detailed sequence diagram.

Figure 3: Step-up authentication detailed reference architecture

Let’s group the step-up authentication flow in Figure 3 into three parts:

  1. Create a step-up session (steps 1-6 in Figure 3)
  2. Initiate step-up authentication (steps 7-8 in Figure 3)
  3. Respond to the step-up challenge (steps 9-12 in Figure 3)

In the next sections, you’ll learn how the user’s API requests are handled by the step-up authentication solution, and how the user state is elevated by going through an additional challenge.

Create a step-up session

After the user successfully logs in, a step-up session is created when they invoke a privileged API action that is protected with the step-up Lambda authorizer. Based on the configuration in the DynamoDB setting table, the authorizer determines whether a step-up challenge is required and, if so, creates a step-up session in the DynamoDB session table. Let’s go over steps 1–6, shown in the architecture diagram in Figure 3, in more detail:

  • Step 1 – The user must first authenticate with Amazon Cognito, so they hold a valid access token generated by the Amazon Cognito user pool.
  • Step 2 – The user then invokes a privileged API action and passes the access token in the Authorization header.
  • Step 3 – The API action is protected by using a Lambda authorizer. The authorizer first validates the token signature against the Amazon Cognito user pool’s public keys. If the token is invalid, a 401 Unauthorized response status code is sent immediately, prompting the client to present a valid token.
  • Step 4 – The authorizer performs a lookup in the DynamoDB setting table to check whether the current request needs elevated privilege (also known as step-up privilege). In the setting table, you can define which API actions require elevated privilege. You can additionally bundle API operations into a group by defining the group attribute. This allows you to further isolate privileged API operations, especially in a large-scale deployment.
  • Step 5 – If an API action requires elevated privilege, the authorizer will check for an existing step-up session for this specific user in the session table. If a step-up session does not exist, the authorizer will create a new entry in the session table. The key for this table will be the JTI claim of the access_token (which can be obtained after token verification).
  • Step 6 – If a valid step-up session exists, the request is authorized. Otherwise, the Lambda authorizer sends back an unauthorized access response (401 HTTP code), indicating that the user requires elevated privilege. (A simplified sketch of this lookup follows the list.)
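The following Python (Boto3) sketch illustrates the authorizer’s session lookup in steps 5 and 6. The table name, attribute names, and the STEP_UP_REQUIRED state are assumptions for illustration, not the exact schema used by the solution.

import os
import boto3

session_table = boto3.resource("dynamodb").Table(os.environ["SESSION_TABLE"])

def authorize_privileged_call(jti):
    """Allow the call only when a completed step-up session exists for this token."""
    item = session_table.get_item(Key={"jti": jti}).get("Item")
    if item and item.get("state") == "STEP_UP_COMPLETED":
        return "ALLOW"  # step 6: a valid step-up session exists
    if not item:
        # step 5: record that this token still requires a step-up challenge
        session_table.put_item(Item={"jti": jti, "state": "STEP_UP_REQUIRED"})
    return "DENY"  # surfaced to the client as a 401 Unauthorized response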

Figure 4 highlights these steps in a sequence diagram.

Figure 4: Sequence diagram for creating a step-up session

Initiate step-up authentication

After the user receives a 401 Unauthorized response status code from invoking the privileged API action in the previous step, the user must call the /initiate-auth endpoint to start step-up authentication. The endpoint returns the appropriate MFA method to the client application, which then prompts the user to supply the temporary code. Let’s go over steps 7 and 8, shown in the architecture diagram in Figure 3, in more detail:

  • Step 7 – The client application initiates a step-up action by calling the /initiate-auth endpoint. This action is protected by the API Gateway built-in Amazon Cognito authorizer, and the client needs to pass a valid access_token in the Authorization header.
  • Step 8 – The call is forwarded to a Lambda function that will initiate the step-up action with the end user. The function first calls the Amazon Cognito API action GetUser to find out the user’s MFA settings. Depending on which MFA type is enabled for the user, the function uses different Amazon Cognito API operations to start the MFA challenge. For more details, see the Initiate auth endpoint section earlier in this post.

Figure 5 shows these steps in a sequence diagram.

Figure 5: Sequence diagram for invoking /initiate-auth to start step-up authentication

Respond to the step-up challenge

In the previous step, the user receives a challenge code from the /initiate-auth endpoint. Depending on the type of challenge code, the user must respond by sending a one-time password (OTP) to the /respond-to-challenge endpoint. The /respond-to-challenge endpoint invokes an Amazon Cognito API action to verify the OTP. Upon successful verification, the /respond-to-challenge endpoint marks the step-up session in the session table as STEP_UP_COMPLETED, indicating that the user now has elevated privilege. At this point, the user can invoke the privileged API action again to perform the elevated business operation. Let’s go over steps 9–12, shown in the architecture diagram in Figure 3, in more detail:

  • Step 9 – The client application presents an appropriate screen to the user to collect a response to the step-up challenge. The client application then calls the /respond-to-challenge endpoint with a request that contains the following:
    1. An access_token in the Authorization header.
    2. A step-up challenge type.
    3. A response provided by the user to the step-up challenge.

    This endpoint is protected by the API Gateway built-in Amazon Cognito authorizer.

  • Step 10 – The call is forwarded to the Lambda function, which verifies the response by calling the Amazon Cognito API action VerifyUserAttribute (in the case of SMS_STEP_UP) or VerifySoftwareToken (in the case of SOFTWARE_TOKEN_STEP_UP), depending on the type of step-up action that was returned from the /initiate-auth API action. The Amazon Cognito response will indicate whether verification was successful.
  • Step 11 – If the Amazon Cognito response in the previous step was successful, the Lambda function associated with the /respond-to-challenge endpoint inserts a record in the session table by using the access_token JTI as key. This record indicates that the user has completed step-up authentication. The record is inserted with a time to live (TTL) equal to the lesser of these values: the remaining period in the access_token timeout, or the default TTL value that is set in the Lambda function as a configurable environment variable, SESSION_TABLE_ITEM_TTL. The /respond-to-challenge endpoint returns a 200 status code after successfully updating the session table. It returns a 401 Unauthorized response status code if the operation failed or if the Amazon Cognito API calls in the previous step failed. For more information about the optimal value for the SESSION_TABLE_ITEM_TTL variable, see the Additional considerations section later in this post. (A short sketch of this TTL calculation appears after this list.)
  • Step 12 – The client application can re-try the original call (using the same access token) to the privileged API operations, and this call should now succeed because an active step-up session exists for the user. Calls to other privileged API operations that require step-up should also succeed, as long as the step-up session hasn’t expired.
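To make the TTL calculation in step 11 concrete, the snippet below computes the item expiry as the lesser of the token’s exp claim and the current time plus SESSION_TABLE_ITEM_TTL. The table and attribute names are assumptions for illustration, not the solution’s exact code.

import os
import time
import boto3

def record_step_up_session(jti, token_exp_epoch):
    """Store the completed step-up session with the TTL described in step 11."""
    default_ttl = int(os.environ.get("SESSION_TABLE_ITEM_TTL", "900"))  # seconds
    ttl = min(token_exp_epoch, int(time.time()) + default_ttl)  # lesser of the two values
    table = boto3.resource("dynamodb").Table(os.environ["SESSION_TABLE"])
    table.put_item(Item={"jti": jti, "state": "STEP_UP_COMPLETED", "ttl": ttl})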

Figure 6 shows these steps in a sequence diagram.

Figure 6: Invoke the /respond-to-challenge endpoint to complete step-up authentication

Additional considerations

This solution uses several Amazon Cognito API operations to provide step-up authentication functionality. Amazon Cognito applies rate limiting to each category of API operations, and rapid calls that exceed the assigned quota will be throttled.

The step-up flow for a single user can include multiple Amazon Cognito API operations such as GetUser, GetUserAttributeVerificationCode, VerifyUserAttribute, and VerifySoftwareToken. These Amazon Cognito API operations have different rate limits. The effective rate, in requests per second (RPS), that your privileged and protected API action can achieve will be equivalent to the lowest category rate limit among these API operations. When you use the default quota, your application can achieve 25 SMS_STEP_UP RPS or up to 50 SOFTWARE_TOKEN_STEP_UP RPS.

Certain Amazon Cognito API operations have additional security rate limits per user per hour. For example, the GetUserAttributeVerificationCode API action has a limit of five calls per user per hour. For that reason, we recommend 15 minutes as the minimum value for SESSION_TABLE_ITEM_TTL, as this will allow a single user to have up to four step-up sessions per hour if needed.

Conclusion

In this blog post, you learned about the architecture of our step-up authentication solution and how to implement this architecture to protect privileged API operations by using AWS services. You learned how to use Amazon Cognito as the identity provider to authenticate users with multi-factor security and API Gateway with an authorizer Lambda function to enforce access to API actions by using a step-up authentication workflow engine. This solution uses DynamoDB as a persistent layer to manage the security rules for the step-up authentication workflow engine, which helps you to efficiently manage your rules.

In the next part of this post, Implement step-up authentication with Amazon Cognito, Part 2: Deploy and test the solution, you’ll deploy a reference implementation of the step-up authentication solution in your AWS account. You’ll use a sample web application to test the step-up authentication solution you learned about in this post.

 
If you have feedback about this post, submit comments in the Comments section below. If you have any questions about this post, start a thread on the Amazon Cognito forum.

Want more AWS Security news? Follow us on Twitter.

Salman Moghal

Salman is a Principal Consultant in AWS Professional Services, based in Toronto, Canada. He helps customers in architecting, developing, and reengineering data-driven applications at scale, with a sharp focus on security.

Thomas Ross

Thomas is a Software Engineering student at Carleton University. He worked at AWS as a Professional Services Intern and a Software Development Engineer Intern in Amazon Aurora. He has an interest in almost anything related to technology, especially systems at high scale, security, distributed systems, and databases.

Ozair Sheikh

Ozair is a senior product leader for Sponsored Display in Amazon ads, based in Toronto, Canada. He helps advertisers and Ad Tech API Partners build campaign management solutions to reach customers across the purchase journey. He has over 10 years of experience in API management and security, with an obsession for delivering highly secure API products.

Mahmoud Matouk

Mahmoud is a Principal Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

Deploy and manage OpenAPI/Swagger RESTful APIs with the AWS Cloud Development Kit

Post Syndicated from Luke Popplewell original https://aws.amazon.com/blogs/devops/deploy-and-manage-openapi-swagger-restful-apis-with-the-aws-cloud-development-kit/

This post demonstrates how AWS Cloud Development Kit (AWS CDK) Infrastructure as Code (IaC) constructs and AWS serverless technology can be used to build and deploy a RESTful Application Programming Interface (API) defined in the OpenAPI specification. This post uses an example API that describes  Widget resources and demonstrates how to use an AWS CDK Pipeline to:

  • Deploy a RESTful API stage to Amazon API Gateway from an OpenAPI specification.
  • Build and deploy an AWS Lambda function that contains the API functionality.
  • Auto-generate API documentation and publish it to an Amazon Simple Storage Service (Amazon S3)-hosted website served by the Amazon CloudFront content delivery network (CDN) service. This provides technical and non-technical stakeholders with versioned, current, and accessible API documentation.
  • Auto-generate client libraries for invoking the API and deploy them to AWS CodeArtifact, which is a fully-managed artifact repository service. This allows API client development teams to integrate with different versions of the API in different environments.

The diagram shown in the following figure depicts the architecture of the AWS services and resources described in this post.

 The architecture described in this post consists of an AWS CodePipeline pipeline, provisioned using the AWS CDK, that deploys the Widget API to AWS Lambda and API Gateway. The pipeline then auto-generates the API’s documentation as a website served by CloudFront and deployed to S3. Finally, the pipeline auto-generates a client library for the API and deploys this to CodeArtifact.

Figure 1 – Architecture

The code that accompanies this post, written in Java, is available here.

Background

APIs must be understood by all stakeholders and parties within an enterprise including business areas, management, enterprise architecture, and other teams wishing to consume the API. Unfortunately, API definitions are often hidden in code and lack up-to-date documentation. Therefore, they remain inaccessible for the majority of the API’s stakeholders. Furthermore, it’s often challenging to determine what version of an API is present in different environments at any one time.

This post describes some solutions to these issues by demonstrating how to continuously deliver up-to-date and accessible API documentation, API client libraries, and API deployments.

AWS CDK

The AWS CDK is a software development framework for defining cloud IaC and is available in multiple languages including TypeScript, JavaScript, Python, Java, C#/.Net, and Go. The AWS CDK Developer Guide provides best practices for using the CDK.

This post uses the CDK to define IaC in Java which is synthesized to a cloud assembly. The cloud assembly includes one or more templates and assets that are deployed via an AWS CodePipeline pipeline. A unit of deployment in the CDK is called a Stack.

OpenAPI specification (formerly Swagger specification)

OpenAPI specifications describe the capabilities of an API and are both human and machine-readable. They consist of definitions of API components which include resources, endpoints, operation parameters, authentication methods, and contact information.

Project composition

The API project that accompanies this post consists of three directories:

  • app
  • api
  • cdk

app directory

This directory contains the code for the Lambda function which is invoked when the Widget API is called via API Gateway. The code has been developed in Java as an Apache Maven project.

The Quarkus framework has been used to define a WidgetResource class (see src/main/java/aws/sample/blog/cdkopenapi/app/WidgetResources.java) that contains the methods that align with the HTTP methods of the Widget API.

api directory

The api directory contains the OpenAPI specification file ( openapi.yaml ). This file is used as the source for:

  • Defining the REST API using API Gateway’s support for OpenApi.
  • Auto-generating the API documentation.
  • Auto-generating the API client artifact.

The api directory also contains the following files:

  • openapi-generator-config.yaml : This file contains configuration settings for the OpenAPI Generator framework, which is described in the section CI/CD Pipeline.
  • maven-settings.xml: This file is used to support the deployment of the generated SDKs or libraries (Apache Maven artifacts) for the API and is described in the CI/CD Pipeline section of this post.

The api directory also includes a subdirectory called docker. The docker directory contains a Dockerfile, which defines the commands for building a Docker image:

FROM ruby:2.6.5-alpine
 
RUN apk update \
 && apk upgrade --no-cache \
 && apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/v3.14/main/ nodejs=14.20.0-r0 npm \
 && apk add git \
 && apk add --no-cache build-base
 
# Install Widdershins node packages and ruby gem bundler 
RUN npm install -g widdershins \
 && gem install bundler 
 
# working directory
WORKDIR /openapi
 
# Clone and install the Slate framework
RUN git clone https://github.com/slatedocs/slate
RUN cd slate \
 && bundle install

The Docker image incorporates two open source projects: the Node.js Widdershins library and the Ruby-based Slate framework. These are used together to auto-generate the documentation for the API from the OpenAPI specification. This Dockerfile is referenced and built by the ApiStack class, which is described in the CDK Stacks section of this post.

cdk directory

This directory contains an Apache Maven Project developed in Java for provisioning the CDK stacks for the  Widget API.

Under the src/main/java folder, the package aws.sample.blog.cdkopenapi.cdk contains the files and classes that define the application’s CDK stacks and also the entry point (main method) for invoking the stacks from the CDK Toolkit CLI:

  • CdkApp.java: This file contains the CdkApp class which provides the main method that is invoked from the AWS CDK Toolkit to build and deploy the application stacks.
  • ApiStack.java: This file contains the ApiStack class which defines the OpenApiBlogAPI stack and is described in the CDK Stacks section of this post.
  • PipelineStack.java: This file contains the PipelineStack class which defines the OpenAPIBlogPipeline stack and is described in the CDK Stacks section of this post.
  • ApiStackStage.java: This file contains the ApiStackStage class which defines a CDK stage. As detailed in the CI/CD Pipeline section of this post, a DEV stage, containing the OpenApiBlogAPI stack resources for a DEV environment, is deployed from the OpenApiBlogPipeline pipeline.

CDK stacks

ApiStack

Note that the CDK bundling functionality is used at multiple points in the ApiStack class to produce CDK assets. The post, Building, bundling, and deploying applications with the AWS CDK, provides more details regarding using CDK bundling mechanisms.

The ApiStack class defines multiple resources including:

  • Widget API Lambda function: This is bundled by the CDK in a Docker container using the Java 11 runtime image.
  • Widget REST API on API Gateway: The REST API is created from an inline API definition which is passed as an S3 CDK asset. This asset includes a reference to the Widget API OpenAPI specification located under the api folder (see api/openapi.yaml) and builds upon the SpecRestApi construct and API Gateway’s support for OpenAPI.
  • API documentation Docker Image Asset: This is the Docker image that contains the open source frameworks (Widdershins and Slate) that are leveraged to generate the API documentation.
  • CDK Asset bundling functionality that leverages the API documentation Docker image to auto-generate documentation for the API.
  • An S3 Bucket for holding the API documentation website.
  • An origin access identity (OAI) which allows CloudFront to securely serve the S3 Bucket API documentation content.
  • A CloudFront distribution which provides CDN functionality for the S3 Bucket website.

Note that the ApiStack class features the following code, which is executed on the Widget API Lambda construct:

CfnFunction apiCfnFunction = (CfnFunction)apiLambda.getNode().getDefaultChild();
apiCfnFunction.overrideLogicalId("APILambda");

The CDK, by default, auto-assigns an ID for each defined resource, but in this case the generated ID is overridden with “APILambda”. The reason is that inside the Widget API OpenAPI specification (see api/openapi.yaml), the Lambda function is referenced by name (“APILambda”) so that the function can be integrated as a proxy for each listed API path and method combination. The OpenAPI specification includes this name as a variable to derive the Amazon Resource Name (ARN) for the Lambda function:

uri:
	Fn::Sub: "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${APILambda.Arn}/invocations"

PipelineStack

The PipelineStack class defines a CDK CodePipeline construct, which is a higher-level construct and pattern. Therefore, the construct doesn’t map directly to a single CloudFormation resource, but provisions multiple resources to fulfill the requirements of the pattern. The post, CDK Pipelines: Continuous delivery for AWS CDK applications, provides more detail on creating pipelines with the CDK.

final CodePipeline pipeline = CodePipeline.Builder.create(this, "OpenAPIBlogPipeline")
.pipelineName("OpenAPIBlogPipeline")
.selfMutation(true)
      .dockerEnabledForSynth(true)
      .synth(synthStep)
      .build();

CI/CD pipeline

The diagram in the following figure shows the multiple CodePipeline stages and actions created by the CDK CodePipeline construct that is defined in the PipelineStack class.

The CI/CD pipeline’s stages include the Source stage, the Synth stage, the Update pipeline, the Assets stage, and the DEV stage.

Figure 2 – CI/CD Pipeline

The stages defined include the following:

  • Source stage: The pipeline is passed the source code contents from this stage.
  • Synth stage: This stage consists of a Synth Action that synthesizes the CloudFormation templates for the application’s CDK stacks and compiles and builds the project Lambda API function.
  • Update Pipeline stage: This stage checks the OpenAPIBlogPipeline stack and reinitiates the pipeline when changes to its definition have been deployed.
  • Assets stage: The application’s CDK stacks produce multiple file assets (for example, zipped Lambda code) which are published to Amazon S3. Docker image assets are published to a managed CDK framework Amazon Elastic Container Registry (Amazon ECR) repository.
  • DEV stage: The API’s CDK stack (OpenApiBlogAPI) is deployed to a hypothetical development environment in this stage. A post-deployment action is also defined in this stage. Through the use of a CDK CodeBuildStep construct, a Bash script is executed that deploys a generated client Java Archive (JAR) for the Widget API to CodeArtifact. The script employs the OpenAPI Generator project for this purpose:
CodeBuildStep codeArtifactStep = CodeBuildStep.Builder.create("CodeArtifactDeploy")
    .input(pipelineSource)
    .commands(Arrays.asList(
           	"echo $REPOSITORY_DOMAIN",
           	"echo $REPOSITORY_NAME",
           	"export CODEARTIFACT_TOKEN=`aws codeartifact get-authorization-token --domain $REPOSITORY_DOMAIN --query authorizationToken --output text`",
           	"export REPOSITORY_ENDPOINT=$(aws codeartifact get-repository-endpoint --domain $REPOSITORY_DOMAIN --repository $REPOSITORY_NAME --format maven | jq .repositoryEndpoint | sed 's/\\\"//g')",
           	"echo $REPOSITORY_ENDPOINT",
           	"cd api",
           	"wget -q https://repo1.maven.org/maven2/org/openapitools/openapi-generator-cli/5.4.0/openapi-generator-cli-5.4.0.jar -O openapi-generator-cli.jar",
     	          "cp ./maven-settings.xml /root/.m2/settings.xml",
        	          "java -jar openapi-generator-cli.jar batch openapi-generator-config.yaml",
                    "cd client",
                    "mvn --no-transfer-progress deploy -DaltDeploymentRepository=openapi--prod::default::$REPOSITORY_ENDPOINT"
))
      .rolePolicyStatements(Arrays.asList(codeArtifactStatement, codeArtifactStsStatement))
.env(new HashMap<String, String>() {{
      		put("REPOSITORY_DOMAIN", codeArtifactDomainName);
            	put("REPOSITORY_NAME", codeArtifactRepositoryName);
       }})
      .build();

Running the project

To run this project, you must install the AWS CLI v2, the AWS CDK Toolkit CLI, a Java/JDK 11 runtime, Apache Maven, Docker, and a Git client. Furthermore, the AWS CLI must be configured for a user who has administrator access to an AWS Account. This is required to bootstrap the CDK in your AWS account (if not already completed) and provision the required AWS resources.

To build and run the project, perform the following steps:

  1. Fork the OpenAPI blog project in GitHub.
  2. Open the AWS Console and create a connection to GitHub. Note the connection’s ARN.
  3. In the Console, navigate to AWS CodeArtifact and create a domain and repository.  Note the names used.
  4. From the command line, clone your forked project and change into the project’s directory:
git clone https://github.com/<your-repository-path>
cd <your-repository-path>
  5. Edit the CDK JSON file at cdk/cdk.json and enter the details:
"RepositoryString": "<your-github-repository-path>",
"RepositoryBranch": "<your-github-repository-branch-name>",
"CodestarConnectionArn": "<connection-arn>",
"CodeArtifactDomain": "<code-artifact-domain-name>",
"CodeArtifactRepository": "<code-artifact-repository-name>"

Please note that for setting configuration values in CDK applications, it is recommended to use environment variables or AWS Systems Manager parameters.

  6. Commit and push your changes back to your GitHub repository:
git push origin main
  7. Change into the cdk directory and bootstrap the CDK in your AWS account if you haven’t already done so (enter “Y” when prompted):
cd cdk
cdk bootstrap
  8. Deploy the CDK pipeline stack (enter “Y” when prompted):
cdk deploy OpenAPIBlogPipeline

Once the stack deployment completes successfully, the pipeline  OpenAPIBlogPipeline will start running. This will build and deploy the API and its associated resources. If you open the Console and navigate to AWS CodePipeline, then you’ll see a pipeline in progress for the API.

Once the pipeline has completed executing, navigate to AWS CloudFormation to get the output values for the  DEV-OpenAPIBlog  stack deployment:

  1. Select the DEV-OpenAPIBlog stack entry and then select the Outputs tab. Record the REST_URL value for the key that begins with OpenAPIBlogRestAPIEndpoint.
  2. Record the CLOUDFRONT_URL value for the key OpenAPIBlogCloudFrontURL.

The API ping method at https://<REST_URL>/ping can now be invoked using your browser or an API development tool like Postman. Other API methods, as defined by the OpenAPI specification, are also available for invocation (for example, GET https://<REST_URL>/widgets).

To view the generated API documentation, open a browser at https://<CLOUDFRONT_URL>.

The following figure shows the API documentation website that has been auto-generated from the API’s OpenAPI specification. The documentation includes code snippets for using the API from multiple programming languages.

The API’s auto-generated documentation website provides descriptions of the API’s methods and resources as well as code snippets in multiple languages including JavaScript, Python, and Java.

Figure 3 – Auto-generated API documentation

To view the generated API client code artifact, open the Console and navigate to AWS CodeArtifact. The following figure shows the generated API client artifact that has been published to CodeArtifact.

The CodeArtifact service user interface in the Console shows the different versions of the API’s auto-generated client libraries.

Figure 4 – API client artifact in CodeArtifact

Cleaning up

  1. From the command line, change to the cdk directory and remove the API stack in the DEV stage (enter “Y” when prompted):
cd cdk
cdk destroy OpenAPIBlogPipeline/DEV/OpenAPIBlogAPI
  2. Once this has completed, delete the pipeline stack:
cdk destroy OpenAPIBlogPipeline
  3. Delete the S3 bucket created to support pipeline operations. Open the Console and navigate to Amazon S3. Delete buckets with the prefix openapiblogpipeline.

Conclusion

This post demonstrates the use of the AWS CDK to deploy a RESTful API defined by the OpenAPI/Swagger specification. Furthermore, this post describes how to use the AWS CDK to auto-generate API documentation, publish this documentation to a website hosted on Amazon S3, auto-generate API client libraries or SDKs, and publish these artifacts to an Apache Maven repository hosted on CodeArtifact.

The solution described in this post can be improved by:

  • Building and pushing the API documentation Docker image to Amazon ECR, and then using this image in CodePipeline API pipelines.
  • Creating stages for different environments such as TEST, PREPROD, and PROD.
  • Adding integration testing actions to make sure that the API Deployment is working correctly.
  • Adding manual approval actions that are executed before deploying the API to PROD.
  • Using CodeBuild caching of artifacts including Docker images and libraries.

About the author:

Luke Popplewell

Luke Popplewell works primarily with federal entities in the Australian Government. In his role as an architect, Luke uses his knowledge and experience to help organisations reach their goals on the AWS cloud. Luke has a keen interest in serverless technology, modernization, DevOps and event-driven architectures.

Use Security Hub custom actions to remediate S3 resources based on Macie discovery results

Post Syndicated from Jonathan Nguyen original https://aws.amazon.com/blogs/security/use-security-hub-custom-actions-to-remediate-s3-resources-based-on-macie-discovery-results/

The amount of data available to be collected, stored, and processed within an organization’s AWS environment can grow rapidly and exponentially. This increases the operational complexity and the need to identify and protect sensitive data. If your security team needs to review and remediate security risks manually, it would either take a large team or the actions might not be timely. There is also a chance that with manual operation, a step could be missed or the incorrect action could be taken. As a result, your security team needs an automated and scalable way to support these operations efficiently.

Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Macie generates findings for sensitive data in an S3 object or a potential issue with the security or privacy of an S3 bucket. AWS Security Hub allows you to gain a centralized view into the security posture across your AWS environment by aggregating security findings from various AWS services and partner products, including Amazon Macie. Security Hub also includes the custom actions feature, which you can use to create actions for response and remediation to selected findings within the Security Hub console in an efficient and consistent manner.

It is important for your security teams to create effective and standardized mechanisms for taking action against Macie findings to ensure that data remains secure. By using Security Hub custom actions, you can have predefined actions for the security team to take against Macie findings without having to manually find and remediate the resources.

This blog post provides you with an example solution for responding to Macie sensitive data findings and policy findings in Security Hub by using custom actions. I will walk through the components of the solution, as well as opportunities where resources can be customized for your specific use case.

Prerequisites

You must have AWS Security Hub and Amazon Macie enabled in the AWS account where you are deploying this solution.

Solution overview

In this solution, you’ll use a combination of Security Hub custom actions, Amazon EventBridge, and AWS Lambda to take action on Macie findings in Security Hub. You will be working with the findings within the same AWS account where you deployed the solution.

Macie generates two categories of findings relating to different resources, which will require different remediation actions.

  1. A policy finding is a detailed report of a potential policy violation or issue with the security or privacy of an Amazon Simple Storage Service (Amazon S3) bucket.
  2. A sensitive data finding is a detailed report of sensitive data in an S3 object.

A full list of Macie finding types can be found in the Macie User guide.

For each of the two Macie finding categories, there is an associated Security Hub custom action (a simplified sketch of the remediation logic appears after this list):

  1. Custom action for sensitive data finding (S3 object) – When the security team selects this custom action, the action invokes a Lambda function that will take the following steps on the S3 object in the Macie finding:
    1. Tag the object with the Security Hub finding ID
    2. Encrypt the S3 object with a different customer-managed KMS key
    3. Update the Security Hub finding workflow status to RESOLVED
  2. Custom action for policy finding (S3 bucket) – When you select this custom action, it invokes a Lambda function that will take the following steps on the S3 bucket in the Macie finding:
    1. Tag the bucket with the Security Hub finding ID
    2. Update the S3 bucket configuration to:
      • Enable default encryption
      • Enable public access block
    3. Update the Security Hub finding workflow status to RESOLVED
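The following Python (Boto3) sketch illustrates the policy finding (S3 bucket) remediation path; the tag key, KMS key, and overall structure are simplified and are not the solution’s exact code.

import boto3

s3 = boto3.client("s3")
securityhub = boto3.client("securityhub")

def remediate_bucket(bucket_name, finding_id, product_arn, kms_key_arn):
    # Tag the bucket with the Security Hub finding ID (replaces existing bucket tags)
    s3.put_bucket_tagging(
        Bucket=bucket_name,
        Tagging={"TagSet": [{"Key": "SH_Finding_ID", "Value": finding_id}]},
    )
    # Enable default encryption with a customer managed KMS key
    s3.put_bucket_encryption(
        Bucket=bucket_name,
        ServerSideEncryptionConfiguration={
            "Rules": [{
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": kms_key_arn,
                }
            }]
        },
    )
    # Enable the public access block
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    # Update the Security Hub finding workflow status to RESOLVED
    securityhub.batch_update_findings(
        FindingIdentifiers=[{"Id": finding_id, "ProductArn": product_arn}],
        Workflow={"Status": "RESOLVED"},
    )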

The solution is configured to take action within the AWS account where the finding and corresponding resource is generated. In order to enable cross-account remediation, you will need to deploy an additional IAM role for the automation to assume and provision a KMS key to use for encryption.

Note: The custom actions in this solution are meant to be examples of actions to take against Macie policy and sensitive data findings. These actions will be different depending on your use-case and environment. You will also need to review and update the associated Lambda function execution role IAM policies accordingly.

Solution architecture

Figure 1: Resources deployed in the Security AWS account taking action on resources identified in the Workload AWS account

Figure 1 shows the architecture for the solution. The workflow is as follows:

  1. A Macie job runs and creates findings, which are sent to Security Hub in the same AWS account as the Macie finding.
  2. The delegated administrator Security Hub account combines findings across all member Security Hub accounts, including Macie findings.
  3. The security team reviews the Macie findings in the Security Hub delegated administrator account and determines to take remediation actions for a finding by selecting the finding and then selecting the appropriate Security Hub custom action.
  4. The Security Hub custom action sends the finding to the EventBridge rule, which is linked to the Lambda function.
  5. The EventBridge rule invokes the Lambda function to take action against the resources from the Macie finding.
  6. The Lambda function will:
    1. Take action for the S3 resource
    2. Mark the Macie finding as resolved in the delegated administrator Security Hub account

The solution is currently intended to work in a single Region. To enable this solution across Regions, you will need to change the remediation Lambda function code for any Regional resources used for remediation actions (for example, AWS Key Management Service keys).

Deploy the solution

You can deploy the solution through either the AWS Management Console or the AWS Cloud Development Kit (AWS CDK).

To deploy the solution by using the AWS Management Console

  • In your security tooling account, launch the AWS CloudFormation template by choosing the following Launch Stack button. It will take approximately 10 minutes for the CloudFormation stack to complete.
    Select this image to open a link that starts building the CloudFormation stack

    Note: The stack will launch in the N. Virginia (us-east-1) Region. To deploy this solution into other AWS Regions, download the solution’s CloudFormation template, modify it, and deploy it to the selected Region.

  • (OPTIONAL) If you want to enable cross-account remediation, launch the following AWS CloudFormation template in the AWS account where you want to be able to take remediation actions. You can also use AWS CloudFormation StackSets if deploying to multiple AWS accounts.
    Select this image to open a link that starts building the CloudFormation stack

To deploy the solution by using AWS CDK

You can find the latest code in our GitHub repository, where you can also contribute to the sample code. The following commands show how to deploy the solution by using the AWS CDK. First, the CDK initializes your environment and uploads the AWS Lambda assets to Amazon S3. Then, you can deploy the solution to your account. Make sure to replace <Security_Tooling_AWS_ACCOUNT> and <Member_AWS_ACCOUNT> with the appropriate account numbers, and replace <REGION> with the AWS Region that you want the solution deployed to.

  1. Run the following commands in your terminal while authenticated in the security tooling AWS account:

    cdk bootstrap aws://<Security_Tooling_AWS_ACCOUNT>/<REGION>

    cdk deploy MacieRemediationStack

  2. (OPTIONAL) If you want to enable cross-account remediation, Run the following commands in your terminal while authenticated to member AWS account:

    cdk bootstrap aws://<Member_AWS_ACCOUNT>/<REGION>

    cdk deploy MacieRemediationIAMStack --parameters solutionaccount=<Security_Tooling_AWS_ACCOUNT>

Solution walkthrough and validation

Now that you’ve successfully deployed the solution, you can see things in action. You have two options for testing the workflow on your own:

  1. Use a sample event, generated by a Macie finding in Security Hub, and invoke the Lambda function that is tied to the Security Hub custom action.

    Note: If using sample events, you can replace the values for the resources with real resources. Otherwise, you will not be able to see the Lambda function successfully take action because the resource in your sample event may not exist.

  2. Generate demo Macie findings in Security Hub by using this sample data for Amazon Macie.

I have existing findings for Macie generated in my AWS account, and in the procedures in this section, I’ll walk through taking action against these.

Note: If you set up Macie and Security Hub in a delegated administrator and member model that ingests findings from other AWS accounts, the IAM remediation roles for the S3 bucket and S3 objects must be deployed in the member accounts.

Review deployed resources in the AWS console

Before taking action on your sample findings, review the deployed resources that you’ll use.

To review deployed resources

  1. In the AWS account console where the automation was deployed, go to Security Hub, choose Settings, and then choose Custom actions. You should see two custom actions:
    • Macie Policy Finding
      • arn:aws:securityhub:<region>:<account-id>:action/custom/MacieS3BucketPolicy
    • Macie Data Finding
      • arn:aws:securityhub:<region>:<account-id>:action/custom/MacieSensitiveData
        Figure 2: Custom actions in Security Hub

  2. Navigate to the EventBridge console and then choose Rules. You should see four rules:
    • Disabled – These are disabled by default during deployment
      • Autoremediate_Macie_Policy_Finding
      • Autoremediate_Macie_Sensitive_Data_Finding
        Figure 3: Disabled EventBridge rules for autoremediation of Macie findings in Security Hub

    • Enabled – These are enabled by default during deployment:
      • Custom_Action_Macie_Policy_Finding
      • Custom_Action_Macie_Sensitive_Data_Finding
        Figure 4: Enabled EventBridge rules tied to the Security Hub custom actions

    In the enabled EventBridge rules, you should see the corresponding Security Hub custom action Amazon Resource Names (ARNs) in the rule event pattern.

    Figure 5: Enabled EventBridge rule event pattern for the Security Hub custom action

Take action on an Amazon Macie object or policy finding

Each Security Hub custom action invokes a corresponding Lambda function that is configured as a target in the EventBridge rule. The Lambda function parses the information in the Macie finding from Security Hub to take action.

Each Security Hub custom action is specific to either an S3 object or an S3 bucket. If you attempt a custom action meant for an S3 object against a Macie policy finding, this will successfully initiate the custom action, but the Lambda function that is invoked will be unsuccessful.

If the Macie finding is specific to an S3 object, the title will display “The S3 object …,” whereas if the Macie finding is for a policy finding, the title will display information for an S3 bucket.
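For example, a consolidated Lambda function could inspect the finding’s resources to decide which remediation path applies. The sketch below assumes the AWS Security Finding Format (ASFF) resource types AwsS3Object and AwsS3Bucket and is illustrative only.

def parse_macie_finding(event):
    """Return the remediation path and S3 resource ID from a custom action event."""
    finding = event["detail"]["findings"][0]
    resources = {r["Type"]: r for r in finding.get("Resources", [])}
    if "AwsS3Object" in resources:
        return "SENSITIVE_DATA", resources["AwsS3Object"]["Id"]
    if "AwsS3Bucket" in resources:
        return "POLICY", resources["AwsS3Bucket"]["Id"]
    raise ValueError("No S3 resource found in the Macie finding")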

To take action on findings

  1. In the AWS account console where the automation was deployed, navigate to AWS Security Hub, and then choose Findings.
  2. Filter the findings by setting Product Name to Macie.
    Figure 6: Filter for Macie findings in Security Hub

  3. Select the checkbox for either a Macie policy finding or a sensitive data finding, and then choose the appropriate custom action. After you select the action, there is no confirmation step, and the action will invoke the Lambda function.
    Figure 7: Validate Custom Action has sent the finding to Amazon CloudWatch Events (EventBridge rule)

Review and validate the Security Hub custom action on target resources

In order to validate or troubleshoot the solution, you need to review whether the Lambda function was able to take action against the resources in the Security Hub finding for Macie.

To validate or troubleshoot the custom action

  1. For validation of sensitive data finding remediation, review S3 object configuration:
    1. Navigate to the Amazon S3 console.
    2. Choose the S3 object in the Macie finding.
    3. Choose the Properties tab and review the following fields:
      • Tags should be set to SH_Finding_ID.
      • AWS KMS key ARN should be set to the KMS key with the alias `macie_key`
        1. Choose the KMS key ARN and validate that the key’s alias matches the key deployed in the solution.
  2. For validation of policy finding remediation, review the S3 bucket configuration:
    1. Navigate to the Amazon S3 console.
    2. Choose the S3 bucket in the Macie finding.
    3. Choose the Properties tab and review the following fields:
      • Tags should be set to SH_Finding_ID.
      • Default Encryption should be set to Enabled.
    4. Choose the Permissions tab and review the following fields:
      • Block public access should be set to On.
  3. For troubleshooting, you can review the CloudWatch logs for the Lambda function:
    1. Navigate to the CloudWatch console.
    2. Choose /aws/lambda/Remediate_Macie_S3_Bucket.
    3. Choose the most recent log stream and review the logs to see what actions were taken on the resources.

Next steps and customization

The solution in this post has a custom action for an S3 object and an S3 bucket, and is meant to serve as a template. You could modify the Lambda functions associated with the custom actions to take different or additional actions that are specific to your environment and data classification.

Additionally, I walked through specific Security Hub custom actions for Macie policy (bucket) or sensitive data (objects) findings. If you have defined actions to take for both, you could consolidate the custom actions and invoke a Lambda function that parses information from the Security Hub Macie finding to determine if it is a policy or sensitive data finding.

The two disabled EventBridge rules deployed as part of the solution are examples that can be leveraged for auto-remediation. After your security team has used the Security Hub custom actions to remediate findings for a while, they might see a trend where they always want to take specific actions. At that point, you can enable the following EventBridge rules so that action is taken automatically, without requiring the security team to select a custom action in the Security Hub console (a short example follows the list):

  • Autoremediate_Macie_Policy_Finding
  • Autoremediate_Macie_Sensitive_Data_Finding
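For example, the rules can be enabled from the EventBridge console or programmatically, as in this short Python (Boto3) snippet:

import boto3

events = boto3.client("events")
for rule_name in ("Autoremediate_Macie_Policy_Finding", "Autoremediate_Macie_Sensitive_Data_Finding"):
    events.enable_rule(Name=rule_name)  # switches the rule from DISABLED to ENABLED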

Conclusion

In this post, you deployed a solution that allows your security team to take automated actions against Macie sensitive data and policy findings in Security Hub by using custom actions in the AWS console. We walked through what the solution does and how it can be customized for your use case.

If you have feedback about this post, submit comments in the Comments section below. If you have any questions about this post, start a thread on the AWS Security Hub forum or Amazon Macie forum.

Want more AWS Security news? Follow us on Twitter.

Jonathan Nguyen

Jonathan is a Shared Delivery Team Senior Security Consultant at AWS. His background is in AWS Security with a focus on threat detection and incident response. Today, he helps enterprise customers develop a comprehensive security strategy and deploy security solutions at scale, and he trains customers on AWS Security best practices.

Automating Amazon EC2-Windows EBS Volumes monitoring and creating alarms

Post Syndicated from Sheila Busser original https://aws.amazon.com/blogs/compute/automating-amazon-ec2-windows-ebs-volumes-monitoring-and-creating-alarms/

This blog post is written by Santhosh Kumar Adapa, Database Consultant, AWS WWCO ProServe; Jeevan Shetty, Database Consultant, AWS WWCO ProServe; and Bhanu Ganesh Gudivada, Consultant, Databases, AWS WWCO ProServe.

Customers who are running fleets of Amazon Elastic Compute Cloud (Amazon EC2) instances use advanced monitoring techniques to observe their operational performance. Capabilities like aggregated and custom dimensions help customers categorize and customize their metrics across server fleets for fast and efficient decision making. Customers require visibility into not only infrastructure metrics (such as CPU and memory), but also disk usage metrics.

Monitoring Amazon EC2-Windows Amazon Elastic Block Store (Amazon EBS) Volumes usage is critical, especially when customers have a large fleet of Amazon EC2 Windows servers running to host their databases and applications in AWS. Generally, we see issues with EC2 instances running out of disk space, and free disk space isn’t a metric that is directly available with Amazon CloudWatch. Amazon CloudWatch agent helps solve this problem. After installing and configuring the CloudWatch agent on your EC2 instance, the agent will send metric data with the disk utilization to CloudWatch. The next step is to create a CloudWatch alarm to monitor the disk utilization metric.

In this post, we showcase the steps to automate the monitoring and creating alarms for EBS volumes attached to Amazon EC2 Windows instances. Alarms are created using AWS Lambda that monitors the free disk space and alerts whenever thresholds are crossed using Amazon Simple Notification Service (Amazon SNS).

Solution overview

To demonstrate the solution we first install and configure the CloudWatch agent in your Amazon EC2 Windows instance, and then the agent will send metric data with the disk utilization to CloudWatch. To monitor the disk on each Amazon EC2 Windows instance, we’ll use two custom Metrics, “FreeStorageSpaceInMB” and “FreeStorageSpaceInPercent”, that are collected by CloudWatch agent and pushed to CloudWatch.

The following diagram illustrates the architecture used in this post:

architecture used in this post

  1. Amazon EC2 Windows instance with attached Amazon EBS Volumes to be monitored for free disk usage. The EC2 instance is configured with Amazon CloudWatch Agent.
  2. The CloudWatch agent is configured to collect the “FreeStorageSpaceInMB” and “FreeStorageSpaceInPercent” metrics and push them to Amazon CloudWatch.
  3. Lambda function that can be invoked to create CloudWatch alarms for each disk attached to the EC2 instance.
  4. CloudWatch alarms are created with Warning and Critical thresholds based on storage size.
  5. Amazon SNS is used to send alerts when the CloudWatch Alarms crosses the threshold.
  6. AWS Identity and Access Management (IAM) to provide permission to the Lambda function to get Amazon EBS metrics and to create CloudWatch Alarms.

Prerequisites

You will need the following prerequisites:

  • To implement this solution, you must have an Amazon EC2 Windows instance configured with Amazon CloudWatch Agent by following the steps documented in the article – How to monitor Windows and Linux servers and get internal performance metrics.
  • To monitor the “FreeStorageSpaceInMB” and “FreeStorageSpaceInPercent” metrics for Amazon EBS volumes attached to the EC2 instance, the CloudWatch agent configuration JSON should have the following section:
"LogicalDisk": {
	"measurement": [
	{
		"name":"% Free Space",
		"rename":"FreeStorageSpaceInPercent",
		"unit":"Percent"
	},
	{
		"name":"Free Megabytes",
		"rename":"FreeStorageSpaceInMB",
		"unit":"Megabytes"
	}
	],
	"metrics_collection_interval": 10,
	"resources": [
		"*"
	]
},
  • An Amazon EC2 host or bastion host with an IAM role attached that has permissions to create an IAM role and a Lambda function, and to run AWS CLI commands. A Lambda function and an IAM role are created using the AWS Serverless Application Model (AWS SAM).

AWS SAM

In this section, we provide the steps to create an IAM role and deploy a Lambda function using AWS SAM.

  1. Log in to the Amazon EC2 host and install the AWS SAM CLI.
  2. Download the source code and deploy it by running the following command:
git clone https://github.com/aws-samples/aws-ec2-windows-ebs-volumes-monitoring

cd aws-ec2-windows-ebs-volumes-monitoring/ebs_volumes_monitoring
sam deploy --guided

3. Provide the following parameters:

    1. Stack Name – Name for the AWS CloudFormation stack.
    2. AWS Region – AWS Region where the stack is being deployed.

The following is the sample output when you run sam deploy --guided with default arguments:

=========================================
Stack Name [ebs-volumes-monitoring]: ebs-volumes-monitoring
AWS Region [us-west-2]:
#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
Confirm changes before deploy [y/N]:
#SAM needs permission to be able to create roles to connect to the resources in your template
Allow SAM CLI IAM role creation [Y/n]:
#Preserves the state of previously provisioned resources when an operation fails
Disable rollback [y/N]:
Save arguments to configuration file [Y/n]:
SAM configuration file [samconfig.toml]:
SAM configuration environment [default]:

In the following sections, we describe the AWS services deployed with AWS SAM.

IAM role

AWS SAM creates an IAM role with policies to describe EC2 instances, as well as List, Get, and Put CloudWatch metrics. Furthermore, it attaches an AWS managed IAM policy called AWSLambdaBasicExecutionRole to the IAM role. This role is attached to the Lambda functions to create Amazon EBS volume alarms for EC2 instances.

Lambda function

AWS SAM also deploys the Lambda function. It uses Python 3.8 and accepts two parameters:

  1. Hostname: The Amazon EC2 Windows instance name. If you must configure alarms for multiple servers, you can use a wildcard character, such as Instance_name* or Instance_name? (a sketch of this lookup follows the list).
  2. sns_topic_name: ARN of the SNS topic that is used to configure CloudWatch Alarms. Notification is sent to the SNS topic when the Amazon EBS Volume metric crosses the threshold.
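The following Python (Boto3) sketch shows one way the Lambda function might expand the hostname pattern into matching instances; the use of the Name tag and the filters shown are assumptions, not the solution’s exact implementation.

import boto3

ec2 = boto3.client("ec2")

def find_instances(hostname_pattern):
    """Return the IDs of running instances whose Name tag matches the pattern."""
    instance_ids = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[
            {"Name": "tag:Name", "Values": [hostname_pattern]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    ):
        for reservation in page["Reservations"]:
            instance_ids.extend(i["InstanceId"] for i in reservation["Instances"])
    return instance_ids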

Invoking Lambda function

After the SAM deployment is successful, we can invoke the Lambda function with the instance name and the SNS topic ARN. The Lambda function creates two alarms (Warning and Critical) for every Amazon EBS volume attached to the instance. The Warning and Critical thresholds can be changed in the Lambda code, for example to use different values depending on the size of the disk drive. Furthermore, the alarms are configured to send notifications to the SNS topic. The following is a sample command to invoke the Lambda function:

aws lambda invoke --function-name ec2-ebs-metric --cli-binary-format raw-in-base64-out \
--payload '{"hostname": "Windows*", "sns_topic_name": "arn:aws:sns:us-west-2:123456789:notify_dba" }' response.json
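For reference, the sketch below shows how one of the two alarms might be created for a single drive with Python (Boto3). The CWAgent dimension names and the threshold are assumptions that depend on your agent configuration and on the values set in the Lambda code.

import boto3

cloudwatch = boto3.client("cloudwatch")

def create_warning_alarm(instance_id, drive_letter, sns_topic_arn, threshold_pct=20):
    """Alarm when free space on the drive drops to or below the threshold percentage."""
    cloudwatch.put_metric_alarm(
        AlarmName=f"Warning-FreeStorageSpace-{instance_id}-{drive_letter}",
        Namespace="CWAgent",
        MetricName="FreeStorageSpaceInPercent",
        Dimensions=[
            {"Name": "InstanceId", "Value": instance_id},
            {"Name": "instance", "Value": drive_letter},  # dimension names are illustrative
        ],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=1,
        Threshold=threshold_pct,
        ComparisonOperator="LessThanOrEqualToThreshold",
        AlarmActions=[sns_topic_arn],
    )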

Verifying CloudWatch Alarms:

Verify the CloudWatch Alarms that are created in the CloudWatch console. The following screenshot shows the CloudWatch alarms created for an EC2 instance with four disks. There are two alarms (Warning and Critical) created for every disk (four disks in total). Therefore, we see eight CloudWatch alarms.

CloudWatch console alarms

Checking CloudWatch Logs:

After running the Lambda function, to verify the log, go to the Lambda service page, select the Lambda function that was created, navigate to the Monitor tab, and then select “View logs in CloudWatch”. Then, go to the latest log file to check the CloudWatch log files for any errors.

Select the latest log stream to check the details of the last Lambda function execution.

The log file shows the full details of the Lambda function execution. Furthermore, it shows the CloudWatch alarms configured for each disk, as well as any errors generated during execution.

Cleanup

To clean up the resources used in this post, complete the following steps:

  1. Delete the CloudFormation stack by running the following command, replacing STACK_NAME with the stack name provided in step 3a in the AWS SAM section above:
sam delete --stack-name STACK_NAME
  2. Confirm that the stack has been deleted by running the following command, replacing STACK_NAME as in the previous step:
aws cloudformation list-stacks --query "StackSummaries[?contains(StackName,'STACK_NAME')].StackStatus"
  3. Delete any CloudWatch alarms created by the Lambda function by following the document – Editing or deleting a CloudWatch alarm.

Conclusion

In this post, we demonstrated why monitoring Amazon EC2 Windows EBS volume usage is critical, especially when customers have a large fleet of Amazon EC2 Windows servers hosting their databases and applications in the cloud. We showcased the process of automating free disk space monitoring using Lambda and notifying through Amazon SNS when disks cross the storage threshold. By implementing such monitoring, customers can prevent EC2 instances from running out of disk space and thus avoid critical production outages.

Provide any thoughts or questions in the comments section. We also encourage you to explore CloudWatch monitoring and try out additional use cases mentioned in the documentation.

Streamlining evidence collection with AWS Audit Manager

Post Syndicated from Nicholas Parks original https://aws.amazon.com/blogs/security/streamlining-evidence-collection-with-aws-audit-manager/

In this post, we will show you how to deploy a solution into your Amazon Web Services (AWS) account that enables you to simply attach manual evidence to controls using AWS Audit Manager. Making evidence-collection as seamless as possible minimizes audit fatigue and helps you maintain a strong compliance posture.

As an AWS customer, you can use APIs to deliver high quality software at a rapid pace. If you have compliance-focused teams that rely on manual, ticket-based processes, you might find it difficult to document audit changes as those changes increase in velocity and volume.

As your organization works to meet audit and regulatory obligations, you can save time by incorporating audit compliance processes into a DevOps model. You can use modern services like Audit Manager to make this easier. Audit Manager automates evidence collection and generates reports, which helps reduce manual auditing efforts and enables you to scale your cloud auditing capabilities along with your business.

AWS Audit Manager uses services such as AWS Security Hub, AWS Config, and AWS CloudTrail to automatically collect and organize evidence, such as resource configuration snapshots, user activity, and compliance check results. However, for controls represented in your software or processes without an AWS service-specific metric to gather, you need to manually create and provide documentation as evidence to demonstrate that you have established organizational processes to maintain compliance. The solution in this blog post streamlines these types of activities.

Solution architecture

This solution creates an HTTPS API endpoint, which allows integration with other software development lifecycle (SDLC) solutions, IT service management (ITSM) products, and clinical trial management systems (CTMS) solutions that capture trial process change amendment documentation (in the case of pharmaceutical companies who use AWS to build robust pharmacovigilance solutions). The endpoint can also be a backend microservice to an application that allows contract research organizations (CRO) investigators to add their compliance supporting documentation.

In this solution’s current form, you can submit an evidence file payload along with the assessment and control details to the API, and the solution ties all the information together for the audit report. This post and solution are directed toward engineering teams who are looking for a way to accelerate evidence collection. To maximize the effectiveness of this solution, your engineering team will also need to collaborate with cross-functional groups, such as audit and business stakeholders, to design a process and service that constructs and sends the message(s) to the API, and to scale out usage across the organization.

To download the code for this solution, and the configuration that enables you to set up auto-ingestion of manual evidence, see the aws-audit-manager-manual-evidence-automation GitHub repository.

Architecture overview

In this solution, you use AWS Serverless Application Model (AWS SAM) templates to build the solution and deploy to your AWS account. See Figure 1 for an illustration of the high-level architecture.

Figure 1. The architecture of the AWS Audit Manager automation solution

Figure 1. The architecture of the AWS Audit Manager automation solution

The SAM template creates resources that support the following workflow:

  1. A client can call an Amazon API Gateway endpoint by sending a payload that includes assessment details and the evidence payload.
  2. An AWS Lambda function implements the API to handle the request (a minimal handler sketch follows this list).
  3. The Lambda function uploads the evidence to an Amazon Simple Storage Service (Amazon S3) bucket (3a) and uses AWS Key Management Service (AWS KMS) to encrypt the data (3b).
  4. The Lambda function also initializes the AWS Step Functions workflow.
  5. Within the Step Functions workflow, a Standard Workflow calls two Lambda functions. The first looks for a matching control within an assessment, and the second updates the control within the assessment with the evidence.
  6. When the Step Functions workflow concludes, it sends a notification for success or failure to subscribers of an Amazon Simple Notification Service (Amazon SNS) topic.
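As referenced in step 2, the following is a minimal Python sketch of how the API-handling Lambda function could be structured. It is not the repository's actual code: the environment variable names and the JSON body format are assumptions, and the real solution parses multipart form data rather than a JSON body.

import base64
import json
import os
import boto3

s3 = boto3.client("s3")
sfn = boto3.client("stepfunctions")

def handler(event, context):
    # Assumed request format; the deployed solution parses multipart form data.
    body = json.loads(event["body"])
    key = f"evidence/{body['AssessmentName']}/{body['ControlIdName']}"

    # Steps 3a/3b: store the evidence object (the bucket is encrypted with KMS).
    s3.put_object(
        Bucket=os.environ["EVIDENCE_BUCKET"],             # assumed env var
        Key=key,
        Body=base64.b64decode(body["Payload"]),
    )

    # Step 4: start the Step Functions workflow that attaches the evidence.
    sfn.start_execution(
        stateMachineArn=os.environ["STATE_MACHINE_ARN"],  # assumed env var
        input=json.dumps({
            "assessmentName": body["AssessmentName"],
            "controlSetName": body["ControlSetName"],
            "controlIdName": body["ControlIdName"],
            "s3ResourcePath": f"s3://{os.environ['EVIDENCE_BUCKET']}/{key}",
        }),
    )
    return {"statusCode": 202, "body": json.dumps({"evidenceKey": key})}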

Deploy the solution

The project available in the aws-audit-manager-manual-evidence-automation GitHub repository contains source code and supporting files for a serverless application you can deploy with the AWS SAM command line interface (CLI). It includes the following files and folders:

  • src: Code for the application’s Lambda implementation of the Step Functions workflow. It also includes a Step Functions definition file.
  • template.yml: A template that defines the application’s AWS resources.

Resources for this project are defined in the template.yml file. You can update the template to add AWS resources through the same deployment process that updates your application code.

Prerequisites

This solution assumes the following:

  1. AWS Audit Manager is enabled.
  2. You have already created an assessment in AWS Audit Manager.
  3. You have the necessary tools to use the AWS SAM CLI (see details in the table that follows).

For more information about setting up Audit Manager and selecting a framework, see Getting started with Audit Manager in the blog post AWS Audit Manager Simplifies Audit Preparation.

The AWS SAM CLI is an extension of the AWS CLI that adds functionality for building and testing Lambda applications. The AWS SAM CLI uses Docker to run your functions in an Amazon Linux environment that matches Lambda. It can also emulate your application’s build environment and API.

To use the AWS SAM CLI, you need the following tools:

  • AWS SAM CLI: Install the AWS SAM CLI.
  • Node.js: Install Node.js 14, including the npm package management tool.
  • Docker: Install Docker community edition.

To deploy the solution

  1. Open your terminal and use the following command to create a folder to clone the project into, then navigate to that folder. Be sure to replace <FolderName> with your own value.

    mkdir Desktop/<FolderName> && cd $_

  2. Clone the project into the folder you just created by using the following command.

    git clone https://github.com/aws-samples/aws-audit-manager-manual-evidence-automation.git

  3. Navigate into the newly created project folder by using the following command.

    cd aws-audit-manager-manual-evidence-automation

  4. In the AWS SAM shell, use the following command to build the source of your application.

    sam build

  5. In the AWS SAM shell, use the following command to package and deploy your application to AWS. Be sure to replace <DOC-EXAMPLE-BUCKET> with your own unique S3 bucket name.

    sam deploy --guided --parameter-overrides paramBucketName=<DOC-EXAMPLE-BUCKET>

  6. When prompted, enter the AWS Region where AWS Audit Manager was configured. For the rest of the prompts, leave the default values.
  7. To activate the IAM authentication feature for API gateway, override the default value by using the following command.

    paramUseIAMwithGateway=AWS_IAM

To test the deployed solution

After you deploy the solution, run an invocation like the one below for an assessment (using curl). Be sure to replace <YOURAPIENDPOINT> and <AWS REGION> with your own values.

curl --location --request POST \
'https://<YOURAPIENDPOINT>.execute-api.<AWS REGION>.amazonaws.com/Prod' \
--header 'x-api-key: ' \
--form 'Payload=@"<PATH TO FILE>"' \
--form 'AssessmentName="GxP21cfr11"' \
--form 'ControlSetName="General requirements"' \
--form 'ControlIdName="11.100(a)"'

Check to see that your file is correctly attached to the control for your assessment.

Form-data interface parameters

The API implements a form-data interface that expects four parameters:

  1. AssessmentName: The name for the assessment in Audit Manager. In this example, the AssessmentName is GxP21cfr11.
  2. ControlSetName: The display name for a control set within an assessment. In this example, the ControlSetName is General requirements.
  3. ControlIdName: A particular control within a control set. In this example, the ControlIdName is 11.100(a).
  4. Payload: The file representing the evidence to be uploaded.

As a refresher of Audit Manager concepts, evidence is collected for a particular control. Controls are grouped into control sets. Control sets can be grouped into a particular framework. The assessment is considered an implementation, or an instance, of the framework. For more information, see AWS Audit Manager concepts and terminology.
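Under the hood, attaching a file as manual evidence maps to the Audit Manager BatchImportEvidenceToAssessmentControl API. The snippet below is a hedged Python sketch of that final step using boto3; the identifiers are placeholders that the workflow's lookup step would resolve from the names described above.

import boto3

auditmanager = boto3.client("auditmanager")

def attach_evidence(assessment_id, control_set_id, control_id, s3_uri):
    # Attach an object already uploaded to S3 as manual evidence (sketch).
    return auditmanager.batch_import_evidence_to_assessment_control(
        assessmentId=assessment_id,
        controlSetId=control_set_id,
        controlId=control_id,
        manualEvidence=[{"s3ResourcePath": s3_uri}],
    )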

To clean up the deployed solution

To clean up the solution, use the following commands to delete the AWS CloudFormation stack and your S3 bucket. Be sure to replace <YourStackId> and <DOC-EXAMPLE-BUCKET> with your own values.

aws cloudformation delete-stack --stack-name <YourStackId>
aws s3 rb s3://<DOC-EXAMPLE-BUCKET> --force

Conclusion

This solution provides a way to allow for better coordination between your software delivery organization and compliance professionals. This allows your organization to continuously deliver new updates without overwhelming your security professionals with manual audit review tasks.

Next steps

There are various ways to extend this solution.

  1. Update the API Lambda implementation to be a webhook for your favorite software development lifecycle (SDLC) or IT service management (ITSM) solution.
  2. Modify the steps within the Step Functions state machine to more closely match your unique compliance processes.
  3. Use AWS CodePipeline to start Step Functions state machines natively, or integrate a variation of this solution with any continuous compliance workflow that you have.

Learn more about AWS Audit Manager, DevOps, and AWS for Health, and start building!

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Nicholas Parks

Nicholas has been using AWS since 2010 across various enterprise verticals including healthcare, life sciences, financial, retail, and telecommunications. Nicholas focuses on modernizations in pursuit of new revenue as well as application migrations. He specializes in Lean, DevOps cultural change, and Continuous Delivery.

Brian Tang

Brian Tang is an AWS Solutions Architect based out of Boston, MA. He has 10 years of experience helping enterprise customers across a wide range of industries complete digital transformations by migrating business-critical workloads to the cloud. His core interests include DevOps and serverless-based solutions. Outside of work, he loves rock climbing and playing guitar.

Forwarding emails automatically based on content with Amazon Simple Email Service

Post Syndicated from Murat Balkan original https://aws.amazon.com/blogs/messaging-and-targeting/forwarding-emails-automatically-based-on-content-with-amazon-simple-email-service/

Introduction

Email is one of the most popular channels consumers use to interact with support organizations. In its most basic form, consumers will send their email to a catch-all email address where it is further dispatched to the correct support group. Often, this requires a person to inspect content manually. Some IT organizations even have a dedicated support group that handles triaging the incoming emails before assigning them to specialized support teams. Triaging each email can be challenging, and delays in email routing and support processes can reduce customer satisfaction. By utilizing Amazon Simple Email Service’s deep integration with Amazon S3, AWS Lambda, and other AWS services, the task of categorizing and routing emails is automated. This automation results in increased operational efficiencies and reduced costs.

This blog post shows you how a serverless application receives emails with Amazon SES and delivers them to an Amazon S3 bucket. The application uses Amazon Comprehend to identify the dominant language of the message body. It then looks up that language in an Amazon DynamoDB table to find the email address of the support group that specializes in it. As the last step, it forwards the email via Amazon SES to its destination. Archiving incoming emails to Amazon S3 also enables further processing or auditing.

Architecture

By completing the steps in this post, you will create a system that uses the architecture illustrated in the following image:

Architecture showing how to forward emails by content using Amazon SES

The flow of events starts when a customer sends an email to a generic support email address such as info@YOUR_DOMAIN_NAME_HERE. Amazon SES listens for this email via a receipt rule. As per the rule, incoming messages are written to a specified Amazon S3 bucket with a given prefix.

This bucket and prefix are configured with S3 Events to trigger a Lambda function on object creation events. The Lambda function reads the email object, parses the contents, and sends them to Amazon Comprehend for language detection.

The Lambda function then looks up the detected language code in an Amazon DynamoDB table, which contains the mappings between language codes and support group email addresses. One support group could answer English emails, while another support group answers French emails. The Lambda function determines the destination address and re-sends the same email by performing an email forward operation. If the lookup does not return a destination address, or the language cannot be detected, the email is forwarded to a catch-all email address specified during the application deployment.
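A condensed Python sketch of that lookup logic is shown below. The table and attribute names follow the examples later in this post; the CATCHALL_ADDRESS environment variable is an assumption, and the MIME parsing and SES forwarding steps are omitted for brevity.

import os
import boto3

comprehend = boto3.client("comprehend")
table = boto3.resource("dynamodb").Table("language-lookup")

def find_destination(message_body):
    # Detect the dominant language of the email body.
    languages = comprehend.detect_dominant_language(Text=message_body)["Languages"]
    code = languages[0]["LanguageCode"] if languages else None

    # Look up the support group address for that language code.
    item = table.get_item(Key={"language": code}).get("Item") if code else None

    # Fall back to the catch-all address supplied at deployment time (assumed env var).
    return item["destination"] if item else os.environ["CATCHALL_ADDRESS"]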

In this example, Amazon SES hosts the destination email addresses used for forwarding, but this is not a requirement. External email servers can also receive the forwarded emails.

Prerequisites

To use Amazon SES for receiving email messages, you need to verify a domain that you own. Refer to the documentation to verify your domain with the Amazon SES console. If you do not have a domain name, you can register one with Amazon Route 53.

Deploying the Sample Application

Clone this GitHub repository to your local machine and install and configure AWS SAM with a test AWS Identity and Access Management (IAM) user.

You will use AWS SAM to deploy the remaining parts of this serverless architecture.

The AWS SAM template creates the following resources:

  • An Amazon DynamoDB mapping table (language-lookup) contains information about language codes and associates them with destination email addresses.
  • An AWS Lambda function (BlogEmailForwarder) that reads the email content, parses it, detects the language, looks up the forwarding destination email address, and sends the email.
  • An Amazon S3 bucket, which will store the incoming emails.
  • IAM roles and policies.

To start the AWS SAM deployment, navigate to the root directory of the repository you downloaded, where the template.yaml AWS SAM template resides. AWS SAM also requires you to specify an Amazon Simple Storage Service (Amazon S3) bucket to hold the deployment artifacts. If you haven’t already created a bucket for this purpose, create one now. Refer to the documentation to learn how to create an Amazon S3 bucket. The bucket should be readable and writable by an AWS Identity and Access Management (IAM) user.

At the command line, enter the following command to package the application:

sam package --template template.yaml --output-template-file output_template.yaml --s3-bucket BUCKET_NAME_HERE

In the preceding command, replace BUCKET_NAME_HERE with the name of the Amazon S3 bucket that should hold the deployment artifacts.

AWS SAM packages the application and copies it into this Amazon S3 bucket.

When the AWS SAM package command finishes running, enter the following command to deploy the package:

sam deploy --template-file output_template.yaml --stack-name blogstack --capabilities CAPABILITY_IAM --parameter-overrides [email protected] YOUR_DOMAIN_NAME_HERE [email protected] YOUR_DOMAIN_NAME_HERE

In the preceding command, replace YOUR_DOMAIN_NAME_HERE with the domain name you verified with Amazon SES. This domain is also used in other commands and configurations introduced later.

This example uses “blogstack” as the stack name; you can change this to any name you want. When you run this command, AWS SAM shows the progress of the deployment.

Configure the Sample Application

Now that you have deployed the application, you will configure it.

Configuring Receipt Rules

To deliver incoming messages to the Amazon S3 bucket, you need to create a rule set and a receipt rule under it.

Note: This blog uses Amazon SES console to create the rule sets. To create the rule sets with AWS CloudFormation, refer to the documentation.

  1. Navigate to the Amazon SES console. From the left navigation choose Rule Sets.
  2. Choose the Create a Receipt Rule button in the right pane.
  3. Add info@YOUR_DOMAIN_NAME_HERE as the first recipient address by entering it into the text box and choosing Add Recipient.

Choose the Next Step button to move on to the next step.

  4. On the Actions page, select S3 from the Add action drop-down to reveal the S3 action’s details. Select the S3 bucket that was created by the AWS SAM template. It is in the format of your_stack_name-inboxbucket-randomstring. You will find the exact name in the outputs section of the AWS SAM deployment under the key name InboxBucket or by visiting the AWS CloudFormation console. Set the Object key prefix to info/. This tells Amazon SES to add this prefix to all messages destined for this recipient address. This way, you can reuse the same bucket for different recipients.

Choose the Next Step button to move on to the next step.

On the Rule Details page, give this rule a name in the Rule name field. This example uses the name info-recipient-rule. Leave the rest of the fields at their default values.

Choose the Next Step button to move on to the next step.

  5. Review your settings on the Review page and finalize rule creation by choosing Create Rule.

  6. In this example, you host the destination email addresses in Amazon SES rather than forwarding the messages to an external email server. This way, you can see the forwarded messages in your Amazon S3 bucket under different prefixes. To host the destination email addresses, you need to create different rules under the default rule set. Create three additional rules for the catchall@YOUR_DOMAIN_NAME_HERE, english@YOUR_DOMAIN_NAME_HERE, and french@YOUR_DOMAIN_NAME_HERE email addresses by repeating steps 2 to 5. For the Amazon S3 prefixes, use catchall/, english/, and french/ respectively.

 

Configuring Amazon DynamoDB Table

To configure the Amazon DynamoDB table that is used by the sample application

  1. Navigate to the Amazon DynamoDB console and open the Tables view. Inspect the table created by the AWS SAM application.

The language-lookup table is where languages and their support group mappings are kept. You need to create an item for each language, plus an item that holds the default destination email address used when no language match is found. Amazon Comprehend supports more than 60 different languages. You can visit the documentation for the supported languages and add their language codes to this lookup table to enhance this application.

  2. To start inserting items, choose the language-lookup table to open the table overview page.
  3. Select the Items tab and choose Create item. From the dropdown, select Text. Add the following JSON content and choose Save to create your first mapping object. While adding the following object, replace the destination attribute’s value with an email address you own. The email messages will be forwarded to that address.

{
  "language": "en",
  "destination": "english@YOUR_DOMAIN_NAME_HERE"
}

Lastly, create an item for French language support.

{
  "language": "fr",
  "destination": "french@YOUR_DOMAIN_NAME_HERE"
}
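If you prefer to insert these items from a script rather than the console, a short boto3 snippet such as the following achieves the same result. The table and attribute names match the JSON above; replace the addresses with ones you own.

import boto3

table = boto3.resource("dynamodb").Table("language-lookup")

for code, address in [("en", "english@YOUR_DOMAIN_NAME_HERE"),
                      ("fr", "french@YOUR_DOMAIN_NAME_HERE")]:
    table.put_item(Item={"language": code, "destination": address})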

Testing

Now that the application is deployed and configured, you will test it.

  1. Use your favorite email client to send the following email to the info@YOUR_DOMAIN_NAME_HERE address.

Subject: I need help

Body:

Hello, I’d like to return the shoes I bought from your online store. How can I do this?

After the email is sent, navigate to the Amazon S3 console to inspect the contents of the Amazon S3 bucket that backs the Amazon SES rule sets. You can also check the AWS Lambda logs in the Amazon CloudWatch console to confirm that the Lambda function was triggered and ran successfully. You should receive an email with the same content at the address you defined for the English language.

  2. Next, send another email with the same content, this time in French.

Subject: j’ai besoin d’aide

Body:

Bonjour, je souhaite retourner les chaussures que j’ai achetées dans votre boutique en ligne. Comment puis-je faire ceci?

 

If a message is not matched to a language in the lookup table, the Lambda function forwards it to the catch-all email address that you provided during the AWS SAM deployment.

Inspect the new email objects under the english/, french/, and catchall/ prefixes to observe the forwarding behavior.

Continue experimenting with the sample application by sending different email contents to the info@YOUR_DOMAIN_NAME_HERE address, or by adding other language codes and email address combinations to the mapping table. You can find the available languages and their codes in the documentation. When adding support for a new language, don’t forget to associate a new email address and Amazon S3 bucket prefix by defining a new rule.

Cleanup

To clean up the resources you used in your account,

  1. Navigate to the Amazon S3 console and delete the inbox bucket’s contents. You will find the name of this bucket in the outputs section of the AWS SAM deployment under the key name InboxBucket or by visiting the AWS CloudFormation console.
  2. Navigate to AWS CloudFormation console and delete the stack named “blogstack”.
  3. After the stack is deleted, remove the domain from Amazon SES. To do this, navigate to the Amazon SES Console and choose Domains from the left navigation. Select the domain you want to remove and choose Remove button to remove it from Amazon SES.
  4. From the Amazon SES Console, navigate to the Rule Sets from the left navigation. On the Active Rule Set section, choose View Active Rule Set button and delete all the rules you have created, by selecting the rule and choosing Action, Delete.
  5. On the Rule Sets page choose Disable Active Rule Set button to disable listening for incoming email messages.
  6. On the Rule Sets page, Inactive Rule Sets section, delete the only rule set, by selecting the rule set and choosing Action, Delete.
  7. Navigate to CloudWatch console and from the left navigation choose Logs, Log groups. Find the log group that belongs to the BlogEmailForwarderFunction resource and delete it by selecting it and choosing Actions, Delete log group(s).
  8. Also delete the Amazon S3 bucket you used for packaging and deploying the AWS SAM application.

 

Conclusion

This solution shows how to use Amazon SES to classify email messages by their dominant language and forward them to the respective support groups. You can use the same techniques to implement similar scenarios, such as forwarding emails based on custom key entities like product codes, or removing PII from emails with Amazon Comprehend before forwarding.

With its native integrations with AWS services, Amazon SES allows you to enhance your email applications with different AWS Cloud capabilities easily.

To learn more about email forwarding with Amazon SES, visit the documentation and AWS blogs.

Caching data and configuration settings with AWS Lambda extensions

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/caching-data-and-configuration-settings-with-aws-lambda-extensions/

This post is written by Hari Ohm Prasath Rajagopal, Senior Modernization Architect and Vamsi Vikash Ankam, Technical Account Manager

In this post, I show how to build a flexible in-memory AWS Lambda caching layer using Lambda extensions. Lambda functions use REST API calls to access the data and configuration from the cache. This can reduce latency and cost when consuming data from AWS services such as Amazon DynamoDB, AWS Systems Manager Parameter Store, and AWS Secrets Manager.

Applications making frequent API calls to retrieve static data can benefit from a caching layer. This can reduce the function’s latency, particularly for synchronous requests, as the data is retrieved from the cache instead of an external service. The cache can also reduce costs by reducing the number of calls to downstream services.

There are two types of cache to consider in this situation:

Lambda extensions are a new way for tools to integrate more easily into the Lambda execution environment and control and participate in Lambda’s lifecycle. They use the Extensions API, a new HTTP interface, to register for lifecycle events during function initialization, invocation, and shutdown.

They can also use environment variables to add options and tools to the runtime, or use wrapper scripts to customize the runtime startup behavior. The Lambda cache uses Lambda extensions to run as a separate process.
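For context, an external extension registers itself with the Extensions API and then polls for lifecycle events. The demo extension in this post is written in Go, so the following Python sketch is purely illustrative of the general shape of that register-and-poll loop.

import json
import os
import urllib.request

API = f"http://{os.environ['AWS_LAMBDA_RUNTIME_API']}/2020-01-01/extension"

def register(name):
    # Register the extension for INVOKE and SHUTDOWN events.
    req = urllib.request.Request(
        f"{API}/register",
        data=json.dumps({"events": ["INVOKE", "SHUTDOWN"]}).encode(),
        headers={"Lambda-Extension-Name": name},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers["Lambda-Extension-Identifier"]

def event_loop(extension_id):
    # Block on the next lifecycle event; refresh or serve cached items on INVOKE.
    while True:
        req = urllib.request.Request(
            f"{API}/event/next",
            headers={"Lambda-Extension-Identifier": extension_id},
        )
        with urllib.request.urlopen(req) as resp:
            event = json.load(resp)
        if event["eventType"] == "SHUTDOWN":
            break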

To learn more about how to use extensions with your functions, read “Introducing AWS Lambda extensions”.

Creating a cache using Lambda extensions

To set up the example, visit the GitHub repo, and follow the instructions in the README.md file.

The demo uses AWS Serverless Application Model (AWS SAM) to deploy the infrastructure. The walkthrough requires AWS SAM CLI (minimum version 0.48) and an AWS account.

To install the example:

  1. Create an AWS account if you do not already have one and log in.
  2. Clone the repo to your local development machine:
     git clone https://github.com/aws-samples/aws-lambda-extensions
     cd aws-lambda-extensions/cache-extension-demo/
  3. If you are not running in a Linux environment, ensure that your build architecture matches the Lambda execution environment by compiling with:
     GOOS=linux GOARCH=amd64
  4. Build the Go binary extension with the following command:
     go build -o bin/extensions/cache-extension-demo main.go
  5. Ensure that the extension files are executable:
     chmod +x bin/extensions/cache-extension-demo
  6. Update the parameters region value in ../example-function/config.yaml with the Region where you are deploying the function:
     parameters:
       - region: us-west-2
  7. Build the function dependencies:
     cd SAM
     sam build

     AWS SAM build

  8. Deploy the AWS resources specified in the template.yml file:
     sam deploy --guided
  9. During the prompts:
     • Enter a stack name cache-extension-demo.
     • Enter the same AWS Region specified previously.
     • Accept the default DatabaseName. You can specify a custom database name, and also update the ../example-function/config.yaml and index.js files with the new database name.
     • Enter MySecret as the Secrets Manager secret.
     • Accept the defaults for the remaining questions.

     AWS SAM Deploy

    AWS SAM deploys:

    • A DynamoDB table.
    • The Lambda function ExtensionsCache-DatabaseEntry, which puts a sample item into the DynamoDB table.
    • An AWS Systems Manager Parameter Store parameter called CacheExtensions_Parameter1 with a value of MyParameter.
    • An AWS Secrets Manager secret called secret_info with a value of MySecret.
    • A Lambda layer called Cache_Extension_Layer.
    • A Lambda function using Node.js 12 called ExtensionsCache-SampleFunction. This reads the cached values via the extension from the DynamoDB table, Parameter Store, or Secrets Manager.
    • IAM permissions

    The cache extension is delivered as a Lambda layer and added to ExtensionsCache-SampleFunction.

    It is written as a self-contained binary in Golang, which makes the extension compatible with all of the supported runtimes. The extension caches the data from DynamoDB, Parameter Store, and Secrets Manager, and then runs a local HTTP endpoint to service the data. The Lambda function retrieves the configuration data from the cache using a local HTTP REST API call.

    Here is the architecture diagram.

    Extensions cache architecture diagram

    Extensions cache architecture diagram

    Once deployed, the extension performs the following steps:

    1. On start-up, the extension reads the config.yaml file, which determines which resources to cache. The file is deployed as part of the Lambda function.
    2. The boolean CACHE_EXTENSION_INIT_STARTUP Lambda environment variable specifies whether to load into cache the items specified in config.yaml. If false, the extension initializes an empty map with the names.
    3. The extension retrieves the required data based on the resources in the config.yaml file. This includes the data from DynamoDB, the configuration from Parameter Store, and the secret from Secrets Manager. The data is stored in memory.
    4. The extension starts a local HTTP server using TCP port 4000, which serves the cache items to the function. The Lambda function accesses the local in-memory cache by invoking the following endpoint: http://localhost:4000/<cachetype>?name=<name>.
    5. If the data is not available in the cache, or has expired, the extension accesses the corresponding AWS service to retrieve the data. It is cached first and then returned to the Lambda function. The CACHE_EXTENSION_TTL Lambda environment variable defines the refresh interval (defined based on Go time format, for example: 30s, 3m, etc.)

    This sequence diagram explains the data flow:

    Extensions cache sequence diagram

    Extensions cache sequence diagram

    Testing the example application

    Once the AWS SAM template is deployed, navigate to the AWS Lambda console.

    1. Select the function starting with the name ExtensionsCache-SampleFunction. Within the function code, the options array specifies which data to return from the cache. This is initially set to path: '/dynamodb?name=DynamoDbTable-pKey1-sKey1'
    2. Choose Configure test events to configure a test event.
    3. Enter a name for the Event name, accept the default payload, and select Create.
    4. Select Test to invoke the function. This returns the cached data from DynamoDB and logs the output.

      Successfully retrieve DynamoDB data from cache

    5. In the index.js file, amend the path statement to retrieve the Parameter Store configuration:
      const options = {
        "hostname": "localhost",
        "port": 4000,
        "path": "/parameters?name=CacheExtensions_Parameter1",
        "method": "GET"
      }
    6. Select Deploy to save the function configuration and select Test. The function returns the Parameter Store configuration item:

      Successfully retrieve Parameter Store data from cache

    7. In the function code, amend the path statement to retrieve the Secrets Manager secret:
      const options = {
        "hostname": "localhost",
        "port": 4000,
        "path": "/parameters?name=/aws/reference/secretsmanager/secret_info",
        "method": "GET"
      }
    8. Select Deploy to save the function configuration and select Test. The function returns the secret:

      Successfully retrieve Secrets Manager data from cache

    The benefits of using Lambda extensions

    There are a number of benefits to using a Lambda extension for this solution:

    1. Improved Lambda function performance as data is cached in memory by the extension during initialization.
    2. Fewer AWS API calls to external services, this can reduce costs and helps avoid throttling limits if services are accessed frequently.
    3. Cache data is stored in memory and not in a file within the Lambda execution environment. This means that no additional process is required to manage the lifecycle of the file. In-memory is also more secure, as data is not persisted to disk for subsequent function invocations.
    4. The function requires less code, as it only needs to communicate with the extension via HTTP to retrieve the data. The function does not have to have additional libraries installed to communicate with DynamoDB, Parameter Store, Secrets Manager, or the local file system.
    5. The cache extension is a Golang compiled binary and the executable can be shared with functions running other runtimes like Node.js, Python, Java, etc.
    6. Using a YAML template to store the details of what to cache makes it easier to configure and add additional services.

    Comparing the performance benefit

    To test the performance of the cache extension, I compare two tests:

    1. A Golang Lambda function that accesses a secret from AWS Secrets Manager for every invocation.
    2. The ExtensionsCache-SampleFunction, previously deployed using AWS SAM. This uses the cache extension to access the secrets from Secrets Manager, the function reads the value from the cache.

    Both functions are configured with 512 MB of memory and the function timeout is set to 30 seconds.

    I use Artillery to load test both Lambda functions. The load runs for 100 invocations over 2 minutes. I use Amazon CloudWatch metrics to view the function average durations.

    Test 1 shows a duration of 43 ms for the first invocation as a cold start. Subsequent invocations average 22 ms.

    Test 1 performance results

    Test 1 performance results

    Test 2 shows a duration of 16 ms for the first invocation as a cold start. Subsequent invocations average 3 ms.

    Test 2 performance results

    Test 2 performance results

    Using the Lambda extensions caching layer shows a significant performance improvement. Cold start invocation duration is reduced by 62% and subsequent invocations by 80%.

    In this example, the CACHE_EXTENSION_INIT_STARTUP environment variable flag is not configured. With the flag enabled for the extension, data is pre-fetched during extension initialization and the cold start time is further reduced.

    Conclusion

    Using Lambda extensions is an effective way to cache static data from external services in Lambda functions. This reduces function latency and costs. This post shows how to build both a data and configuration cache using DynamoDB, Parameter Store, and Secrets Manager.

    To set up the walkthrough demo in this post, visit the GitHub repo, and follow the instructions in the README.md file.

    The extension uses a local configuration file to determine which values to cache, and retrieves the items from the external services. A Lambda function retrieves the values from the local cache using an HTTP request, without having to communicate with the external services directly. In this example, this results in an 80% reduction in function invocation time.

    For more serverless learning resources, visit https://serverlessland.com.

Scaling Neuroscience Research on AWS

Post Syndicated from Konrad Rokicki original https://aws.amazon.com/blogs/architecture/scaling-neuroscience-research-on-aws/

HHMI’s Janelia Research Campus in Ashburn, Virginia has an integrated team of lab scientists and tool-builders who pursue a small number of scientific questions with potential for transformative impact. To drive science forward, we share our methods, results, and tools with the scientific community.

Introduction

Our neuroscience research application involves image searches that are computationally intensive but have unpredictable and sporadic usage patterns. The conventional on-premises approach is to purchase a powerful and expensive workstation, install and configure specialized software, and download the entire dataset to local storage. With 16 cores, a typical search of 50,000 images takes 30 seconds. A serverless architecture using AWS Lambda allows us to do this job in seconds for a few cents per search, and is capable of scaling to larger datasets.

Parallel Computation in Neuroscience Research

Basic research in neuroscience is often conducted on fruit flies. This is because their brains are small enough to study in a meaningful way with current tools, but complex enough to produce sophisticated behaviors. Conducting such research nonetheless requires an immense amount of data and computational power. Janelia Research Campus developed the NeuronBridge tool on AWS to accelerate scientific discovery by scaling computation in the cloud.

Color Depth Search Example fly brains

Figure 1: A “mask image” (on the left) is compared to many different fly brains (on the right) to find matching neurons. (Janelia Research Campus)

The fruit fly (Drosophila melanogaster) has about 100,000 neurons and its brain is highly stereotyped. This means that the brain of one fruit fly is similar to the next one. Using electron microscopy (EM), the FlyEM project has reconstructed a wiring diagram of a fruit fly brain. This connectome includes the structure of the neurons and the connectivity between them. But EM is only half of the picture. Once scientists know the structure and connectivity, they must perform experiments to find what purpose the neurons serve.

Flies can be genetically modified to reproducibly express a fluorescent protein in certain neurons, causing those neurons to glow under a light microscope (LM). By iterating through many modifications, the FlyLight project has created a vast genetic driver library. This allows scientists to target individual neurons for experiments. For example, blocking a particular neuron of a fly from functioning, and then observing its behavior, allows a scientist to understand the function of that neuron. Through the course of many such experiments, scientists are currently uncovering the function of entire neuronal circuits.

We developed NeuronBridge, a tool available for use by neuroscience researchers around the world, to bridge the gap between the EM and LM data. Scientists can start with EM structure and find matching fly lines in LM. Or they may start with a fly line and find the corresponding neuronal circuits in the EM connectome.

Both EM and LM produce petabytes of 3D images. Image processing and machine learning algorithms are then applied to discern neuron structure. We also developed a computational shortcut called color depth MIP to represent depth as color. This technique compresses large 3D image stacks into smaller 2D images that can be searched efficiently.

Image search is an embarrassingly parallel problem ideally suited to parallelization with simple functions. In a typical search, the scientist will create a “mask image,” which is a color depth image featuring only the neuron they want to find. The search algorithm must then compare this image to hundreds of thousands of other images. The paradigm of launching many short-lived cloud workers, termed burst-parallel compute, was originally suggested by a group at UCSD. To scale NeuronBridge, we decided to build a serverless AWS-native implementation of burst-parallel image search.

The Architecture

Our main reason for using a serverless approach was that our usage patterns are unpredictable and sporadic. The total number of researchers who are likely to use our tool is not large, and only a small fraction of them will need the tool at any given time. Furthermore, our tool could go unused for weeks at a time, only to get a flood of requests after a new dataset is published. A serverless architecture allows us to cope with this unpredictable load. We can keep costs low by only paying for the compute time we actually use.

One challenge of implementing a burst-parallel architecture is that each Lambda invocation requires a network call, with the ensuing network latency. Spawning several thousand functions from a single manager function can take many seconds. The trick to minimizing this latency is to parallelize these calls by recursively spawning concurrent managers in a tree structure. Each leaf in this tree spawns a set of worker functions to do the work of searching the imagery. Each worker reads a small batch of images from Amazon Simple Storage Service (S3). These are then compared to the mask image, and the intermediate results are written to Amazon DynamoDB, a serverless NoSQL database.
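A minimal Python sketch of that recursive fan-out, using asynchronous Lambda invocations, might look like the following. The function names, batch sizes, and payload fields are illustrative and not taken from the NeuronBridge code.

import json
import boto3

lambda_client = boto3.client("lambda")

BRANCH_FACTOR = 10      # child managers spawned per manager (illustrative)
LEAF_BATCH_SIZE = 100   # images handled by each worker (illustrative)

def dispatch(manager_function, start, end, payload):
    # Recursively split the key range until it is small enough for a worker.
    if end - start <= LEAF_BATCH_SIZE:
        lambda_client.invoke(
            FunctionName="search-worker",   # hypothetical worker function
            InvocationType="Event",         # asynchronous, fire-and-forget
            Payload=json.dumps({**payload, "start": start, "end": end}),
        )
        return
    step = max(LEAF_BATCH_SIZE, (end - start) // BRANCH_FACTOR)
    for s in range(start, end, step):
        lambda_client.invoke(
            FunctionName=manager_function,  # recurse into a child manager
            InvocationType="Event",
            Payload=json.dumps({**payload, "start": s, "end": min(s + step, end)}),
        )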

Serverless architecture for burst-parallel search

Figure 2: Serverless architecture for burst-parallel search

Search state is monitored by an AWS Step Functions state machine, which checks DynamoDB once per second. When all the results are ready, the Step Functions state machine runs another Lambda function to combine and sort the results. The state machine addresses error conditions and timeouts, and updates the browser when the asynchronous search is complete. We opted to use AWS AppSync to notify a React web client, providing an interactive user experience while remaining entirely serverless.

As we scaled to 3,000 concurrent Lambda functions reading from our data bucket, we reached Amazon S3’s limit of 5,500 GETs per second per prefix. The fix was to create numbered prefix folders and then randomize our key list. Each worker could then search a random list of images across many prefixes. This change distributed the load from our highly parallel functions across a number of S3 shards, and allowed us to run with much higher parallelism.
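In practice, this kind of sharding can be as simple as writing objects under numbered prefixes and shuffling the key list before it is split among workers. A hedged sketch, with an illustrative shard count:

import random

NUM_PREFIXES = 100  # illustrative shard count

def sharded_key(image_id):
    # Place each object under a numbered prefix to spread S3 request load.
    return f"{hash(image_id) % NUM_PREFIXES:02d}/{image_id}.png"

def randomized_batches(keys, batch_size):
    # Shuffle keys so each worker reads across many prefixes, not just one.
    keys = list(keys)
    random.shuffle(keys)
    return [keys[i:i + batch_size] for i in range(0, len(keys), batch_size)]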

We also addressed cold-start latency. Infrequently used Lambda functions take longer to start than recently used ones, and our unpredictable usage patterns meant that we were experiencing many cold starts. In our Java functions, we found that most of the cold-start time was attributed to JVM initialization and class loading. Although many mitigations for this exist, our worker logic was small enough that rewriting the code to use Node.js was the obvious choice. This immediately yielded a huge improvement, reducing cold starts from 8-10 seconds down to 200 ms.

With all of these insights, we developed a general-purpose parallel computation framework called burst-compute. This AWS-native framework runs as a serverless application to implement this architecture. It allows you to massively scale your own custom worker functions and combiner functions. We used this new framework to implement our image search.

Conclusion

The burst-parallel architecture is a powerful new computation paradigm for scientific computing. It takes advantage of the enormous scale and technical innovation of the AWS Cloud to provide near-interactive on-demand compute without expensive hardware maintenance costs. As scientific computing capability matures for the cloud, we expect this kind of large-scale parallel computation to continue becoming more accessible. In the future, the cloud could open doors to entirely new types of scientific applications, visualizations, and analysis tools.

We would like to express our thanks to AWS Solutions Architects Scott Glasser and Ray Chang, for their assistance with design and prototyping, and to Geoffrey Meissner for reviewing drafts of this write-up. 

Source Code

All of the application code described in this article is open source and licensed for reuse:

The data and imagery are shared publicly on the Registry of Open Data on AWS.

New Synchronous Express Workflows for AWS Step Functions

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/new-synchronous-express-workflows-for-aws-step-functions/

Today, AWS is introducing Synchronous Express Workflows for AWS Step Functions. This is a new way to run Express Workflows to orchestrate AWS services at high-throughput.

Developers have been using asynchronous Express Workflows since December 2019 for workloads that require higher event rates and shorter durations. Customers were looking for ways to receive an immediate response from their Express Workflows without having to write additional code or introduce additional services.

What’s new?

Synchronous Express Workflows allow developers to quickly receive the workflow response without needing to poll additional services or build a custom solution. This is useful for high-volume microservice orchestration and fast compute tasks that communicate via HTTPS.

Getting started

You can build and run Synchronous Express Workflows using the AWS Management Console, the AWS Serverless Application Model (AWS SAM), the AWS Cloud Development Kit (AWS CDK), AWS CLI, or AWS CloudFormation.

To create Synchronous Express Workflows from the AWS Management Console:

  1. Navigate to the Step Functions console and choose Create State machine.
  2. Choose Author with code snippets. Choose Express.
    This generates a sample workflow definition that you can change once the workflow is created.
  3. Choose Next, then choose Create state machine. It may take a moment for the workflow to deploy.

Starting Synchronous Express Workflows

When starting an Express Workflow, a new Type parameter is required. To start a synchronous workflow from the AWS Management Console:

  1. Navigate to the Step Functions console.
  2. Choose an Express Workflow from the list.
  3. Choose Start execution.

    Here you have an option to run the Express Workflow as a synchronous or asynchronous type.
  4. Choose Synchronous and choose Start execution.

  5. Expand Details in the results message to view the output.

Monitoring, logging and tracing

Enable logging to inspect and debug Synchronous Express Workflows. All execution history is sent to CloudWatch Logs. Use the Monitoring and Logging tabs in the Step Functions console to gain visibility into Express Workflow executions.

The Monitoring tab shows six graphs with CloudWatch metrics for Execution Errors, Execution Succeeded, Execution Duration, Billed Duration, Billed Memory, and Executions Started. The Logging tab shows recent logs and the logging configuration, with a link to CloudWatch Logs.

Enable X-Ray tracing to view trace maps and timelines of the underlying components that make up a workflow. This helps to discover performance issues, detect permission problems, and track requests made to and from other AWS services.

Creating an example workflow

The following example uses Amazon API Gateway HTTP APIs to start an Express Workflow synchronously. The workflow analyses web form submissions for negative sentiment. It generates a case reference number and saves the data in an Amazon DynamoDB table. The workflow returns the case reference number and message sentiment score.

  1. The API endpoint is generated by an API Gateway HTTP APIs. A POST request is made to the API which invokes the workflow. It contains the contact form’s message body.
  2. The message sentiment is analyzed by Amazon Comprehend.
  3. The Lambda function generates a case reference number, which is recorded in the DynamoDB table.
  4. The workflow choice state branches based on the detected sentiment.
  5. If a negative sentiment is detected, a notification is sent to an administrator via Amazon Simple Email Service (SES).
  6. When the workflow completes, it returns a ticketID to API Gateway.
  7. API Gateway returns the ticketID in the API response.

The code for this application can be found in this GitHub repository. Three important files define the application and its resources:

Deploying the application

Clone the GitHub repository and deploy with the AWS SAM CLI:

$ git clone https://github.com/aws-samples/contact-form-processing-with-synchronous-express-workflows.git
$ cd contact-form-processing-with-synchronous-express-workflows 
$ sam build 
$ sam deploy -g

This deploys 12 resources, including a Synchronous Express Workflow, three Lambda functions, an API Gateway HTTP API endpoint, and all the AWS Identity & Access Management (IAM) roles and permissions required for the application to run.

Note the HTTP APIs endpoint and workflow ARN outputs.

Testing Synchronous Express Workflows:

A new StartSyncExecution AWS CLI command is used to run the synchronous Express Workflow:

aws stepfunctions start-sync-execution \
--state-machine-arn <your-workflow-arn> \
--input "{\"message\" : \"This is bad service\"}"

The response is received once the workflow completes. It contains the workflow output (sentiment and ticketid), the executionARN, and some execution metadata.
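The equivalent call from application code uses the same StartSyncExecution API. For example, with Python and boto3:

import json
import boto3

sfn = boto3.client("stepfunctions")

response = sfn.start_sync_execution(
    stateMachineArn="<your-workflow-arn>",
    input=json.dumps({"message": "This is bad service"}),
)
print(response["status"], response.get("output"))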

Starting the workflow from HTTP API Gateway:

The application deploys an API Gateway HTTP API, with a Step Functions integration. This is configured in the api.yaml file. It starts the state machine with the POST body provided as the input payload.

Trigger the workflow with a POST request, using the API HTTP API endpoint generated from the deploy step. Enter the following CURL command into the terminal:

curl --location --request POST '<YOUR-HTTP-API-ENDPOINT>' \
--header 'Content-Type: application/json' \
--data-raw '{"message":" This is bad service"}'

The POST request returns a 200 status response. The output field of the response contains the sentiment results (negative) and the generated ticketId (jc4t8i).

Putting it all together

You can use this application to power a web form backend to help expedite customer complaints. In the following example, a frontend application submits form data via an AJAX POST request. The application waits for the response, and presents the user with a message appropriate to the detected sentiment, and a case reference number.

If a negative sentiment is returned in the API response, the user is informed of their case number:

Setting IAM permissions

Before a user or service can start a Synchronous Express Workflow, it must be granted permission to perform the states:StartSyncExecution API operation. This is a new state-machine level permission. Existing Express Workflows can be run synchronously once the correct IAM permissions for StartSyncExecution are granted.

The example application applies this to a policy within the HttpApiRole in the AWS SAM template. This role is added to the HTTP API integration within the api.yaml file.

Conclusion

Step Functions Synchronous Express Workflows allow developers to receive a workflow response without having to poll additional services. This helps developers orchestrate microservices without needing to write additional code to handle errors, retries, and run parallel tasks. They can be invoked in response to events such as HTTP requests via API Gateway, from a parent state machine, or by calling the StartSyncExecution API action.

This feature is available in all Regions where AWS Step Functions is available. View the AWS Regions table to learn more.

For more serverless learning resources, visit Serverless Land.

Building a Scalable Document Pre-Processing Pipeline

Post Syndicated from Joel Knight original https://aws.amazon.com/blogs/architecture/building-a-scalable-document-pre-processing-pipeline/

In a recent customer engagement, Quantiphi, Inc., a member of the Amazon Web Services Partner Network, built a solution capable of pre-processing tens of millions of PDF documents before sending them for inference by a machine learning (ML) model. While the customer’s use case—and hence the ML model—was very specific to their needs, the pipeline that does the pre-processing of documents is reusable for a wide array of document processing workloads. This post will walk you through the pre-processing pipeline architecture.

Pre-processing pipeline architecture-SM

Architectural goals

Quantiphi established the following goals prior to starting:

  • Loose coupling to enable independent scaling of compute components, flexible selection of compute services, and agility as the customer’s requirements evolved.
  • Work backwards from business requirements when making decisions affecting scale and throughput and not simply because “fastest is best.” Scale components only where it makes sense and for maximum impact.
  •  Log everything at every stage to enable troubleshooting when something goes wrong, provide a detailed audit trail, and facilitate cost optimization exercises by identifying usage and load of every compute component in the architecture.

Document ingestion

The documents are initially stored in a staging bucket in Amazon Simple Storage Service (Amazon S3). The processing pipeline is kicked off when the “trigger” AWS Lambda function is called. This Lambda function passes parameters such as the name of the staging S3 bucket and the path(s) within the bucket which are to be processed to the “ingestion app.”

The ingestion app is a simple application that runs a web service to enable triggering a batch and lists documents from the S3 bucket path(s) received via the web service. As the app processes the list of documents, it feeds the document path, S3 bucket name, and some additional metadata to the “ingest” Amazon Simple Queue Service (Amazon SQS) queue. The ingestion app also starts the audit trail for the document by writing a record to the Amazon Aurora database. As the document moves downstream, additional records are added to the database. Records are joined together by a unique ID and assigned to each document by the ingestion app and passed along throughout the pipeline.
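A simplified Python sketch of that hand-off is shown below. The queue URL and message fields are illustrative, and the audit-trail write to Amazon Aurora is omitted.

import json
import uuid
import boto3

sqs = boto3.client("sqs")
INGEST_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest"  # illustrative

def enqueue_document(bucket, key):
    # Assign the unique ID that joins the audit-trail records downstream,
    # then push one document onto the ingest queue.
    doc_id = str(uuid.uuid4())
    sqs.send_message(
        QueueUrl=INGEST_QUEUE_URL,
        MessageBody=json.dumps({"id": doc_id, "bucket": bucket, "key": key}),
    )
    return doc_id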

Chunking the documents

In order to maximize grip and control, the architecture is built to submit single-page files to the ML model. This enables correlating an inference failure to a specific page instead of a whole document (which may be many pages long). It also makes identifying the location of features within the inference results an easier task. Since the documents being processed can have varied sizes, resolutions, and page count, a big part of the pre-processing pipeline is to chunk a document up into its component pages prior to sending it for inference.

The “chunking orchestrator” app repeatedly pulls a message from the ingest queue and retrieves the document named therein from the S3 bucket. The PDF document is then classified along two metrics:

  • File size
  • Number of pages

We use these metrics to determine which chunking queue the document is sent to:

  • Large: Greater than 10MB in size or greater than 10 pages
  • Small: Less than or equal to 10MB and less than or equal to 10 pages
  • Single page: Less than or equal to 10MB and exactly one page

Each of these queues is serviced by an appropriately sized compute service that breaks the document down into smaller pieces, and ultimately, into individual pages.
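A minimal sketch of the orchestrator's routing decision, assuming the file size and page count have already been extracted from the PDF and using illustrative queue names, might look like this:

TEN_MB = 10 * 1024 * 1024

def choose_queue(size_bytes, page_count):
    # Map a document's size and page count to the matching chunking queue.
    if size_bytes > TEN_MB or page_count > 10:
        return "large-documents-queue"
    if page_count == 1:
        return "single-page-queue"
    return "small-documents-queue"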

  • Amazon Elastic Compute Cloud (Amazon EC2) processes large documents primarily because of the high memory footprint needed to read large, multi-gigabyte PDF files into memory. The output from these workers is smaller PDF documents that are stored in Amazon S3. The name and location of these smaller documents are submitted to the “small documents” queue.
  • Small documents are processed by a Lambda function that decomposes the document into single pages that are stored in Amazon S3. The name and location of these single page files is sent to the “single page” queue.

The Dead Letter Queues (DLQs) are used to hold messages from their respective size queue which are not successfully processed. If messages start landing in the DLQs, it’s an indication that there is a problem in the pipeline. For example, if messages start landing in the “small” or “single page” DLQ, it could indicate that the Lambda function processing those respective queues has reached its maximum run time.

An Amazon CloudWatch Alarm monitors the depth of each DLQ. Upon seeing DLQ activity, a notification is sent via Amazon Simple Notification Service (Amazon SNS) so an administrator can then investigate and make adjustments such as tuning the sizing thresholds to ensure the Lambda functions can finish before reaching their maximum run time.

In order to ensure no documents are left behind in the active run, there is a failsafe in the form of an Amazon EC2 worker that retrieves and processes messages from the DLQs. This failsafe app breaks a PDF all the way down into individual pages and then does image conversion.

For documents that don’t fall into a DLQ, they make it to the “single page” queue. This queue drives each page through the “image conversion” Lambda function which converts the single page file from PDF to PNG format. These PNG files are stored in Amazon S3.

Sending for inference

At this point, the documents have been chunked up and are ready for inference.

When the single-page image files land in Amazon S3, an S3 Event Notification is fired which places a message in a “converted image” SQS queue which in turn triggers the “model endpoint” Lambda function. This function calls an API endpoint on an Amazon API Gateway that is fronting the Amazon SageMaker inference endpoint. Using API Gateway with SageMaker endpoints avoided throttling during Lambda function execution due to high volumes of concurrent calls to the Amazon SageMaker API. This pattern also resulted in a 2x inference throughput speedup. The Lambda function passes the document’s S3 bucket name and path to the API which in turn passes it to the auto scaling SageMaker endpoint. The function reads the inference results that are passed back from API Gateway and stores them in Amazon Aurora.
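A hedged Python sketch of that function is shown below; the environment variable, request format, and persistence helper are placeholders rather than the customer's actual implementation.

import json
import os
import urllib.request

API_URL = os.environ["INFERENCE_API_URL"]   # assumed: API Gateway fronting SageMaker

def handler(event, context):
    # Messages arrive from the "converted image" SQS queue.
    for record in event["Records"]:
        body = json.loads(record["body"])
        req = urllib.request.Request(
            API_URL,
            data=json.dumps({"bucket": body["bucket"], "key": body["key"]}).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            inference = json.load(resp)
        save_results(body["key"], inference)

def save_results(key, inference):
    # Placeholder: the real function persists results to the Amazon Aurora database.
    pass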

The inference results as well as all the telemetry collected as the document was processed can be queried from the Amazon Aurora database to build reports showing number of documents processed, number of documents with failures, and number of documents with or without whatever feature(s) the ML model is trained to look for.

Summary

This architecture is able to take PDF documents that range in size from single page up to thousands of pages or gigabytes in size, pre-process them into single page image files, and then send them for inference by a machine learning model. Once triggered, the pipeline is completely automated and is able to scale to tens of millions of pages per batch.

In keeping with the architectural goals of the project, Amazon SQS is used throughout to build a loosely coupled system that promotes agility, scalability, and resiliency. Loose coupling also gives a high degree of flexibility and control over the system, making it easier to respond to changes in business needs and to focus tuning efforts where they have the most impact. And with every compute component logging everything it does, the system provides a high degree of auditability and introspection, which facilitates performance monitoring and detailed cost optimization.

Building well-architected serverless applications: Understanding application health – part 2

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-understanding-application-health-part-2/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and an explanation of the example application.

Question OPS1: How do you evaluate your serverless application’s health?

This post continues part 1 of this Operational Excellence question. Previously, I covered using Amazon CloudWatch out-of-the-box standard metrics and alerts, and structured and centralized logging with the Embedded Metrics Format.

Best practice: Use application, business, and operations metrics

Identifying key performance indicators (KPIs), including business, customer, and operations outcomes, in addition to application metrics, helps to show a higher-level view of how the application and the business are performing.

Business KPIs measure your application performance against business goals. For example, if fewer flight reservations are flowing through the system, the business would like to know.

Customer experience KPIs highlight the overall effectiveness of how customers use the application over time. Examples are perceived latency, time to find a booking, make a payment, etc.

Operational metrics help to see how operationally stable the application is over time. Examples are continuous integration/delivery/deployment feedback time, mean-time-between-failure/recovery, number of on-call pages and time to resolution, etc.

Custom Metrics

Embedded Metrics Format can also emit custom metrics to help understand your workload health’s impact on business.

The airline booking service uses AWS Step Functions, AWS Lambda, Amazon SNS, and Amazon DynamoDB.

In the confirm booking module function handler, I add a new namespace and dimension to associate this set of logs with this application and service.

metrics.set_namespace("ServerlessAirlineEMF")
metrics.put_dimensions({"service":"confirm_booking"})

Within the try block, I emit a metric for a successful booking:

metrics.put_metric("BookingSuccessful", 1, "Count")

And within the except block, I emit a metric for a failed booking:

metrics.put_metric("BookingFailed", 1, "Count")
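
Putting these pieces together, a simplified version of the handler might look like the following; the actual booking logic is elided and the confirm_booking call is a placeholder.

from aws_embedded_metrics import metric_scope

@metric_scope
def lambda_handler(event, context, metrics):
    metrics.set_namespace("ServerlessAirlineEMF")
    metrics.put_dimensions({"service": "confirm_booking"})
    try:
        result = confirm_booking(event)  # placeholder for the real booking logic
        metrics.put_metric("BookingSuccessful", 1, "Count")
        return result
    except Exception:
        metrics.put_metric("BookingFailed", 1, "Count")
        raise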

Once I make a booking, within the CloudWatch console, I navigate to Logs | Log groups and select the Airline-ConfirmBooking-develop group. I select a log stream and find the custom metric as part of the structured log entry.

Custom metric structured log entry

I can also create a custom metric graph. Within the CloudWatch console, I navigate to Metrics. I see the ServerlessAirlineEMF Custom Namespace is available.

Custom metric namespace

I select the Namespace and the available metric.

Available metric

I select a Line graph, add a name, and see the successful booking now plotted on the graph.

Plotted CloudWatch metric

I can also visualize and analyze this metric using CloudWatch Logs Insights.

Once a booking is made, within the CloudWatch console, I navigate to Logs | Insights. I select the /aws/lambda/Airline-ConfirmBooking-develop log group. I choose Run query which shows a list of discovered fields on the right of the console.

I can search for discovered booking related fields.

I then enter the following in the query pane to search the logs and plot the sum of BookingReference, and choose Run query:

fields @timestamp, @message
| stats sum(@BookingReference)

CloudWatch Insights query

There are a number of other component metrics that are important to measure. Track interactions between upstream and downstream components such as message queue length, integration latency, and throttling.

Improvement plan summary:

  1. Identify user journeys and metrics from each customer transaction.
  2. Create custom metrics asynchronously instead of synchronously for improved performance, cost, and reliability outcomes.
  3. Emit business metrics from within your workload to measure application performance against business goals.
  4. Create and analyze component metrics to measure interactions with upstream and downstream components.
  5. Create and analyze operational metrics to assess the health of your continuous delivery pipeline and operational processes.

Good practice: Use distributed tracing and code is instrumented with additional context

Logging provides information on the individual point in time events the application generates. Tracing provides a wider continuous view of an application. Tracing helps to follow a user journey or transaction through the application.

AWS X-Ray is an example of a distributed tracing solution; there are a number of third-party options as well. X-Ray collects data about the requests that your application serves and also builds a service graph, which shows all the components of an application. This provides visibility to understand how upstream/downstream services may affect workload health.

For the most comprehensive view, enable X-Ray across as many services as possible and include X-Ray tracing instrumentation in code. This is the list of AWS Services integrated with X-Ray.

X-Ray receives data from services as segments. Segments for a common request are then grouped into traces. Segments can be further broken down into more granular subsegments. Custom data key-value pairs are added to segments and subsegments with annotations and metadata. Traces can also be grouped which helps with filter expressions.

AWS Lambda instruments incoming requests for all supported languages. Lambda application code can be further instrumented to emit information about its status, correlation identifiers, and business outcomes to determine transaction flows across your workload.

X-Ray tracing for a Lambda function is enabled in the Lambda Console. Select a Lambda function. I select the Airline-ReserveBooking-develop function. In the Configuration pane, I select to enable X-Ray Active tracing.

X-Ray tracing enabled

X-Ray can also be enabled via CloudFormation with the following code:

TracingConfig:
  Mode: Active

Lambda IAM permissions to write to X-Ray are added automatically when active tracing is enabled via the console. When using CloudFormation, allow the actions xray:PutTraceSegments and xray:PutTelemetryRecords in the function’s execution role.

It is important to understand which invocations X-Ray traces. X-Ray applies a sampling algorithm. If an upstream service, such as API Gateway with X-Ray tracing enabled, has already sampled a request, the Lambda function request is also sampled. Without an upstream request, X-Ray traces data for the first Lambda invocation each second, and then 5% of additional invocations.

For the airline application, X-Ray tracing is initiated within the shared library with the code:

from aws_xray_sdk.core import models, patch_all, xray_recorder

Segments, subsegments, annotations, and metadata are added to functions with the following example code:

segment = xray_recorder.begin_segment('segment_name')
# Start a subsegment
subsegment = xray_recorder.begin_subsegment('subsegment_name')
# Add metadata and annotations
segment.put_metadata('key', dict, 'namespace')
subsegment.put_annotation('key', 'value')
# Close the subsegment and segment
xray_recorder.end_subsegment()
xray_recorder.end_segment()

For example, within the collect payment module, an annotation is added for a successful payment with:

tracer.put_annotation("PaymentStatus", "SUCCESS")

CloudWatch ServiceLens

Once a booking is made and payment is successful, the tracing is available in the X-Ray console.

I explore how Amazon CloudWatch ServiceLens connects metrics, logs, and X-Ray traces. Within the CloudWatch console, I navigate to ServiceLens | Service Map.

I can visualize all application resources and dependencies where X-Ray is enabled. I can trace performance or availability issues. If there was an issue connecting to SNS for example, this would be shown.

I select the Airline-CollectPayment-develop node and can view the out-of-the-box standard Lambda metrics.

I can select View Logs to jump to the CloudWatch Logs Insights console.

CloudWatch Insights Service map

I select View dashboard to see the function metrics, node map, and function details.

CloudWatch Insights Service Map dashboard

I select View traces and can filter by the custom annotation PaymentStatus. I select SUCCESS and choose Add to filter. I then select a trace.

CloudWatch Insights Filtered traces

I see the full trace details, which show the full application transaction of a payment collection.

Segments timeline

Selecting the Lambda handler subsegment – ## lambda_handler, I can view the trace Annotations and Metadata, which include the business transaction details such as Customer and PaymentStatus.

X-Ray annotations

Trace groups are another feature of X-Ray and ServiceLens. Trace groups use filter expressions such as Annotation.PaymentStatus = "FAILED" which are used to view traces that match the particular group. Service graphs can also be viewed, and CloudWatch alarms created based on the group.

CloudWatch ServiceLens provides powerful capabilities to understand application performance bottlenecks and issues, helping determine how users are impacted.

Improvement plan summary:

  1. Identify common business context and system data that are commonly present across multiple transactions.
  2. Instrument SDKs and requests to upstream/downstream services to understand the flow of a transaction across system components.

Recent announcements

There have been a number of recent announcements for X-Ray and CloudWatch to improve how to evaluate serverless application health.

Conclusion

Evaluating application health helps you identify which services should be optimized to improve your customer’s experience. In part 1, I cover out-of-the-box standard metrics and alerts, as well as structured and centralized logging. In this post, I explore custom metrics and distributed tracing and show how to use ServiceLens to view logs, metrics, and traces together.

In an upcoming post, I will cover the next Operational Excellence question from the Well-Architected Serverless Lens – Approaching application lifecycle management.

Building well-architected serverless applications: Understanding application health – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-understanding-application-health-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and an explanation of the example application.

Question OPS1: How do you evaluate your serverless application’s health?

Evaluating your metrics, distributed tracing, and logging gives you insight into business and operational events, and helps you understand which services should be optimized to improve your customer’s experience. By understanding the health of your Serverless Application, you will know whether it is functioning as expected, and can proactively react to any signals that indicate it is becoming unhealthy.

Required practice: Understand, analyze, and alert on metrics provided out of the box

It is important to understand metrics for every AWS service used in your application so you can decide how to measure its behavior. AWS services provide a number of out-of-the-box standard metrics to help monitor the operational health of your application.

Because these metrics are generated automatically, they are a simple way to start monitoring your application, and they can be augmented with custom metrics.

The first stage is to identify which services the application uses. The airline booking component uses AWS Step Functions, AWS Lambda, Amazon SNS, and Amazon DynamoDB.

When I make a booking, as shown in the Introduction post, AWS services emit metrics to Amazon CloudWatch. These are processed asynchronously without impacting the application’s performance.

There are two default CloudWatch dashboards to visualize key metrics quickly: per service and cross service.

Per service

To view the per service metrics dashboard, I open the CloudWatch console.

I select a service where Overview is shown, such as Lambda. Now I can view the metrics for all Lambda functions in the account.

Cross service

To see an overview of key metrics across all AWS services, open the CloudWatch console and choose View cross service dashboard.

I see a list of all services with one or two key metrics displayed. This provides a good overview of all services your application uses.

Alerting

The next stage is to identify the key metrics for comparison and set up alerts for under- and over-performing services. Here are some recommended metrics to alarm on for a number of AWS services.

Alerts can be configured manually or via infrastructure as code tools such as the AWS Serverless Application Model, AWS CloudFormation, or third-party tools.

To configure a manual alert for Lambda function errors using CloudWatch Alarms:

  1. I open the CloudWatch console, select Alarms, and then select Create Alarm.
  2. I choose Select metric. From AWS Namespaces, I select Lambda, Across All Functions, then select Errors and choose Select metric.
  3. I change the Statistic to Sum and the Period to 1 minute.
  4. Under Conditions, I select a Static threshold Greater than 1 and select Next.

Alarms can also be created using anomaly detection rather than static values if there is a discernible pattern or trend. Anomaly detection looks at past metric data and uses machine learning to create a model of expected values. Alerts can then be configured if they fall outside this band of “normal” values. I use a Static threshold for this alarm.

  1. For the notification, I configure the alarm to send to an existing SNS topic subscribed with my email address, then choose Next.
  2. I enter a descriptive alarm name such as serverlessairline-lambda-prod-errors > 1, select Next, and choose Create alarm.

I have now manually set up an alarm.

Use CloudWatch composite alarms to combine multiple alarms to reduce noise and focus on critical issues. For example, a single alarm could trigger if there are both Lambda function errors as well as high Lambda concurrent executions.
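
As a sketch, a composite alarm combining two existing alarms could be created with boto3 as follows; the concurrency alarm name and the SNS topic ARN are illustrative examples, not values from the airline application.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Page the on-call only when both underlying alarms are in the ALARM state.
cloudwatch.put_composite_alarm(
    AlarmName="serverlessairline-lambda-prod-critical",
    AlarmRule='ALARM("serverlessairline-lambda-prod-errors > 1") '
              'AND ALARM("serverlessairline-lambda-prod-concurrency")',
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:oncall-alerts"],  # example topic
)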

It is simpler and more scalable to include alerting within infrastructure as code. Here is an example of alerting programmatically using CloudFormation.

I view the out of the box standard metrics and in this example, manually create an alarm for Lambda function errors.

Improvement plan summary:

  1. Understand what metrics and dimensions each managed service used provides.
  2. Configure alerts on relevant metrics for when services are unhealthy.

Good practice: Use structured and centralized logging

Central logging provides a single place to search and analyze logs. Structured logging means selecting a consistent log format and content structure to simplify querying across multiple components.

To identify a business transaction across components, such as a particular flight booking, log operational information from upstream and downstream services. Add information such as customer_id along with business outcomes such as order=accepted or order=confirmed. Make sure you are not logging any sensitive or personal identifying data in any logs.

Use JSON as your logging output format. Log multiple fields in a single object or dictionary rather than many one line messages for simpler searching.

Here is an example of a structured logging format.
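
As an illustration only (the field names below are examples, not the airline application's exact schema), a structured entry emitted from Python might look like this:

import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def log_booking_confirmed(customer_id, booking_id, booking_reference):
    # Emit one JSON object per event so every field is queryable later.
    logger.info(json.dumps({
        "service": "booking",
        "operation": "confirm_booking",
        "customer_id": customer_id,
        "booking_id": booking_id,
        "booking_reference": booking_reference,
        "order": "confirmed",
    }))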

The airline booking component, which is written in Python, currently uses a shared library with a separate log processing stack.

Embedded Metrics Format is a simpler mechanism to replace the shared library and use structured logging. CloudWatch Embedded Metrics adds environmental metadata such as Lambda Function version and also automatically extracts custom metrics so you can visualize and alarm on them. There are open-source client libraries available for Node.js and Python.

I then add embedded metrics to the individual confirm booking module with the following steps:

  1. I install the aws-embedded-metrics library using the instructions.
  2. In the function init code, I import the module and create a metric_scope with the following code

from aws_embedded_metrics import metric_scope
@metric_scope

  1. In the function handler, I log the generated bookingReference with the following code.

metrics.set_property("BookingReference", ret["bookingReference"])

In this example I also log the entire incoming event details.

metrics.set_property("event", event)

It is best practice to only log what is required to avoid unnecessary costs. Ensure the event does not have any sensitive or personal identifying data which is available to anyone who has access to the logs.

To avoid the duplicate logging in this example airline application which adds cost, I remove the existing shared library logger.*() lines.

When I make a booking, the CloudWatch log message is in structured JSON format. It contains the properties I set, event and BookingReference, as well as the function metadata.

I can then search for all log activity related to a specific booking across multiple functions with booking_id. I can track customer activity across multiple bookings using customer_id.

Logging is often created as a shared library resource which all functions reference. Another option is using Lambda Layers, which lets functions import additional code such as external libraries. Multiple functions can share this code.

Improvement plan summary:

  1. Log request identifiers from downstream services, component name, component runtime information, unique correlation identifiers, and information that helps identify a business transaction.
  2. Use JSON as the logging output. Prefer logging entire objects/dictionaries rather than many one line messages. Mask or remove sensitive data when logging.
  3. Minimize debug logging, as it can both incur costs and increase the noise-to-signal ratio.

Conclusion

Evaluating serverless application health helps understand which services should be optimized to improve your customer’s experience. I cover out of the box metrics and alerts, as well as structured and centralized logging.

This well-architected question will be continued in an upcoming post where I look at custom metrics and distributed tracing.

AWS Lambda now supports Ruby 2.7

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/aws-lambda-now-supports-ruby-2-7/

You can now develop your AWS Lambda functions using Ruby 2.7. Start using this runtime today by specifying a runtime parameter value of ruby2.7 when creating or updating Lambda functions.

New Ruby runtime features

Ruby 2.7 is a stable release and brings several new features, including pattern matching, argument forwarding, and numbered arguments.

Pattern matching

Pattern matching, a widely used feature in functional programming languages, is introduced as a new experimental feature. This allows deep matching of structured values, checking the structure and binding the matched parts to local variables.

It can traverse a given object and bind its values to local variables if it matches a pattern:

require "json"
 
json = <<END
{
  "name": "Alice",
  "age": 30,
  "children": [{ "name": "Bob", "age": 2 }]
}
END
 
case JSON.parse(json, symbolize_names: true)
in {name: "Alice", children: [{name: "Bob", age: age}]}
  p age #=> 2
end

For additional information on pattern matching, see Feature #14912.

Argument forwarding

Prior to Ruby 2.7, the * and ** operators were available for positional and keyword arguments. These are used to accept any number of arguments or to convert arrays or hashes into several arguments.

Ruby 2.7 added a new shorthand syntax ... for forwarding all arguments to a method irrespective of type. In the example below, all arguments to foo are forwarded to bar, including keyword and block arguments. It acts similarly to calling super without any arguments.

def foo(...)
  bar(...)
end

Numbered arguments

Numbered arguments allow you to reference block arguments solely by their index. They are only valid when referenced inside of a block:

Before:

[1, 2, 3].each { |i| puts i }

After:

[1, 2, 3].each { puts _1 }

This can make short code blocks easier to read and reduce code repetition.

Amazon Linux 2

Ruby 2.7, like Python 3.8, Node.js 10 and 12, and Java 11, is based on an Amazon Linux 2 execution environment. Amazon Linux 2 provides a secure, stable, and high-performance execution environment to develop and run cloud and enterprise applications.

Next steps

Get started building with Ruby 2.7 today by specifying a runtime parameter value of ruby2.7 when creating your Lambda functions. You can read about the Ruby programming model in the AWS Lambda documentation to learn more about writing functions in Ruby 2.7.

For existing Ruby functions, migrate to the new runtime by making any necessary changes to the code for compatibility with Ruby 2.7, then changing the function’s runtime configuration to ruby2.7.

Enjoy, go build with Ruby!

Building an AWS IoT Core device using AWS Serverless and an ESP32

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-an-aws-iot-core-device-using-aws-serverless-and-an-esp32/

Using a simple Arduino sketch, an AWS Serverless Application Repository application, and a microcontroller, you can build a basic serverless workflow for communicating with an AWS IoT Core device.

A microcontroller is a programmable chip and acts as the brain of an electronic device. It has input and output pins for reading and writing on digital or analog components. Those components could be sensors, relays, actuators, or various other devices. It can be used to build remote sensors, home automation products, robots, and much more. The ESP32 is a powerful low-cost microcontroller with Wi-Fi and Bluetooth built in, and is used in this walkthrough.

The Arduino IDE, a lightweight development environment for hardware, now includes support for the ESP32. There is a large collection of community and officially supported libraries, from addressable LED strips to spectral light analysis.

The following walkthrough demonstrates connecting an ESP32 to AWS IoT Core to allow it to publish and subscribe to topics. This means that the device can send any arbitrary information, such as sensor values, into AWS IoT Core while also being able to receive commands.

Solution overview

This post walks through deploying an application from the AWS Serverless Application Repository. This allows an AWS IoT device to be messaged using a REST endpoint powered by Amazon API Gateway and AWS Lambda. The AWS SAR application also configures an AWS IoT rule that forwards any messages published by the device to a Lambda function that updates an Amazon DynamoDB table, demonstrating basic bidirectional communication.

The last section explores how to build an IoT project with real-world application. By connecting a thermal printer module and modifying a few lines of code in the example firmware, the ESP32 device becomes an AWS IoT–connected printer.

All of this can be accomplished within the AWS Free Tier, which is all that is necessary for the following instructions.

An example of an AWS IoT project using an ESP32, AWS IoT Core, and an Arduino thermal printer.

Required steps

To complete the walkthrough, follow these steps:

  • Create an AWS IoT device.
  • Install and configure the Arduino IDE.
  • Configure and flash an ESP32 IoT device.
  • Deploy the lambda-iot-rule AWS SAR application.
  • Monitor and test.
  • Create an IoT thermal printer.

Creating an AWS IoT device

To communicate with the ESP32 device, it must connect to AWS IoT Core with device credentials. You must also specify the topics it has permissions to publish and subscribe on.

  1. In the AWS IoT console, choose Register a new thing, Create a single thing.
  2. Name the new thing. Use this exact name later when configuring the ESP32 IoT device. Leave the remaining fields set to their defaults. Choose Next.
  3.  Choose Create certificate. Only the thing cert, private key, and Amazon Root CA 1 downloads are necessary for the ESP32 to connect. Download and save them somewhere secure, as they are used when programming the ESP32 device.
  4. Choose Activate, Attach a policy.
  5. Skip adding a policy, and choose Register Thing.
  6. In the AWS IoT console side menu, choose Secure, Policies, Create a policy.
  7. Name the policy Esp32Policy. Choose the Advanced tab.
  8. Paste in the following policy template.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "iot:Connect",
          "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:client/THINGNAME"
        },
        {
          "Effect": "Allow",
          "Action": "iot:Subscribe",
          "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:topicfilter/esp32/sub"
        },
    	{
          "Effect": "Allow",
          "Action": "iot:Receive",
          "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:topic/esp32/sub"
        },
        {
          "Effect": "Allow",
          "Action": "iot:Publish",
          "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:topic/esp32/pub"
        }
      ]
    }
  9. Replace REGION with the matching AWS Region you’re currently operating in. This can be found on the top right corner of the AWS console window.
  10.  Replace ACCOUNT_ID with your own, which can be found in Account Settings.
  11. Replace THINGNAME with the name of your device.
  12. Choose Create.
  13. In the AWS IoT console, choose Secure, Certification. Select the one created for your device and choose Actions, Attach policy.
  14. Choose Esp32Policy, Attach.

Your AWS IoT device is now configured to have permission to connect to AWS IoT Core. It can also publish to the topic esp32/pub and subscribe to the topic esp32/sub. For more information on securing devices, see AWS IoT Policies.

Installing and configuring the Arduino IDE

The Arduino IDE is an open-source development environment for programming microcontrollers. It supports a continuously growing number of platforms including most ESP32-based modules. It must be installed along with the ESP32 board definitions, MQTT library, and ArduinoJson library.

  1. Download the Arduino installer for the desired operating system.
  2. Start Arduino and open the Preferences window.
  3. For Additional Board Manager URLs, add
    https://dl.espressif.com/dl/package_esp32_index.json.
  4. Choose Tools, Board, Boards Manager.
  5. Search esp32 and install the latest version.
  6. Choose Sketch, Include Library, Manage Libraries.
  7. Search MQTT, and install the latest version by Joel Gaehwiler.
  8. Repeat the library installation process for ArduinoJson.

The Arduino IDE is now installed and configured with all the board definitions and libraries needed for this walkthrough.

Configuring and flashing an ESP32 IoT device

A collection of various ESP32 development boards.

For this section, you need an ESP32 device. To check if your board is compatible with the Arduino IDE, see the boards.txt file. The following code connects to AWS IoT Core securely using MQTT, a publish and subscribe messaging protocol.

This project has been tested on the following devices:

  1. Install the required serial drivers for your device. Some boards use different USB/FTDI chips for interfacing. Here are the most commonly used with links to drivers.
  2. Open the Arduino IDE and choose File, New to create a new sketch.
  3. Add a new tab and name it secrets.h.
  4. Paste the following into the secrets file.
    #include <pgmspace.h>
    
    #define SECRET
    #define THINGNAME ""
    
    const char WIFI_SSID[] = "";
    const char WIFI_PASSWORD[] = "";
    const char AWS_IOT_ENDPOINT[] = "xxxxx.amazonaws.com";
    
    // Amazon Root CA 1
    static const char AWS_CERT_CA[] PROGMEM = R"EOF(
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
    )EOF";
    
    // Device Certificate
    static const char AWS_CERT_CRT[] PROGMEM = R"KEY(
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
    )KEY";
    
    // Device Private Key
    static const char AWS_CERT_PRIVATE[] PROGMEM = R"KEY(
    -----BEGIN RSA PRIVATE KEY-----
    -----END RSA PRIVATE KEY-----
    )KEY";
  5. Enter the name of your AWS IoT thing, as it is in the console, in the field THINGNAME.
  6. To connect to Wi-Fi, add the SSID and PASSWORD of the desired network. Note: The network name should not include spaces or special characters.
  7. The AWS_IOT_ENDPOINT can be found from the Settings page in the AWS IoT console.
  8. Copy the Amazon Root CA 1, Device Certificate, and Device Private Key to their respective locations in the secrets.h file.
  9. Choose the tab for the main sketch file, and paste the following.
    #include "secrets.h"
    #include <WiFiClientSecure.h>
    #include <MQTTClient.h>
    #include <ArduinoJson.h>
    #include "WiFi.h"
    
    // The MQTT topics that this device should publish/subscribe
    #define AWS_IOT_PUBLISH_TOPIC   "esp32/pub"
    #define AWS_IOT_SUBSCRIBE_TOPIC "esp32/sub"
    
    WiFiClientSecure net = WiFiClientSecure();
    MQTTClient client = MQTTClient(256);
    
    void connectAWS()
    {
      WiFi.mode(WIFI_STA);
      WiFi.begin(WIFI_SSID, WIFI_PASSWORD);
    
      Serial.println("Connecting to Wi-Fi");
    
      while (WiFi.status() != WL_CONNECTED){
        delay(500);
        Serial.print(".");
      }
    
      // Configure WiFiClientSecure to use the AWS IoT device credentials
      net.setCACert(AWS_CERT_CA);
      net.setCertificate(AWS_CERT_CRT);
      net.setPrivateKey(AWS_CERT_PRIVATE);
    
      // Connect to the MQTT broker on the AWS endpoint we defined earlier
      client.begin(AWS_IOT_ENDPOINT, 8883, net);
    
      // Create a message handler
      client.onMessage(messageHandler);
    
      Serial.print("Connecting to AWS IOT");
    
      while (!client.connect(THINGNAME)) {
        Serial.print(".");
        delay(100);
      }
    
      if(!client.connected()){
        Serial.println("AWS IoT Timeout!");
        return;
      }
    
      // Subscribe to a topic
      client.subscribe(AWS_IOT_SUBSCRIBE_TOPIC);
    
      Serial.println("AWS IoT Connected!");
    }
    
    void publishMessage()
    {
      StaticJsonDocument<200> doc;
      doc["time"] = millis();
      doc["sensor_a0"] = analogRead(0);
      char jsonBuffer[512];
      serializeJson(doc, jsonBuffer); // print to client
    
      client.publish(AWS_IOT_PUBLISH_TOPIC, jsonBuffer);
    }
    
    void messageHandler(String &topic, String &payload) {
      Serial.println("incoming: " + topic + " - " + payload);
    
    //  StaticJsonDocument<200> doc;
    //  deserializeJson(doc, payload);
    //  const char* message = doc["message"];
    }
    
    void setup() {
      Serial.begin(9600);
      connectAWS();
    }
    
    void loop() {
      publishMessage();
      client.loop();
      delay(1000);
    }
  10. Choose File, Save, and give your project a name.

Flashing the ESP32

  1. Plug the ESP32 board into a USB port on the computer running the Arduino IDE.
  2. Choose Tools, Board, and then select the matching type of ESP32 module. In this case, a Sparkfun ESP32 Thing was used.
  3. Choose Tools, Port, and then select the matching port for your device.
  4. Choose Upload. Arduino reads Done uploading when the upload is successful.
  5. Choose the magnifying lens icon to open the Serial Monitor. Set the baud rate to 9600.

Keep the Serial Monitor open. When connected to Wi-Fi and then AWS IoT Core, any messages received on the topic esp32/sub are logged to this console. The device is also now publishing to the topic esp32/pub.

The topics are set at the top of the sketch. When changing or adding topics, remember to add permissions in the device policy.

// The MQTT topics that this device should publish/subscribe
#define AWS_IOT_PUBLISH_TOPIC   "esp32/pub"
#define AWS_IOT_SUBSCRIBE_TOPIC "esp32/sub"

Within this sketch, the relevant functions are publishMessage() and messageHandler().

The publishMessage() function creates a JSON object with the current time in milliseconds and the analog value of pin A0 on the device. It then publishes this JSON object to the topic esp32/pub.

void publishMessage()
{
  StaticJsonDocument<200> doc;
  doc["time"] = millis();
  doc["sensor_a0"] = analogRead(0);
  char jsonBuffer[512];
  serializeJson(doc, jsonBuffer); // print to client

  client.publish(AWS_IOT_PUBLISH_TOPIC, jsonBuffer);
}

The messageHandler() function prints out the topic and payload of any message from a subscribed topic. To see all the ways to parse JSON messages in Arduino, see the deserializeJson() example.

void messageHandler(String &topic, String &payload) {
  Serial.println("incoming: " + topic + " - " + payload);

//  StaticJsonDocument<200> doc;
//  deserializeJson(doc, payload);
//  const char* message = doc["message"];
}

Additional topic subscriptions can be added within the connectAWS() function by adding another line similar to the following.

// Subscribe to a topic
  client.subscribe(AWS_IOT_SUBSCRIBE_TOPIC);

  Serial.println("AWS IoT Connected!");

Deploying the lambda-iot-rule AWS SAR application

Now that an ESP32 device has been connected to AWS IoT, the following steps walk through deploying an AWS Serverless Application Repository application. This is a base for building serverless integration with a physical device.

  1. On the lambda-iot-rule AWS Serverless Application Repository application page, make sure that the Region is the same as the AWS IoT device.
  2. Choose Deploy.
  3. Under Application settings, for PublishTopic, enter esp32/sub. This is the topic to which the ESP32 device is subscribed. It receives messages published to this topic. Likewise, set SubscribeTopic to esp32/pub, the topic on which the device publishes.
  4. Choose Deploy.
  5. When creation of the application is complete, choose Test app to navigate to the application page. Keep this page open for the next section.

Monitoring and testing

At this stage, two Lambda functions, a DynamoDB table, and an AWS IoT rule have been deployed. The IoT rule forwards messages on topic esp32/pub to TopicSubscriber, a Lambda function, which inserts the messages on to the DynamoDB table.

  1. On the application page, under Resources, choose MyTable. This is the DynamoDB table that the TopicSubscriber Lambda function updates.
  2. Choose Items. If the ESP32 device is still active and connected, messages that it has published appear here.

The TopicPublisher Lambda function is invoked by the API Gateway endpoint and publishes to the AWS IoT topic esp32/sub.

  1. On the application page, find the Application endpoint.

  2. To test that the TopicPublisher function is working, enter the following into a terminal or command-line utility, replacing ENDPOINT with the URL from above.

curl -d '{"text":"Hello world!"}' -H "Content-Type: application/json" -X POST https://ENDPOINT/publish

Upon success, the request returns a copy of the message.

Back in the Serial Monitor, the message published to the topic esp32/sub prints out.

Creating an IoT thermal printer

With the completion of the previous steps, the ESP32 device currently logs incoming messages to the serial console.

The following steps demonstrate how the code can be modified to use incoming messages to interact with a peripheral component. This is done by wiring a thermal printer to the ESP32 in order to physically print messages. The REST endpoint from the previous section can be used as a webhook in third-party applications to interact with this device.

A wiring diagram depicting an ESP32 connected to a thermal printer.

  1. Follow the product instructions for powering, wiring, and installing the correct Arduino library.
  2. Ensure that the thermal printer is working by holding the power button on the printer while connecting the power. A sample receipt prints. On that receipt, the default baud rate is specified as either 9600 or 19200.
  3. In the Arduino code from earlier, include the following lines at the top of the main sketch file. The second line defines what interface the thermal printer is connected to. &Serial2 is used to set the third hardware serial interface on the ESP32. For this example, the pins on the Sparkfun ESP32 Thing, GPIO16/GPIO17, are used for RX/TX respectively.
    #include "Adafruit_Thermal.h"
    
    Adafruit_Thermal printer(&Serial2);
  4. Replace the setup() function with the following to initialize the printer on device bootup. Change the baud rate of Serial2.begin() to match what is specified in the test print. The default is 19200.
    void setup() {
      Serial.begin(9600);
    
      // Start the thermal printer
      Serial2.begin(19200);
      printer.begin();
      printer.setSize('S');
    
      connectAWS();
    }
    
  5. Replace the messageHandler() function with the following. On any incoming message, it parses the JSON and prints the message on the thermal printer.
    void messageHandler(String &topic, String &payload) {
      Serial.println("incoming: " + topic + " - " + payload);
    
      // deserialize json
      StaticJsonDocument<200> doc;
      deserializeJson(doc, payload);
      String message = doc["message"];
    
      // Print the message on the thermal printer
      printer.println(message);
      printer.feed(2);
    }
  6. Choose Upload.
  7. After the firmware has successfully uploaded, open the Serial Monitor to confirm that the board has connected to AWS IoT.
  8. Enter the following into a command-line utility, replacing ENDPOINT, as in the previous section.
    curl -d '{"message": "Hello World!"}' -H "Content-Type: application/json" -X POST https://ENDPOINT/publish

If successful, the device prints out the message “Hello World” from the attached thermal printer. This is a fully serverless IoT printer that can be triggered remotely from a webhook. As an example, this can be used with GitHub Webhooks to print a physical readout of events.

Conclusion

Using a simple Arduino sketch, an AWS Serverless Application Repository application, and a microcontroller, this post demonstrated how to build a basic serverless workflow for communicating with a physical device. It also showed how to expand that into an IoT thermal printer with real-world applications.

With the use of AWS serverless, advanced compute and extensibility can be added to an IoT device, from machine learning to translation services and beyond. By using the Arduino programming environment, the vast collection of open-source libraries, projects, and code examples open up a world of possibilities. The next step is to explore what can be done with an Arduino and the capabilities of AWS serverless. The sample Arduino code for this project and more can be found at this GitHub repository.

A simpler deployment experience with AWS SAM CLI

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/a-simpler-deployment-experience-with-aws-sam-cli/

The AWS Serverless Application Model (SAM) CLI provides developers with a local tool for managing serverless applications on AWS. The command line tool allows developers to initialize and configure applications, debug locally using IDEs like Visual Studio Code or JetBrains WebStorm, and deploy to the AWS Cloud.

On November 25, we announced improvements to the deployment process using the SAM CLI. These improvements allow users to deploy serverless applications with less manual setup, fewer repeated steps, and shorter CLI commands.

To install the latest version of the AWS SAM CLI, please refer to the installation section of the AWS SAM page.

What’s new?

Amazon S3 bucket management

Previously, developers had to manually create and manage an Amazon S3 bucket to host deployment artifacts for each desired Region. With this latest release, the SAM CLI automatically creates a Region-specific bucket via AWS CloudFormation, based on your local AWS credentials. If you deploy an application to a Region where no bucket exists, a new managed bucket is created in the new Region.

Minimized deployment commands

Before this update, a minimal deployment process would look like this:

sam package --s3-bucket my-regional-bucket --output-template-file out.yaml
sam deploy --template-file out.yaml --capabilities CAPABILITY_IAM --stack-name MyStackName

This series of commands was required at every deployment. With this latest update to SAM CLI, the package and deployment commands have been combined. The syntax is now:

sam deploy

The guided deployment

How does SAM CLI know where to deploy and what to name the application? The answer to this is found in the “guided deployment.” This is an interactive version of the deployment process that collects and saves information needed to deploy the application.

If sam deploy is running and cannot find the required information for deployment, the process errors out, recommending that the guided deployment process be run. To use the guided process:

sam deploy -g or --guided

SAM guided deploy

Once the information is collected, it is saved in the application as the samconfig.toml file. Subsequent calls to sam deploy use the existing data to deploy. If you update a setting between deployments, run the sam deploy -g command again to update the stored values.

Frequently asked questions

How many buckets are created?

When you run the sam deploy -g command with provided values, SAM checks the account for an existing SAM deployment bucket in that Region. This Regional bucket is created via CloudFormation by SAM as an artifact repository for all applications for the current account in the current Region. For a root level account, there is only a single bucket per Region that contains deployed SAM serverless applications.

What if the Region is changed for the application?

If you change the Region in samconfig.toml before running sam deploy, the process errors out because the selected deployment Region does not match the artifacts bucket Region stored in the samconfig.toml file. The error also occurs if you use the --region flag with a Region that is different from the Region in the samconfig.toml file. To change the Region for a deployment, use the sam deploy -g option to update the Region. SAM verifies that a bucket for the new Region exists, or creates one automatically.

What if the samconfig.toml file is deleted?

If the samconfig.toml file is deleted, SAM treats the application as new. We recommend that you use the -g flag to reconfigure the application.

What about backwards compatibility?

If you are using SAM for a non-interactive deployment, it is possible to pass all required information as parameters. For example, for a continuous integration/continuous delivery (CICD) pipeline:

SAM deploy values

This same deployment is achieved using the older process with the following commands:

sam package --s3-bucket aws-sam-cli-managed-default-samclisourcebucket-xic3fipuh9n9 --output-template-file out.yaml
sam deploy --template-file out.yaml --capabilities CAPABILITY_IAM --stack-name sam-app --region us-west-2

The package command still exists in the latest version of SAM CLI for backwards compatibility with existing CICD processes.

Updated user experience

Along with a streamlined process for deploying applications, the new version of the SAM CLI brings an improved user interface. This provides developers with more feedback and validation choices. First, during the deployment process, all deployment parameters are displayed:

SAM deploy values

Once the changeset is created, the developer is presented with all the proposed changes.

SAM change-set report

Developers also have the option to confirm the changes, or cancel the deployment. This option is a setting in the samconfig.toml file that can be turned on or off as needed.

SAM change-set prompt

As the changeset is applied, the console displays the changes being made in the AWS Cloud.

SAM deploy status

Finally, the resulting output is displayed.

Conclusion

By streamlining the deployment process, removing the need to manage an S3 bucket, and providing clear deployment feedback and data, the latest version of the SAM CLI makes serverless development easier for developers.

Happy coding and deploying!

This Is My Architecture: Mobile Cryptocurrency Mining

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/this-is-my-architecture-mobile-cryptocurrency-mining/

In North America, approximately 95% of adults over the age of 25 have a bank account. In the developing world, that number is only about 52%. Cryptocurrencies can provide a platform for millions of unbanked people in the world to achieve financial freedom on a more level financial playing field.

Electroneum, a cryptocurrency company located in England, built its cryptocurrency mobile back end on AWS and is using the power of blockchain to unlock the global digital economy for millions of people in the developing world.

Electroneum’s cryptocurrency mobile app allows Electroneum customers in developing countries to transfer ETN (the Electroneum cryptocurrency token) and pay for goods using their smartphones. Listen in to the discussion between AWS Solutions Architect Toby Knight and Electroneum CTO Barry Last as they explain how the company built its solution. Electroneum’s app is a web application that uses a feedback loop between its web servers and AWS WAF (a web application firewall) to automatically block malicious actors. The system then uses Athena, with a gamified approach, to provide an additional layer of blocking to prevent DDoS attacks. Finally, Electroneum built a serverless, instant payments system using AWS API Gateway, AWS Lambda, and Amazon DynamoDB to help its customers avoid the usual delays in confirming cryptocurrency transactions.

 

How to automate replication of secrets in AWS Secrets Manager across AWS Regions

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-automate-replication-of-secrets-in-aws-secrets-manager-across-aws-regions/

Assume that you make snapshot copies or read-replicas of your RDS databases in a secondary or backup AWS Region as a best practice. By using AWS Secrets Manager, you can store your RDS database credentials securely using AWS KMS Customer Master Keys, otherwise known as CMKs. AWS Key Management Service (AWS KMS) ensures secrets are encrypted at rest. With the integration of AWS Lambda, you can now more easily rotate these credentials regularly and replicate them for disaster recovery situations. This automation keeps credentials stored in AWS Secrets Manager for Amazon Relational Database Service (Amazon RDS) in sync between the origin Region, where your RDS database lives, and the replica Region where your read-replicas live. While using the same credentials for all databases is not ideal, it can make recovery quicker in a disaster recovery situation.

In this post, I show you how to set up secret replication using an AWS CloudFormation template to create an AWS Lambda Function. By replicating secrets across AWS Regions, you can reduce the time required to get back up and running in production in a disaster recovery situation by ensuring your credentials are securely stored in the replica Region as well.

Solution overview

The solution described in this post uses a combination of AWS Secrets Manager, AWS CloudTrail, Amazon CloudWatch Events, and AWS Lambda. You create a secret in Secrets Manager that houses your RDS database credentials. This secret is encrypted using AWS KMS. Lambda automates the replication of the secret’s value in your origin AWS Region by performing a PUT operation on a secret of the same name in the same AWS Region as your read-replica. CloudWatch Events ensures that each time the secret housing your AWS RDS database credentials is rotated, it triggers the Lambda function to copy the secret’s value to your read-replica Region. By doing this, your RDS database credentials are always in sync for recovery.

Note: You might incur charges for using the services used in this solution, including Lambda. For information about potential costs, see the AWS pricing page.

The following diagram illustrates the process covered in this post.
 

Figure 1: Process diagram

This process assumes you have already created a secret housing your RDS database credentials in your main AWS Region and configured your CloudTrail logs to send to CloudWatch Logs. Once this is complete, the replication steps are as follows:

  1. Secrets Manager rotates a secret in your original AWS Region.
  2. CloudTrail receives a log with “eventName”: “RotationSucceeded”.
  3. CloudTrail passes this log to CloudWatch Events.
  4. A filter in CloudWatch Events for this EventName triggers a Lambda function.
  5. The Lambda function retrieves the secret value from the origin AWS Region.
  6. The Lambda function then performs PutSecretValue on a secret with the same name in the replica AWS Region.

The Lambda function is triggered by a CloudWatch Event passed by CloudTrail. The triggering event is raised whenever a secret successfully rotates, which creates a CloudTrail log with the EventName property set to RotationSucceeded. You will know the secret rotation was successful when it has the label AWSCURRENT. You can read more about the secret labels and how they change during the rotation process here. The Lambda function retrieves the new secret, then calls PutSecretValue on a secret with the same name in the replica AWS Region. This AWS Region is specified by an environment variable inside the Lambda function.

Note: If the origin secret uses a customer-managed Customer Master Key (CMK), then the cloned secret must, as well. If the origin secret uses an AWS-managed CMK, then the cloned secret must, as well. You can’t mix them or the Lambda function will fail. AWS recommends you use customer-managed CMKs because you have full control of the permissions regarding which entities can use the CMK.

The CloudFormation template also creates an AWS Identity and Access Management (IAM) role with the required permissions to create and update secret replicas. Next, you’ll launch the CloudFormation template by using the AWS CloudFormation CLI.

Deploy the solution

Now that you understand how the Lambda function copies your secrets to the replica AWS Region, I’ll explain the commands to launch your CloudFormation stack. This stack creates the necessary resources to perform the automation. It includes the Lambda function, an IAM role, and the CloudWatch Event trigger.

First, make sure your CLI is configured with credentials for an IAM user or role that can launch all resources included in this template. To launch the template, run the command below, choosing a unique stack name that identifies its purpose and providing the URL of the template you uploaded to your own S3 bucket. For the following examples, I use US-EAST-1 as my origin Region and EU-WEST-1 as my replica Region. Make sure to replace these values and the other placeholders with actual values from your own account.


$ aws cloudformation create-stack --stack-name Replication_Stack --template-url S3_URL --parameters ParameterKey=ReplicaKmsKeyArn,ParameterValue=arn:aws:kms:eu-west-1:111122223333:key/Example_Key_ID_12345 ParameterKey=TargetRegion,ParameterValue=eu-west-1 --capabilities CAPABILITY_NAMED_IAM --region us-east-1

After the previous command is successful, you will see an output similar to the following with your CloudFormation Stack ARN:


$ {
    "StackId": "arn:aws:cloudformation:us-east-1:111122223333:stack/Replication_Stack/Example_additional_id_0123456789"
}

You can verify that the stack has completed successfully by running the following command:


$ aws cloudformation describe-stacks --stack-name Replication_Stack --region us-east-1

Verify that the StackStatus shows CREATE_COMPLETE. After you verify this, you’ll see three resources created in your account. The first is the IAM role, which allows the Lambda function to perform its API calls. The name of this role is SecretsManagerRegionReplicatorRole, and can be found in the IAM console under Roles. There are two policies attached to this role. The first policy is the managed permissions policy AWSLambdaBasicExecutionRole, which grants permissions for the Lambda function to write to AWS CloudWatch Logs. These logs will be used further on in the Event Rule creation, which will trigger the cloning of the origin secret to the replication region.

The second policy attached to the SecretsManagerRegionReplicatorRole role is an inline policy that grants permissions to decrypt and encrypt the secret in both your original AWS Region and in the replica AWS Region. This policy also grants permissions to retrieve the secret from the original AWS Region, and to store the secret in the replica AWS Region. You can see an example of this policy granting access to specific secrets below. Should you choose to use this policy, remember to replace the placeholder values with your own parameters.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "KMSPermissions",
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:Encrypt",
                "kms:GenerateDataKey"
            ],
            "Resource": [
          "arn:aws:kms:us-east-1:111122223333:key/Example_Key_ID_12345",
     "arn:aws:kms:eu-west-1:111122223333:key/Example_Key_ID_12345"
      ]
        },
        {
            "Sid": "SecretsManagerOriginRegion",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:DescribeSecret",
                "secretsmanager:GetSecretValue"
            ],
            "Resource": "arn:aws:secretsmanager:us-east-1:111122223333:secret:replica/myexamplereplica*"
        },
        {
            "Sid": "SecretsManagerReplicaRegion",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:CreateSecret",
                "secretsmanager:UpdateSecretVersionStage",
                "secretsmanager:PutSecretValue",
                "secretsmanager:DescribeSecret"
            ],
            "Resource": "arn:aws:secretsmanager:eu-west-1:111122223333:secret:replica/myexamplereplica*"
        }
    ]
}
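
If you want to confirm the role and both of its policies from a script rather than the IAM console, a short boto3 sketch like the one below can help. The role name comes from the template output described above; the policy names printed are whatever CloudFormation generated in your account.


# check_replicator_role.py - minimal sketch to confirm the IAM role and its policies
import boto3

ROLE_NAME = "SecretsManagerRegionReplicatorRole"  # role created by the template

iam = boto3.client("iam")

# Confirm the role exists
role = iam.get_role(RoleName=ROLE_NAME)["Role"]
print("Role ARN:", role["Arn"])

# Managed policies attached to the role (expect AWSLambdaBasicExecutionRole)
for policy in iam.list_attached_role_policies(RoleName=ROLE_NAME)["AttachedPolicies"]:
    print("Attached managed policy:", policy["PolicyName"])

# Inline policies defined on the role (the KMS/Secrets Manager permissions shown above)
for name in iam.list_role_policies(RoleName=ROLE_NAME)["PolicyNames"]:
    print("Inline policy:", name)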

The next resource created is the CloudWatch Events rule SecretsManagerCrossRegionReplicator. You can find this rule in the AWS CloudWatch console by selecting Rules under Events. Here’s the event pattern, matching the RotationSucceeded event name I described earlier, that triggers the Lambda function:


{
    "detail-type": [
        "AWS Service Event via CloudTrail"
    ],
    "source": [
        "aws.secretsmanager"
    ],
    "detail": {
        "eventSource": [
            "secretsmanager.amazonaws.com"
        ],
        "eventName": [
            "RotationSucceeded"
        ]
    }
}
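
If you want to sanity-check that this pattern matches a rotation event before waiting on a real rotation, the CloudWatch Events TestEventPattern API can evaluate the pattern against a sample event. The sketch below uses a simplified, hypothetical event body of my own construction; the real CloudTrail-sourced event carries many more fields, but the ones shown are what the pattern inspects.


# test_event_pattern.py - sketch: evaluate the rule pattern against a sample event
import json
import boto3

events = boto3.client("events", region_name="us-east-1")  # origin Region

# The pattern from the SecretsManagerCrossRegionReplicator rule above
pattern = {
    "detail-type": ["AWS Service Event via CloudTrail"],
    "source": ["aws.secretsmanager"],
    "detail": {
        "eventSource": ["secretsmanager.amazonaws.com"],
        "eventName": ["RotationSucceeded"]
    }
}

# Simplified, hypothetical sample event; a real event includes additional fields
sample_event = {
    "id": "00000000-0000-0000-0000-000000000000",
    "detail-type": "AWS Service Event via CloudTrail",
    "source": "aws.secretsmanager",
    "account": "111122223333",
    "time": "2019-01-01T00:00:00Z",
    "region": "us-east-1",
    "resources": [],
    "detail": {
        "eventSource": "secretsmanager.amazonaws.com",
        "eventName": "RotationSucceeded"
    }
}

result = events.test_event_pattern(
    EventPattern=json.dumps(pattern),
    Event=json.dumps(sample_event)
)
print("Pattern matches sample event:", result["Result"])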

The last resource created by the CloudFormation template is the Lambda function, which will do most of the actual work for the replication. After it’s created, you’ll be able to find this resource in your Lambda console with the name SecretsManagerRegionReplicator. You can download a copy of the Python script here, and you can see the full script below. In the function’s environment variables, you’ll also notice the parameter names and values you entered when launching your stack.


import boto3
from os import environ

# The replica Region is passed in through the Lambda function's environment variables
targetRegion = environ.get('TargetRegion')
if targetRegion is None:
    raise Exception('Environment variable "TargetRegion" must be set')

# One Secrets Manager client for the origin Region (where this function runs)
# and one for the replica Region
smSource = boto3.client('secretsmanager')
smTarget = boto3.client('secretsmanager', region_name=targetRegion)

def lambda_handler(event, context):
    # Invoked by the CloudWatch Events rule when CloudTrail records a
    # RotationSucceeded event for a secret in the origin Region
    detail = event['detail']

    print('Retrieving SecretArn from event data')
    secretArn = detail['additionalEventData']['SecretId']

    print('Retrieving new version of Secret "{0}"'.format(secretArn))
    newSecret = smSource.get_secret_value(SecretId = secretArn)

    secretName = newSecret['Name']
    currentVersion = newSecret['VersionId']

    replicaSecretExists = True
    print('Replicating secret "{0}" (Version {1}) to region "{2}"'.format(secretName, currentVersion, targetRegion))
    try:
        # put_secret_value raises ResourceNotFoundException if the secret
        # doesn't exist yet in the target Region
        smTarget.put_secret_value(
            SecretId = secretName,
            ClientRequestToken = currentVersion,
            SecretString = newSecret['SecretString']
        )
    except smTarget.exceptions.ResourceNotFoundException:
        print('Secret "{0}" does not exist in target region "{1}". Creating it now with default values'.format(secretName, targetRegion))
        replicaSecretExists = False
    except smTarget.exceptions.ResourceExistsException:
        # The same version (ClientRequestToken) already exists in the target,
        # so this must be a duplicate invocation
        print('Secret version "{0}" has already been created, this must be a duplicate invocation'.format(currentVersion))

    if not replicaSecretExists:
        secretMeta = smSource.describe_secret(SecretId = secretArn)
        # describe_secret omits KmsKeyId when the secret uses the default
        # aws/secretsmanager key, so use .get() rather than indexing directly
        if secretMeta.get('KmsKeyId') is not None:
            replicaKmsKeyArn = environ.get('ReplicaKmsKeyArn')
            if replicaKmsKeyArn is None:
                raise Exception('Cannot create replica of a secret that uses a custom KMS key unless the "ReplicaKmsKeyArn" environment variable is set. Alternatively, you can also create the secret manually in the replica region with the same name')

            smTarget.create_secret(
                Name = secretName,
                ClientRequestToken = currentVersion,
                KmsKeyId = replicaKmsKeyArn,
                SecretString = newSecret['SecretString'],
                Description = secretMeta.get('Description', '')
            )
        else:
            smTarget.create_secret(
                Name = secretName,
                ClientRequestToken = currentVersion,
                SecretString = newSecret['SecretString'],
                Description = secretMeta.get('Description', '')
            )
    else:
        # Ensure the AWSCURRENT staging label in the replica points at the newly
        # replicated version, and remove it from the previous version
        secretMeta = smTarget.describe_secret(SecretId = secretName)
        for previousVersion, labelList in secretMeta['VersionIdsToStages'].items():
            if 'AWSCURRENT' in labelList and previousVersion != currentVersion:
                print('Moving "AWSCURRENT" label from version "{0}" to new version "{1}"'.format(previousVersion, currentVersion))
                smTarget.update_secret_version_stage(
                    SecretId = secretName,
                    VersionStage = 'AWSCURRENT',
                    MoveToVersionId = currentVersion,
                    RemoveFromVersionId = previousVersion
                )
                break

    print('Secret {0} replicated successfully to region "{1}"'.format(secretName, targetRegion))

Now that your CloudFormation stack is complete and all necessary resources are set up, you are ready to begin secret replication. To verify that the setup works, you can trigger a rotation of the secret in the origin Region that houses your RDS database credentials. It will take a few moments for the secret’s staging label to return to AWSCURRENT, and then roughly another 5-15 minutes for CloudTrail to record the event and deliver it to CloudWatch Events. Once received, the event triggers the Lambda function, which completes the replication process. You can verify that the secret has replicated correctly by going to the Secrets Manager console in your replica Region, selecting the replicated secret, and viewing the values. If the values in the replicated secret are the same as those in the origin secret, and both labels are AWSCURRENT, you know replication completed successfully and your credentials will be ready if needed.
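
If you’d rather check from a script than in the console, a short boto3 sketch like the one below compares the current secret value in both Regions. The secret name here is the example name implied by the policy shown earlier in this post; substitute your own secret’s name.


# verify_replication.py - sketch: confirm the replica matches the origin secret
import boto3

SECRET_ID = "replica/myexamplereplica"  # example name from the policy above; replace with yours
ORIGIN_REGION = "us-east-1"
REPLICA_REGION = "eu-west-1"

origin = boto3.client("secretsmanager", region_name=ORIGIN_REGION)
replica = boto3.client("secretsmanager", region_name=REPLICA_REGION)

# get_secret_value returns the AWSCURRENT version by default
origin_value = origin.get_secret_value(SecretId=SECRET_ID)
replica_value = replica.get_secret_value(SecretId=SECRET_ID)

if origin_value["SecretString"] == replica_value["SecretString"]:
    print("Replication succeeded: secret values match in both Regions")
else:
    print("Secret values differ; replication may still be in progress")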

Summary

In this post, you learned how you can use AWS Lambda and Amazon CloudWatch Events to automate replication of your secrets in AWS Secrets Manager across AWS Regions. You used a CloudFormation template to create the necessary resources for the replication setup. The CloudWatch Events rule watches for the CloudTrail event that’s recorded when a secret rotation succeeds, and triggers the Lambda function, which pulls the secret name and value and replicates them to the AWS Region of your choice. Should a disaster occur, you’ll have increased your chances of a smooth recovery of your databases, and you’ll be back in production more quickly.

Author

Tracy Pierce

Tracy Pierce is a Senior Cloud Support Engineer at AWS. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.