Tag Archives: Amazon Cognito

Building well-architected serverless applications: Regulating inbound request rates – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-regulating-inbound-request-rates-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

Reliability question REL1: How do you regulate inbound request rates?

Defining, analyzing, and enforcing inbound request rates helps you achieve better throughput and adapt different scaling mechanisms based on customer demand. By regulating inbound request rates, you can match client request submissions to a request rate that your workload can support.

Required practice: Control inbound request rates using throttling

Throttle inbound request rates using steady-rate and burst rate requests

Throttling requests limits the number of requests a client can make during a certain period of time. Throttling allows you to control your API traffic. This helps your backend services maintain their performance and availability levels by limiting the number of requests to actual system throughput.

To prevent your API from being overwhelmed by too many requests, Amazon API Gateway throttles requests to your API. These limits are applied across all clients using the token bucket algorithm. API Gateway sets a limit on a steady-state rate and a burst of request submissions. The algorithm is based on an analogy of filling and emptying a bucket of tokens representing the number of available requests that can be processed.

Each API request removes a token from the bucket. The throttle rate then determines how many requests are allowed per second. The throttle burst determines how many concurrent requests are allowed. I explain the token bucket algorithm in more detail in “Building well-architected serverless applications: Controlling serverless API access – part 2”.
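To make the algorithm concrete, here is a minimal Python sketch of a token bucket, where the bucket size corresponds to the burst limit and the refill rate to the steady-state rate. It is illustrative only, not how API Gateway implements throttling internally.

import time

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate          # tokens added per second (steady-state rate)
        self.capacity = burst     # maximum bucket size (burst limit)
        self.tokens = burst
        self.last_refill = time.monotonic()

    def allow_request(self):
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the bucket capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1      # each request consumes one token
            return True
        return False              # bucket empty: the request is throttled (HTTP 429)

bucket = TokenBucket(rate=100, burst=200)
print(bucket.allow_request())     # True while tokens remain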

Token bucket algorithm

API Gateway limits the steady-state rate and burst requests per second. These are shared across all APIs per Region in an account. For further information on account-level throttling per Region, see the documentation. You can request account-level rate limit increases using the AWS Support Center. For more information, see Amazon API Gateway quotas and important notes.

You can configure your own throttling levels, within the account and Region limits, to improve overall performance across all APIs in your account. This restricts the overall request submissions so that they don’t exceed the account-level throttling limits.

You can also configure per-client throttling limits. Usage plans restrict client request submissions to within specified request rates and quotas. These are applied to clients using API keys that are associated with your usage plan as a client identifier. You can add throttling levels per API route, stage, or method that are applied in a specific order.
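As an illustration, the following Python (boto3) sketch creates a usage plan with per-client throttling and a daily quota, then associates an API key with it. The API ID, stage name, and limit values are placeholders.

import boto3

apigateway = boto3.client("apigateway")

# Create a usage plan with steady-state rate, burst, and daily quota limits
plan = apigateway.create_usage_plan(
    name="gold-tier",
    throttle={"rateLimit": 100.0, "burstLimit": 200},        # requests per second / burst
    quota={"limit": 10000, "period": "DAY"},                  # total requests allowed per day
    apiStages=[{"apiId": "<rest-api-id>", "stage": "prod"}],  # placeholder API and stage
)

# Create an API key for the client and attach it to the usage plan
key = apigateway.create_api_key(name="client-a", enabled=True)
apigateway.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId=key["id"],
    keyType="API_KEY",
)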

For more information on API Gateway throttling, see the AWS re:Invent presentation “I didn’t know Amazon API Gateway could do that”.

API Gateway throttling

You can also throttle requests by introducing a buffering layer using Amazon Kinesis Data Streams or Amazon SQS. Kinesis can limit the number of requests at the shard level while SQS can limit them at the consumer level. For more information on using SQS as a buffer with Amazon Simple Notification Service (SNS), read “How To: Use SNS and SQS to Distribute and Throttle Events”.
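As a rough sketch of the buffering approach, the following Python consumer drains an SQS queue at a pace it controls, regardless of how quickly producers publish to it. The queue URL and processing logic are placeholders.

import time
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-buffer"  # placeholder

def process(message):
    print("processing", message["MessageId"])  # placeholder for downstream work

while True:
    # Long poll for up to 10 messages; the consumer, not the producer, sets the pace
    response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for message in response.get("Messages", []):
        process(message)
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
    time.sleep(1)  # simple pacing to keep downstream requests within a supported rate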

Identify steady-rate and burst rate requests that your workload can sustain at any point in time before performance degrades

Load testing your serverless application allows you to monitor the performance of an application before it is deployed to production. Serverless applications can be simpler to load test, thanks to the automatic scaling built into many of the services. During a load test, you can identify quotas that may act as a limiting factor for the traffic you expect and take action.

Perform load testing for a sustained period of time. Gradually increase the traffic to your API to determine your steady-state rate of requests. Also use a burst strategy with no ramp up to determine the burst rates that your workload can serve without errors or performance degradation. There are a number of AWS Marketplace and AWS Partner Network (APN) solutions available for performance testing, such as Gatling Frontline, BlazeMeter, and Apica.
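If you want to experiment before reaching for a dedicated tool, the short Python sketch below sends a gradually increasing request rate followed by a burst against a placeholder endpoint. It is only a rough illustration of the two strategies, not a replacement for the tools above.

import concurrent.futures
import time
import urllib.request

ENDPOINT = "https://example.execute-api.us-east-1.amazonaws.com/prod/ping"  # placeholder

def call_api(_):
    try:
        with urllib.request.urlopen(ENDPOINT, timeout=10) as response:
            return response.status
    except Exception:
        return "error"

with concurrent.futures.ThreadPoolExecutor(max_workers=100) as pool:
    # Ramp: increase from 5 to 50 requests per step
    for rps in range(5, 55, 5):
        results = list(pool.map(call_api, range(rps)))
        print(f"ramp {rps} requests:", results.count("error"), "errors")
        time.sleep(1)

    # Burst: fire 200 requests at once with no ramp up
    results = list(pool.map(call_api, range(200)))
    print("burst:", results.count("error"), "errors")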

In the serverless airline example used in this series, you can run a performance test suite using Gatling, an open source tool.

To deploy the test suite, follow the instructions in the GitHub repository perf-tests directory. Uncomment the deploy.perftest line in the repository Makefile.

Perf-test makefile

Once the file is pushed to GitHub, AWS Amplify Console rebuilds the application, and deploys an AWS CloudFormation stack. You can run the load tests locally, or use an AWS Step Functions state machine to run the setup and Gatling load test simulation.

Performance test using Step Functions

The Gatling simulation script uses constantUsersPerSec and rampUsersPerSec to add users for a number of test scenarios. You can use the test to simulate load on the application. Once the tests complete, Gatling generates a downloadable report.

Gatling performance results

Artillery Community Edition is another open-source tool for testing serverless APIs. You configure the number of requests per second and overall test duration, and it uses a headless Chromium browser to run its test flows. For Artillery, the maximum number of concurrent tests is constrained by your local computing resources and network. To achieve higher throughput, you can use Serverless Artillery, which runs the Artillery package on Lambda functions. As a result, this tool can scale up to a significantly higher number of tests.

For more information on how to use Artillery, see “Load testing a web application’s serverless backend”. This runs tests against APIs in a demo application. For example, one of the tests fetches 50,000 questions per hour. This calls an API Gateway endpoint and tests whether the AWS Lambda function, which queries an Amazon DynamoDB table, can handle the load.

Artillery performance test

This is a synchronous API so the performance directly impacts the user’s experience of the application. This test shows that the median response time is 165 ms with a p95 time of 201 ms.

Performance test API results

Another consideration for API load testing is whether the authentication and authorization service can handle the load. For more information on load testing Amazon Cognito and API Gateway using Step Functions, see “Using serverless to load test Amazon API Gateway with authorization”.

API load testing with authentication and authorization

Conclusion

Regulating inbound requests helps you adapt different scaling mechanisms based on customer demand. You can achieve better throughput for your workloads and make them more reliable by controlling requests to a rate that your workload can support.

In this post, I cover controlling inbound request rates using throttling. I show how to use throttling to control steady-rate and burst rate requests. I show some solutions for performance testing to identify the request rates that your workload can sustain before performance degradation.

This well-architected question continues in part 2, where I look at using, analyzing, and enforcing API quotas, and cover mechanisms to protect non-scalable resources.

For more serverless learning resources, visit Serverless Land.

Architecting a Highly Available Serverless, Microservices-Based Ecommerce Site

Post Syndicated from Senthil Kumar original https://aws.amazon.com/blogs/architecture/architecting-a-highly-available-serverless-microservices-based-ecommerce-site/

The number of ecommerce vendors is growing globally, and they often handle large traffic at different times of the day and different days of the year. This, in addition to building, managing, and maintaining IT infrastructure in on-premises data centers, can present challenges to ecommerce businesses’ scalability and growth.

This blog post provides a serverless solution on AWS that offloads the undifferentiated heavy lifting of managing resources and ensures your business’s architecture can handle peak traffic.

Common architecture set up versus serverless solution

The following sections describe a common monolithic architecture and our suggested alternative approach: setting up microservices-based order submission and product search modules. These modules are independently deployable and scalable.

Typical monolithic architecture

Figure 1 shows how a typical on-premises ecommerce infrastructure with different tiers is set up:

  • Web servers serve static assets and proxy requests to application servers
  • Application servers process ecommerce business logic and authentication logic
  • Databases store user and other dynamic data
  • Firewall and load balancers provide network components for load balancing and network security

Figure 1. Monolithic on-premises ecommerce infrastructure with different tiers

Monolithic architecture tightly couples different layers of the application. This prevents them from being independently deployed and scaled.

Microservices-based modules

Order submission workflow module

This three-layer architecture can be set up in the AWS Cloud using serverless components:

  • Static content layer (Amazon CloudFront and Amazon Simple Storage Service (Amazon S3)). This layer stores static assets on Amazon S3. By using CloudFront in front of the S3 storage cache, you can deliver assets to customers globally with low latency and high transfer speeds.
  • Authentication layer (Amazon Cognito or customer proprietary layer). Ecommerce sites deliver authenticated and unauthenticated content to the user. With Amazon Cognito, you can manage users’ sign-up, sign-in, and access controls, so this authentication layer ensures that only authenticated users have access to secure data.
  • Dynamic content layer (AWS Lambda and Amazon DynamoDB). All business logic required for the ecommerce site is handled by the dynamic content layer. Using Lambda and DynamoDB ensures that these components are scalable and can handle peak traffic.

As shown in Figure 2, the order submission workflow is split into two sections: synchronous and asynchronous.

By splitting the order submission workflow, you allow users to submit their order details and get an orderId. This makes sure that they don’t have to wait for backend processing to complete. This helps unburden your architecture during peak shopping periods when the backend process can get busy.

Figure 2. Microservices-based order submission workflow

The details of the order, such as credit card information in encrypted form, shipping information, etc., are stored in DynamoDB. This action invokes an asynchronous workflow managed by AWS Step Functions.
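A minimal Python sketch of the synchronous half of this flow (the Lambda function behind the order submission API) is shown below. The table name, state machine ARN, and environment variables are assumptions, and the asynchronous workflow could equally be triggered from DynamoDB Streams instead of a direct start_execution call.

import json
import os
import uuid
import boto3

dynamodb = boto3.resource("dynamodb")
stepfunctions = boto3.client("stepfunctions")

ORDERS_TABLE = os.environ["ORDERS_TABLE"]              # assumption: table name passed as an env var
STATE_MACHINE_ARN = os.environ["STATE_MACHINE_ARN"]    # assumption: async workflow ARN passed as an env var

def handler(event, context):
    order = json.loads(event["body"])
    order_id = str(uuid.uuid4())

    # Persist the order synchronously so the caller gets an orderId immediately
    dynamodb.Table(ORDERS_TABLE).put_item(Item={"orderId": order_id, **order, "status": "SUBMITTED"})

    # Hand the long-running payment and shipping steps to the asynchronous Step Functions workflow
    stepfunctions.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        name=order_id,
        input=json.dumps({"orderId": order_id}),
    )

    return {"statusCode": 202, "body": json.dumps({"orderId": order_id})}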

Figure 3 shows a sample Step Functions workflow for the asynchronous process. In this scenario, you are using external payment processing and shipping systems. When both systems get busy, Step Functions can manage long-running transactions and the required retry logic. It uses a decision-based business workflow: if a payment transaction fails, the order can be canceled; once payment is successful, the order can proceed.

Amazon Simple Notification Service (Amazon SNS) notifies users whenever their order status changes. You can even extend the Step Functions workflow to react based on the shipping status.

Figure 3. Sample AWS Step Functions asynchronous workflow that uses external payment processing service and shipping system

Product search module

Our product search module is set up using the following serverless components:

  • Amazon Elasticsearch Service (Amazon ES) stores product data, which is updated whenever product-related data changes.
  • Lambda formats the data.
  • Amazon API Gateway allows users to search without authentication. As shown in Figure 4, searching for products on the ecommerce portal does not require users to log in. All traffic via API Gateway is unauthenticated.

Figure 4. Microservices-based product search workflow module with dynamic traffic through API Gateway

Replicating data across Regions

If your ecommerce application runs on multiple Regions, it may require the content and data to be replicated. This allows the application to handle local traffic from that Region and also act as a failover option if the application fails in another Region. The content and data are replicated using the multi-Region replication features of Amazon S3 and DynamoDB global tables.

Figure 5 shows a multi-Region ecommerce site built on AWS with serverless services. It uses the following features to keep data and assets that do not need data residency compliance in sync across all Regions:

  • Amazon S3 multi-Region replication keeps static assets in sync.
  • DynamoDB global tables keep dynamic data in sync across Regions.

Assets that are specific to their Region are stored in Region-specific buckets.
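The following Python (boto3) sketch shows one way to enable both replication mechanisms. The bucket names, table name, Regions, and the replication role ARN are placeholders.

import boto3

# Add a replica Region to an existing DynamoDB table (global tables version 2019.11.21)
dynamodb = boto3.client("dynamodb", region_name="us-east-1")
dynamodb.update_table(
    TableName="orders",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)

# Replicate static assets from the primary bucket to the secondary Region's bucket
s3 = boto3.client("s3")
s3.put_bucket_replication(
    Bucket="ecommerce-assets-us-east-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-static-assets",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::ecommerce-assets-eu-west-1"},
            }
        ],
    },
)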

Figure 5. Data replication for a multi-Region ecommerce website built using serverless components

Amazon Route 53 DNS web service manages traffic failover from one Region to another. Route 53 provides different routing policies, and depending on your business requirement, you can choose the failover routing policy.
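A hedged sketch of creating a failover record pair (primary and secondary alias records pointing at Regional endpoints) might look like the following. The hosted zone IDs, domain name, endpoint names, and health check ID are placeholders.

import boto3

route53 = boto3.client("route53")

def failover_record(role, dns_name, target_zone_id, health_check_id=None):
    record = {
        "Name": "shop.example.com",
        "Type": "A",
        "SetIdentifier": f"{role.lower()}-record",
        "Failover": role,  # PRIMARY or SECONDARY
        "AliasTarget": {
            "HostedZoneId": target_zone_id,  # hosted zone of the Regional endpoint
            "DNSName": dns_name,
            "EvaluateTargetHealth": True,
        },
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId="<public-hosted-zone-id>",
    ChangeBatch={"Changes": [
        failover_record("PRIMARY", "<primary-region-endpoint>", "<primary-endpoint-zone-id>", "<health-check-id>"),
        failover_record("SECONDARY", "<secondary-region-endpoint>", "<secondary-endpoint-zone-id>"),
    ]},
)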

Best practices

Now that we’ve shown you how to build these applications, make sure you follow these best practices to effectively build, deploy, and monitor the solution stack:

  • Infrastructure as Code (IaC). A well-defined, repeatable infrastructure is important for managing any solution stack. AWS CloudFormation allows you to treat your infrastructure as code and provides a relatively easy way to model a collection of related AWS and third-party resources.
  • AWS Serverless Application Model (AWS SAM). An open-source framework that you can use to build serverless applications on AWS.
  • Deployment automation. AWS CodePipeline is a fully managed continuous delivery service that automates your release pipelines for fast and reliable application and infrastructure updates.
  • AWS CodeStar. Allows you to quickly develop, build, and deploy applications on AWS. It provides a unified user interface, enabling you to manage all of your software development activities in one place.
  • AWS Well-Architected Framework. Provides a mechanism for regularly evaluating your workloads, identifying high risk issues, and recording your improvements.
  • Serverless Applications Lens. Documents how to design, deploy, and architect serverless application workloads.
  • Monitoring. AWS provides many services that help you monitor and understand your applications, including Amazon CloudWatch, AWS CloudTrail, and AWS X-Ray.

Conclusion

In this blog post, we showed you how to architect a highly available, serverless, and microservices-based ecommerce website that operates in multiple Regions.

We also showed you how to replicate data between Regions for scaling and for failover if your workload fails in one Region. These serverless services reduce the burden of building and managing physical IT infrastructure to help you focus more on building solutions.


Protect public clients for Amazon Cognito by using an Amazon CloudFront proxy

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/protect-public-clients-for-amazon-cognito-by-using-an-amazon-cloudfront-proxy/

In Amazon Cognito user pools, an app client is an entity that has permission to call unauthenticated API operations (that is, operations that don’t have an authenticated user), such as operations to sign up, sign in, and handle forgotten passwords. In this post, I show you a solution designed to protect these API operations from unwanted bots and distributed denial of service (DDoS) attacks.

To protect Amazon Cognito services and customers, Amazon Cognito applies request rate quotas on all API categories, and throttles rapid calls that exceed the assigned quota. For that reason, you must ensure your applications control who can call unauthenticated API operations and at what rate, so that user calls aren’t throttled because of unwanted or misconfigured clients that call these API operations at high rates.

App clients fall into one of two categories: public clients (used from web or mobile applications) and private or confidential clients (used from a secured backend). Public clients shouldn’t have secrets, because it isn’t possible to protect secrets in these types of clients. Confidential clients, on the other hand, use a secret to authorize calls to unauthenticated operations. In these clients, the secret can be protected in the backend.

The benefit of using a confidential app client with a secret in Amazon Cognito is that unauthenticated API operations will accept only the calls that include the secret hash for this client, and will drop calls with an invalid or missing secret. In this way, you control who calls these API operations. Public applications can use a confidential app client by implementing a lightweight proxy layer in front of the Amazon Cognito endpoint, and then using this proxy to add a secret hash in relevant requests before passing the requests to Amazon Cognito.
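For reference, the secret hash is a base64-encoded HMAC-SHA256 of the username concatenated with the client ID, keyed with the client secret. The Python sketch below computes it and passes it on an InitiateAuth call the way a confidential (backend) client would; the IDs, username, and credentials are placeholders.

import base64
import hashlib
import hmac
import boto3

def secret_hash(username, client_id, client_secret):
    message = (username + client_id).encode("utf-8")
    digest = hmac.new(client_secret.encode("utf-8"), message, hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

cognito = boto3.client("cognito-idp")
cognito.initiate_auth(
    ClientId="<app-client-id>",
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={
        "USERNAME": "jane",
        "PASSWORD": "<password>",
        "SECRET_HASH": secret_hash("jane", "<app-client-id>", "<app-client-secret>"),
    },
)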

There are multiple options that you can use to implement this proxy. One option is to use Amazon CloudFront and Lambda@Edge to add the secret hash to the incoming requests. When you use a CloudFront proxy, you can also use AWS WAF, which gives you tools to detect and block unwanted clients. From Lambda@Edge, you can also integrate with other services (like Amazon Fraud Detector or third-party bot detection services) to help you detect possible fraudulent requests and block them. The CloudFront proxy, with the right set of security tools, helps protect your Amazon Cognito user pool from unwanted clients.

Solution overview

To implement this lightweight proxy pattern, you need to create an application client with a secret. Unauthenticated API calls to this client must include the secret hash which is added to the request from the proxy layer. Client applications use an SDK like AWS Amplify, the Amazon Cognito Identity SDK, or a mobile SDK to communicate with Amazon Cognito. By default, the SDK sends requests to the Regional Amazon Cognito endpoint. Your application must override the default endpoint by manually adding an “Endpoint” property in the app configuration. See the Integrate the client application with the proxy section later in this post for more details.

Figure 1 shows how this works, step by step.
 

Figure 1: A proxy solution to the Amazon Cognito Regional endpoint

The workflow is as follows:

  1. You configure the client application (mobile or web client) to use a CloudFront endpoint as a proxy to an Amazon Cognito Regional endpoint. You also create an application client in Amazon Cognito with a secret. This means that any unauthenticated API call must have the secret hash.
  2. Clients that send unauthenticated API calls to the Amazon Cognito endpoint directly are blocked and dropped because of the missing secret.
  3. You use Lambda@Edge to add a secret hash to the relevant incoming requests before passing them on to the Amazon Cognito endpoint.
  4. From Lambda@Edge, you must have the app client secret to be able to calculate the secret hash and add it to the request. It’s recommended that you keep the secret in AWS Secrets Manager and cache it for the lifetime of the function.
  5. You use AWS WAF with CloudFront distribution to enforce rate limiting, allow and deny lists, and other rule groups according to your security requirements.

When to use this pattern

It’s a best practice to use this proxy pattern with clients that use SDKs to integrate with Amazon Cognito user pools. Examples include mobile applications that use the iOS or Android SDK, or web applications that use client-side libraries like Amplify or the Amazon Cognito Identity SDK to integrate with Amazon Cognito.

You don’t need to use a proxy pattern with server-side applications that use an AWS SDK to integrate with Amazon Cognito user pools from a protected backend, because server-side applications can natively use confidential clients and protect the secret in the backend.

You can’t use this solution with applications that use Hosted UI and OAuth 2.0 endpoints to integrate with Amazon Cognito user pools. This includes federation scenarios where users sign in with an external identity provider (IdP).

Implementation and deployment details

Before you deploy this solution, you need a user pool and an application client that has the client secret. When you have these in place, choose the following Launch Stack button to launch a CloudFormation stack in your account and deploy the proxy solution.

Select the Launch Stack button to launch the template

Note: The CloudFormation stack must be created in the us-east-1 AWS Region, but the user pool itself can exist in any supported Region.

The template takes the parameters shown in Figure 2 below.
 

Figure 2: CloudFormation stack creation with initial parameters

The parameters in Figure 2 include:

  • AdvancedSecurityEnabled is a flag that indicates whether advanced security is enabled in the user pool or not. This flag determines which version of the Lambda function is deployed. Notice that if you change this flag as part of a stack update, it overrides the function code, so if you have any manual changes, make sure to back up your changes.
  • AppClientSecret is the secret for your application client. This secret is stored in Secrets Manager and accessed from Lambda@Edge as needed.
  • LambdaS3BucketName is the bucket that hosts the Lambda code package. You don’t need to change this parameter unless you have a requirement to modify or extend the solution with your own Lambda function.
  • RateLimit is the maximum number of calls from a single IP address that are allowed within a 5-minute period. Values between 100 requests and 20 million requests are valid for RateLimit. Important: provide a value suitable for your application and security requirements.

  • UserPoolId is the ID of your user pool. This value is used by Lambda@Edge when needed (for example, to call admin APIs, which require the user pool ID).
  • UserPoolRegion is the AWS Region where you created your user pool. This value is used to determine which Amazon Cognito Regional endpoint to proxy the calls to.

This template creates several resources in your AWS account, as follows:

  1. A CloudFront distribution that serves as a proxy to an Amazon Cognito Regional endpoint.
  2. An AWS WAF web access control list (ACL) with rules for the allow list, deny list, and rate limit.
  3. A Lambda function to be deployed at the edge and assigned to the origin request event.
  4. A secret in Secrets Manager, to hold the values of the application client secret and user pool ID.

After you create the stack, the CloudFront distribution domain name is available on the Outputs tab of the CloudFormation stack, as shown in Figure 3. This is the value that’s used as the Endpoint property in your client-side application. You can optionally add an alternative domain name to the CloudFront distribution if you prefer to use your own custom domain.
 

Figure 3: The output of the CloudFormation stack creation, displaying the CloudFront domain name

Use Lambda@Edge to add a secret hash to the request

As explained earlier, the purpose of having this proxy is to be able to inject the secret hash in unauthenticated API calls before passing them to the Amazon Cognito endpoint. This injection is achieved by a Lambda function that intercepts incoming requests at the edge (the CloudFront distribution) before passing them to the origin (the Amazon Cognito Regional endpoint).

The Lambda function that is deployed to the edge has two versions. One is a simple pass-through proxy that only adds the secret hash, and this version is used if Amazon Cognito advanced security isn’t enabled. The other version is a proxy that uses the AdminInitiateAuth and AdminRespondToAuthChallenge API operations instead of unauthenticated API operations for the user authentication and challenge response. This allows the proxy layer to propagate the client IP address to the Amazon Cognito endpoint, which guides the adaptive authentication features of advanced security. The version that is deployed by the stack is determined by the AdvancedSecurityEnabled flag when you create or update the CloudFormation stack.
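To illustrate the pass-through variant, here is a hedged Python sketch of a Lambda@Edge origin request handler that injects the secret hash for an InitiateAuth call. It is not the function deployed by the stack, and retrieval of the client ID and secret from Secrets Manager is simplified to placeholders.

import base64
import hashlib
import hmac
import json

APP_CLIENT_ID = "<app-client-id>"          # in the real solution, read from Secrets Manager and cached
APP_CLIENT_SECRET = "<app-client-secret>"  # in the real solution, read from Secrets Manager and cached

def secret_hash(username):
    message = (username + APP_CLIENT_ID).encode("utf-8")
    digest = hmac.new(APP_CLIENT_SECRET.encode("utf-8"), message, hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

def handler(event, context):
    # CloudFront origin request event; the request body is base64-encoded
    request = event["Records"][0]["cf"]["request"]
    body = json.loads(base64.b64decode(request["body"]["data"]))

    # Example for InitiateAuth: the hash goes inside AuthParameters.
    # Other operations (SignUp, ForgotPassword, ...) expect it in a SecretHash field instead.
    username = body.get("AuthParameters", {}).get("USERNAME")
    if username:
        body["AuthParameters"]["SECRET_HASH"] = secret_hash(username)

    # Replace the body before CloudFront forwards the request to the Amazon Cognito origin
    request["body"]["action"] = "replace"
    request["body"]["data"] = base64.b64encode(json.dumps(body).encode("utf-8")).decode()
    return request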

You can extend this solution by manually modifying the Lambda function with your own processing logic. For example, you can integrate with fraud detection or bot detection services to evaluate the request and decide to proceed or reject the call. Note that after making any change to the Lambda function code, you must deploy a new version to the edge location. To do that from the Lambda console, navigate to Actions, choose Deploy to Lambda@Edge, and then choose Use existing CloudFront trigger on this function.

Important: If you update the stack from CloudFormation and change the value of the AdvancedSecurityEnabled flag, the new value overrides the Lambda code with the default version for the choice. In that case, all manual changes are lost.

Allow or block requests

The template that is provided in this blog post creates a web ACL with three rules: AllowList, DenyList, and RateLimit. These rules are evaluated in order and determine which requests are allowed or blocked. The template also creates four IP sets, as shown in Figure 4, to hold the values of allowed or blocked IPs for both IPv4 and IPv6 address types.
 

Figure 4: The CloudFormation template creates IP sets in the AWS WAF console for allow and deny lists

If you want to always allow requests from certain clients (for example, trusted enterprise clients, or server-side clients where a large volume of requests comes from the same IP address, such as a VPN gateway), add these IP addresses to the corresponding AllowList IP set. Similarly, if you want to always block traffic from certain IPs, add those IPs to the corresponding DenyList IP set.

Requests from sources that aren’t on the allow list or deny list are evaluated based on the volume of calls within 5 minutes, and sources that exceed the defined rate limit within 5 minutes are automatically blocked. If you want to change the defined rate limit, you can do so by updating the CloudFormation stack and providing a different value for the RateLimit parameter. Or you can modify this value directly in the AWS WAF console by editing the RateLimit rule.

Note: You can also use AWS Managed Rules for AWS WAF to add additional protection according to your security needs.

Integrate the client application with the proxy

You can integrate the client application with the proxy by changing the Endpoint in your client application to use the CloudFront distribution domain name. The domain name is located in the Outputs section of the CloudFormation stack.

You then need to edit your client-side code to forward calls to Amazon Cognito through the proxy endpoint. For example, if you’re using the Identity SDK, you should change this property as follows.

var poolData = {
  UserPoolId: '<USER-POOL-ID>',
  ClientId: '<APP-CLIENT-ID>',
  endpoint: 'https://<CF-DISTRIBUTION-DOMAIN>'
};

If you’re using AWS Amplify, you can change the endpoint in the aws-exports.js file by overriding the property aws_cognito_endpoint. Or, if you configure Amplify Auth in your code, you can provide the endpoint as follows.

Amplify.Auth.configure({
  userPoolId: '<USER-POOL-ID>',
  userPoolWebClientId: '<APP-CLIENT-ID>',
  endpoint: 'https://<CF-DISTRIBUTION-DOMAIN>'
});

If you have a mobile application that uses the Amplify mobile SDK, you can override the endpoint in your configuration as follows (don’t include AppClientSecret parameter in your configuration). Note that the Endpoint value contains the domain name only, not the full URL. This feature is available in the latest releases of the iOS and Android SDKs.

"CognitoUserPool": {
  "Default": {
    "AppClientId": "<APP-CLIENT-ID>",
    "Endpoint": "<CF-DISTRIBUTION-DOMAIN>",
    "PoolId": "<USER-POOL-ID>",
    "Region": "<REGION>"
  }
}

Warning: The Amplify CLI overwrites customizations to the awsconfiguration.json and amplifyconfiguration.json files if you do an amplify push or amplify pull operation. You must manually re-apply the Endpoint customization and remove the AppClientSecret if you use the CLI to modify your cloud backend.

Solution limitations

This solution has these limitations:

  • If advanced security features are enabled for the user pool, Amazon Cognito calculates risk for user events. If you use this proxy pattern, the IP address that is propagated in user events is the proxy IP address, which causes risk calculation for SignUp, ForgotPassword, and ResendCode events to be inaccurate. On the other hand, Sign-In events still have the client IP address propagated correctly, and risk calculation and adaptive authentication for Sign-In events aren’t affected by the use of this proxy.
  • This solution is not applicable to Hosted UI, OAuth 2.0 endpoints, and federation flows.
  • Authenticated and admin API operations (which require developer credentials or an access token) aren’t covered in this solution. These API operations don’t require a secret hash, and they use other authentication mechanisms.
  • Using this proxy solution with mobile apps requires an update to the application. The update might take time to be available in the relevant app store, and you must depend on end users to update their app. Plan ahead of time to use the solution with mobile apps.

How to detect unusual behavior

In this section, I share with you the steps to detect, quickly analyze and respond to unwanted clients. It’s a best practice to configure monitoring and alarms that help you to detect unexpected spikes in activity. Additionally, I show you how to be ready to quickly identify clients that are calling your resources at a higher-than-usual rate.

Monitor utilization compared to quotas

Amazon Cognito integrates with Service Quotas, which monitor service utilization compared to quotas. These metrics help you detect unexpected spikes and be alerted if you’re approaching your quota for a certain API category. Approaching your quota indicates that there is a risk that calls from legitimate users will be throttled.

To view utilization versus quota metrics

  1. In the Service Quotas console, choose Service Quotas, choose AWS Services, and then choose Amazon Cognito User Pools.
  2. Under Service quotas, enter the search term rate of. This shows you the list of API categories and the assigned quotas for each category.
     
    Figure 5: The Service Quotas console showing Amazon Cognito API category rate quotas

  3. Choose any of the API categories to see utilization versus quota metrics.
     
    Figure 6: The Service Quotas console showing utilization vs quota metrics for Amazon Cognito UserCreation APIs

  4. You can also create alarms from this page to alert you if utilization is above a pre-defined threshold. You can create alarms starting at 50 percent utilization. It’s recommended that you create multiple alarms, for example at the 50 percent, 70 percent, and 90 percent thresholds, and configure CloudWatch alarms as appropriate.
     
    Figure 7: Creating an alarm for the utilization of the UserCreation API category

Analyze CloudTrail logs with Athena

If you detect an unexpected spike in traffic to a certain API category, the next step is to identify the sources of this spike. You can do that by using CloudTrail logs or, after you deploy and use this proxy solution, CloudFront logs as sources of information. You can then analyze these logs by using Amazon Athena queries.

The first step is to create Athena tables from CloudTrail and CloudFront logs. You can do that by following these steps for CloudTrail and similar steps for CloudFront. After you have these tables created, you can create a set of queries that help you identify unwanted clients. Here are a couple of examples:

  • Use the following query to identify clients with the highest call rate to the InitiateAuth API operation within the timeframe you noticed the spike (change the eventtime value to reflect the attack window).
    SELECT sourceipaddress, count(*) AS calls
    FROM "default"."cloudtrail_logs"
    WHERE eventname = 'InitiateAuth'
    AND eventtime >= '2021-03-01T00:00:00Z' AND eventtime < '2021-03-31T00:00:00Z'
    GROUP BY sourceipaddress
    ORDER BY calls DESC
    LIMIT 10
    

  • Use the following query to identify clients that come through CloudFront with the highest error rate.
    SELECT count(*) AS error_count, request_ip
    FROM "default"."cloudfront_logs"
    WHERE status > 500
    GROUP BY request_ip
    ORDER BY error_count DESC
    

After you identify sources that are calling your service with a higher-than-usual rate, you can block these clients by adding them to the DenyList IP set that was created in AWS WAF.
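For example, the following hedged Python sketch adds an offending address to the IPv4 deny list. The IP set name, ID, and address are placeholders, and because the web ACL in this solution uses the CLOUDFRONT scope, the call must be made in us-east-1.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")  # CLOUDFRONT scope is managed from us-east-1

# Look up the deny-list IP set created by the stack (name and ID are placeholders)
ip_set = wafv2.get_ip_set(Name="DenyListIPV4", Scope="CLOUDFRONT", Id="<ip-set-id>")

# Append the offending client identified by the Athena query
addresses = ip_set["IPSet"]["Addresses"] + ["203.0.113.7/32"]

wafv2.update_ip_set(
    Name="DenyListIPV4",
    Scope="CLOUDFRONT",
    Id="<ip-set-id>",
    Addresses=addresses,
    LockToken=ip_set["LockToken"],
)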

Analyze CloudTrail events with CloudWatch Logs Insights

It’s a best practice to configure your trail to send events to CloudWatch Logs. After you do this, you can interactively search and analyze your Amazon Cognito CloudTrail events with CloudWatch Logs Insights to identify errors, unusual activity, or unusual user behavior in your account.
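As a starting point, the Python sketch below runs a CloudWatch Logs Insights query over the last hour of CloudTrail events delivered to a log group (the log group name is an assumption) and counts Amazon Cognito calls by API name and source IP.

import time
import boto3

logs = boto3.client("logs")

start = logs.start_query(
    logGroupName="CloudTrail/DefaultLogGroup",  # assumption: the log group your trail delivers to
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString=(
        'filter eventSource = "cognito-idp.amazonaws.com" '
        "| stats count(*) as calls by eventName, sourceIPAddress "
        "| sort calls desc | limit 20"
    ),
)

# Poll until the query finishes, then print the aggregated results
results = logs.get_query_results(queryId=start["queryId"])
while results["status"] in ("Scheduled", "Running"):
    time.sleep(1)
    results = logs.get_query_results(queryId=start["queryId"])
print(results["results"])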

Conclusion

In this post, I showed you how to implement a lightweight proxy to an Amazon Cognito endpoint, which can be used with an application client secret to control access to unauthenticated API operations. This approach, together with security tools such as AWS WAF, helps provide protection for these API operations from unwanted clients. I also showed you strategies to help detect an ongoing attack and quickly analyze, identify, and block unwanted clients.

For more strategies for DDoS mitigation, see the AWS Best Practices for DDoS Resiliency.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is a Senior Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

Using serverless to load test Amazon API Gateway with authorization

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/using-serverless-to-load-test-amazon-api-gateway-with-authorization/

This post was written by Ashish Mehra, Sr. Solutions Architect and Ramesh Chidirala, Solutions Architect

Many customers design their applications to use Amazon API Gateway as the front door and load test their API endpoints before deploying to production. Customers want to simulate the actual usage scenario, including authentication and authorization. The load test ensures that the application works as expected under high traffic and spiky load patterns.

This post demonstrates using AWS Step Functions for orchestration, AWS Lambda to simulate the load, and Amazon Cognito for authentication and authorization. There is no need to use any third-party software or containers to implement this solution.

The serverless load test solution shown here can scale from 1,000 to 1,000,000 calls in a few minutes. It invokes API Gateway endpoints but you can reuse the solution for other custom API endpoints.

Overall architecture

Overall architecture diagram

Solution design 

The serverless API load test framework is built using Step Functions that invoke Lambda functions in a fan-out design pattern. Each Lambda function obtains a user-specific JWT access token from the Amazon Cognito user pool and invokes the authenticated API Gateway route.

The solution contains two workflows.

1. Load test workflow

The load test workflow comprises a multi-step process that includes a combination of sequential and parallel steps. The sequential steps include user pool configuration, user creation, and access token generation followed by API invocation in a fan-out design pattern. Step Functions provides a reliable way to build and run such multi-step workflows with support for logging, retries, and dynamic parallelism.

Step Functions workflow diagram for load test

The Step Functions state machine orchestrates the following workflow:

  1. Validate input parameters.
  2. Invoke Lambda function to create a user ID array in the series loadtestuser0, loadtestuser1, and so on. This array is passed as an input to subsequent Lambda functions.
  3. Invoke Lambda to create:
    1. Amazon Cognito user pool
    2. Test users
    3. App client configured for admin authentication flow.
  4. Invoke Lambda functions in a fan-out pattern using dynamic parallelism support in Step Functions. Each function does the following:
    1. Retrieves an access token (one token per user) from Amazon Cognito
    2. Sends an HTTPS request to the specified API Gateway endpoint by passing an access token in the header.

For testing purposes, users can configure mock integration or use Lambda integration for the backend.
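A simplified Python sketch of the per-user function invoked in step 4 is shown below. The environment variables and the ADMIN_USER_PASSWORD_AUTH flow are assumptions based on the description above; the code in the repository may differ.

import os
import urllib.request
import boto3

cognito = boto3.client("cognito-idp")

USER_POOL_ID = os.environ["USER_POOL_ID"]    # assumption: provided as env vars by the stack
APP_CLIENT_ID = os.environ["APP_CLIENT_ID"]
API_URL = os.environ["API_URL"]

def handler(event, context):
    # One access token per test user, using the admin authentication flow
    auth = cognito.admin_initiate_auth(
        UserPoolId=USER_POOL_ID,
        ClientId=APP_CLIENT_ID,
        AuthFlow="ADMIN_USER_PASSWORD_AUTH",
        AuthParameters={"USERNAME": event["username"], "PASSWORD": event["password"]},
    )
    token = auth["AuthenticationResult"]["AccessToken"]

    # Invoke the authenticated API Gateway route, passing the JWT as a bearer token
    for _ in range(int(event.get("callsPerUser", 1))):
        request = urllib.request.Request(API_URL, headers={"Authorization": f"Bearer {token}"})
        with urllib.request.urlopen(request) as response:
            response.read()

    return {"statusCode": 200}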

2. Cleanup workflow

Step Functions workflow diagram for cleanup

As part of the cleanup workflow, the Step Functions state machine invokes a Lambda function to delete the specified number of users from the Amazon Cognito user pool.

Prerequisites to implement the solution

The following prerequisites are required for this walk-through:

  1. AWS account
  2. AWS SAM CLI
  3. Python 3.7
  4. Pre-existing non-production API Gateway HTTP API deployed with a JWT authorizer that uses Amazon Cognito as an identity provider. Refer to this video from the Twitch series #SessionsWithSAM, which provides a walkthrough of building and deploying a simple HTTP API with a JWT authorizer.

Since this solution involves modifying the API Gateway endpoint’s authorizer settings, it is recommended to load test non-production environments or production-comparable APIs. Revert these settings after the load test is complete. Also, first check the Lambda and Amazon Cognito Service Quotas in the AWS account you plan to use.

Step-by-step instructions

Use AWS CloudShell to deploy the AWS Serverless Application Model (AWS SAM) template. AWS CloudShell is a browser-based shell pre-installed with common development tools. It includes 1 GB of free persistent storage per Region and is pre-authenticated with your console credentials. You can also use AWS Cloud9 or your preferred IDE. You can check the AWS CloudShell supported Regions here.

Depending on your load test requirements, you can specify the total number of unique users to be created, as well as the number of API Gateway requests to invoke per user each time you run the load test. These factors influence the overall test duration, concurrency, and cost. Refer to the cost optimization section of this post for tips on minimizing the overall cost of the solution, and to the cleanup section for instructions to delete the resources and stop incurring further charges.

  1. Clone the repository by running the following command:
    git clone https://github.com/aws-snippets/sam-apiloadtest.git
  2. Change to the sam-apiloadtest directory and run the following command to build the application source:
    sam build
  3. Run the following command to package and deploy the application to AWS, with a series of prompts. When prompted for apiGatewayUrl, provide the API Gateway URL route you intend to load test.
    sam deploy --guided

    Example of SAM deploy

  4. After the stack creation is complete, you should see UserPoolID and AppClientID in the outputs section.

    Example of stack outputs

  5. Navigate to the API Gateway console and choose the HTTP API you intend to load test.
  6. Choose Authorization and select the authenticated route configured with a JWT authorizer.

    API Gateway console display after stack is deployed

  7. Choose Edit Authorizer and update the IssuerURL with the Amazon Cognito user pool ID and the Audience with the app client ID, using the corresponding values from the stack output section in step 4.

    Editing the issuer URL

  8. Set authorization scope to aws.cognito.signin.user.admin.

    Setting the authorization scopes

  9. Open the Step Functions console and choose the state machine named apiloadtestCreateUsersAndFanOut-xxx.
  10. Choose Start Execution and provide the following JSON input. Configure the number of users for the load test and the number of calls per user:
    {
      "users": {
        "NumberOfUsers": "100",
        "NumberOfCallsPerUser": "100"
      }
    }
  11. After the execution completes, the status is updated to Succeeded. You can also start the execution programmatically, as shown in the sketch that follows.
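If you prefer to start the same execution from code rather than the console, a minimal Python sketch looks like this (the state machine ARN is a placeholder for the one created in your account).

import json
import boto3

sfn = boto3.client("stepfunctions")

sfn.start_execution(
    stateMachineArn="arn:aws:states:<region>:<account-id>:stateMachine:apiloadtestCreateUsersAndFanOut-xxx",
    input=json.dumps({"users": {"NumberOfUsers": "100", "NumberOfCallsPerUser": "100"}}),
)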

 

Checking the load test results

The load test’s primary goal is to achieve high concurrency. The main metric to check the test’s effectiveness is the count of successful API Gateway invocations. While load testing your application, find other metrics that may identify potential bottlenecks. Refer to the following steps to inspect CloudWatch Logs after the test is complete:

  1. Navigate to API Gateway service within the console, choose Monitor → Logging, select the $default stage, and choose the Select button.
  2. Choose View Logs in CloudWatch to navigate to the CloudWatch Logs service, which loads the log group and displays the most recent log streams.
  3. Choose the “View in Logs Insights” button to navigate to the Log Insights page. Choose Run Query.
  4. The query results appear along with a bar graph showing the log group’s distribution of log events. The number of records indicates the number of API Gateway invocations.

    Histogram of API Gateway invocations

  5. To visualize p95 metrics, navigate to CloudWatch metrics, choose ApiGateway → ApiId → Latency.
  6. Choose the “Graphed metrics (1)” tab.

    Adding latency metric

  7. Select p95 from the Statistic dropdown.

    Setting the p95 value

  8. The percentile metrics help visualize the distribution of latency metrics. It can help you find critical outliers or unusual behaviors, and discover potential bottlenecks in your application’s backend.

    Example of the p95 data

Cleanup 

  1. To delete Amazon Cognito users, run the Step Functions workflow apiloadtestDeleteTestUsers. Provide the following input JSON with the same number of users that you created earlier:
    {
      "NumberOfUsers": "100"
    }
  2. Step Functions invokes the cleanUpTestUsers Lambda function. It is configured with the test Amazon Cognito user pool ID and app client ID environment variables created during the stack deployment. The users are deleted from the test user pool.
  3. The Lambda function also schedules the corresponding KMS keys for deletion after seven days, the minimum waiting period.
  4. After the state machine is finished, navigate to Cognito → Manage User Pools → apiloadtest-loadtestidp → Users and Groups. Refresh the page to confirm that all users are deleted.
  5. To delete all the resources permanently and stop incurring cost, navigate to the CloudFormation console, select aws-apiloadtest-framework stack, and choose Delete → Delete stack.

Cost optimization

The load test workflow is repeatable and can be reused multiple times for the same or different API Gateway routes. You can reuse Amazon Cognito users for multiple tests since Amazon Cognito pricing is based on the monthly active users (MAUs). Repeatedly deleting and recreating users may exceed the AWS Free Tier or incur additional charges.

Customizations

You can change the number of users and number of calls per user to adjust the API Gateway load. The apiloadtestCreateUsersAndFanOut state machine validation step allows a maximum value of 1,000 for input parameters NumberOfUsers and NumberOfCallsPerUser.

You can customize and increase these values within the Step Functions input validation logic based on your account limits. To load test a different API Gateway route, configure the authorizer as per the step-by-step instructions provided earlier. Next, modify the api_url environment variable within aws-apiloadtest-framework-triggerLoadTestPerUser Lambda function. You can then run the load test using the apiloadtestCreateUsersAndFanOut state machine.

Conclusion

The blog post shows how to use Step Functions and its features to orchestrate a multi-step load test solution. I show how changing input parameters could increase the number of calls made to the API Gateway endpoint without worrying about scalability. I also demonstrate how to achieve cost optimization and perform clean-up to avoid any additional charges. You can modify this example to load test different API endpoints, identify bottlenecks, and check if your application is production-ready.

For more serverless learning resources, visit Serverless Land.

How to integrate third-party IdP using developer authenticated identities

Post Syndicated from Andrew Lee original https://aws.amazon.com/blogs/security/how-to-integrate-third-party-idp-using-developer-authenticated-identities/

Amazon Cognito identity pools enable you to create and manage unique identifiers for your users and provide temporary, limited-privilege credentials to your application to access AWS resources. Currently, there are several out-of-the-box external identity providers (IdPs) that integrate with Amazon Cognito identity pools, including Facebook, Google, and Apple. If your application’s primary users use another social media network, such as Snapchat, and you would like to make it easier for them to authenticate with your application, you need to use developer authenticated identities and interface with that third-party IdP. This blog post describes what is needed from the third-party IdP, how to build a scalable backend for authentication, and how to access AWS services from the client.

As an example, this post will use Snapchat’s Login Kit to integrate with Amazon Cognito. The overall authentication flow for the integration is shown in Figure 1.

Figure 1: Overall authentication flow of integration

Prerequisites

The following are the prerequisites for integrating third-party IdP using developer authenticated identities.

  1. The client SDK to authenticate with third-party IdP. This will handle client authentication and access token retrieval. In the example in this post, we use Snapchat’s Login Kit.
  2. A method to authenticate access tokens that are retrieved from the third-party IdP. For this blog post, I am using an endpoint provided by Snapchat which will retrieve user data by passing in access tokens. A successful query of user data indicates the access token is valid.
  3. Developer authenticated identities (identity pool) configured in Amazon Cognito. You will need to note the identity pool ID and the developer provider name you specify.

Client SDK

Follow the third-party client SDK instructions for implementing authentication in your application. Snapchat’s Login Kit provides an SDK to mount a login button in your app and to allow you to authenticate with your Snapchat account credentials. After a user clicks on the login button, they are redirected to Snapchat to log in. After successfully logging in, they are redirected back to your application with an access token. The handleResponseCallback is where you can implement an API call to your developer backend, to pass your access token and retrieve credentials from Amazon Cognito to access AWS services. The following code example mounts a login button on your application, to allow the user to authenticate with Snapchat and retrieve an access token.

var loginButtonIconId = "<HTML div id>";
// Mount Login Button
snap.loginkit.mountButton(loginButtonIconId, {
    clientId: "<Snapchat Client Id>",
    redirectURI: "<Developer backend url>",
    scopeList: [
    "user.display_name",
    "user.bitmoji.avatar",
    "user.external_id",
    ],
    handleResponseCallback: function (token) {
   <IMPLEMENT API CALL TO DEVELOPER BACKEND PASSING SNAPCHAT ACCESS TOKEN>
   }
});

Developer backend

The developer backend is responsible for authenticating access tokens from the third-party IdP and exchanging them for an OpenID Connect token that can be used to access AWS services. For this example, you will use Amazon API Gateway with AWS Lambda with the IAM permissions to call getOpenIdTokenForDeveloperIdentity.

The following is a code example to authenticate access tokens with Snapchat.

let result = await axios({
    method: 'post',
    url: 'https://kit.snapchat.com/v1/me',
    headers: {'Content-Type': 'application/json', 
             'Authorization': 'Bearer ' + body.access_token},
    data: {"query":"{me{displayName bitmoji{avatar} externalId}}"}
});

After successful authentication, you call getOpenIdTokenForDeveloperIdentity with the identity pool ID and a logins map. The logins map maps the developer provider name to an external ID from the IdP. An OpenID Connect token and the Amazon Cognito identifier (identity ID) are returned from the call and can be sent to the application. The identity ID and token can be used to access AWS services. The following is a code example to retrieve AWS credentials after authenticating with Snapchat.

if (result.status == 200) {
    return JSON.stringify(await cognitoidentity.getOpenIdTokenForDeveloperIdentity({
        IdentityPoolId: '<Identity Pool Id>',
        Logins: {
            '<Developer provider name>': result.data.data.me.externalId,
        }
    }).promise());
}
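The client then exchanges the identity ID and token for temporary AWS credentials. The sketch below shows the equivalent call in Python (a browser or mobile client would use the corresponding call in its own AWS SDK); the identity ID and token values are placeholders for what the developer backend returns.

import boto3

identity_id = "<identity-id-from-backend>"  # returned by getOpenIdTokenForDeveloperIdentity
token = "<openid-token-from-backend>"

cognito_identity = boto3.client("cognito-identity")
credentials = cognito_identity.get_credentials_for_identity(
    IdentityId=identity_id,
    Logins={"cognito-identity.amazonaws.com": token},
)["Credentials"]

# Use the temporary credentials to call an AWS service, for example Amazon S3
s3 = boto3.client(
    "s3",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretKey"],
    aws_session_token=credentials["SessionToken"],
)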

Considerations

The following are considerations about Amazon Cognito identity pools that you should keep in mind when building your solution.

  1. The identity ID returned by getOpenIdTokenForDeveloperIdentity is mapped to the external ID provided in the logins map. This mapping is stored in Amazon Cognito. You can then use the identity ID to identify who is calling AWS services, which is especially useful for auditing purposes.
  2. This solution is suitable for multi-regional deployments. All that is required is that you copy the identity pool to another AWS Region. Please note that the identity ID will be different in each Region, but that should not affect functionality.

Conclusion

By using developer authenticated identities, you can integrate your application with Amazon Cognito and a third-party IdP with the proper prerequisites. For more examples of using developer authenticated identities, see Developer Authenticated Identities (Identity Pools) in the Amazon Cognito Developer Guide. If you have feedback about this post, submit comments in the Comments section below or start a new thread on one of our forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Andrew Lee

Andrew is a Solutions Architect at AWS. He is passionate about projects that drive collaboration and partnerships between Amazon and Snapchat. A builder at heart, he believes that only through building something can a vision be truly realized.

Building fine-grained authorization using Amazon Cognito, API Gateway, and IAM

Post Syndicated from Artem Lovan original https://aws.amazon.com/blogs/security/building-fine-grained-authorization-using-amazon-cognito-api-gateway-and-iam/

June 5, 2021: We’ve updated Figure 1: User request flow.


Authorizing functionality of an application based on group membership is a best practice. If you’re building APIs with Amazon API Gateway and you need fine-grained access control for your users, you can use Amazon Cognito. Amazon Cognito allows you to use groups to create a collection of users, which is often done to set the permissions for those users. In this post, I show you how to build fine-grained authorization to protect your APIs using Amazon Cognito, API Gateway, and AWS Identity and Access Management (IAM).

As a developer, you’re building a customer-facing application where your users are going to log into your web or mobile application, and as such you will be exposing your APIs through API Gateway with upstream services. The APIs could be deployed on Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Lambda, or Elastic Load Balancing where each of these options will forward the request to your Amazon Elastic Compute Cloud (Amazon EC2) instances. Additionally, you can use on-premises services that are connected to your Amazon Web Services (AWS) environment over an AWS VPN or AWS Direct Connect. It’s important to have fine-grained controls for each API endpoint and HTTP method. For instance, the user should be allowed to make a GET request to an endpoint, but should not be allowed to make a POST request to the same endpoint. As a best practice, you should assign users to groups and use group membership to allow or deny access to your API services.

Solution overview

In this blog post, you learn how to use an Amazon Cognito user pool as a user directory and let users authenticate and acquire the JSON Web Token (JWT) to pass to the API Gateway. The JWT is used to identify what group the user belongs to, as mapping a group to an IAM policy will display the access rights the group is granted.

Note: The solution works similarly if Amazon Cognito would be federating users with an external identity provider (IdP)—such as Ping, Active Directory, or Okta—instead of being an IdP itself. To learn more, see Adding User Pool Sign-in Through a Third Party. Additionally, if you want to use groups from an external IdP to grant access, Role-based access control using Amazon Cognito and an external identity provider outlines how to do so.

The following figure shows the basic architecture and information flow for user requests.

Figure 1: User request flow

Let’s go through the request flow to understand what happens at each step, as shown in Figure 1:

  1. A user logs in and acquires an Amazon Cognito JWT ID token, access token, and refresh token. To learn more about each token, see using tokens with user pools.
  2. A RestAPI request is made and a bearer token—in this solution, an access token—is passed in the headers.
  3. API Gateway forwards the request to a Lambda authorizer—also known as a custom authorizer.
  4. The Lambda authorizer verifies the Amazon Cognito JWT using the Amazon Cognito public key. On initial Lambda invocation, the public key is downloaded from Amazon Cognito and cached. Subsequent invocations will use the public key from the cache.
  5. The Lambda authorizer looks up the Amazon Cognito group that the user belongs to in the JWT and does a lookup in Amazon DynamoDB to get the policy that’s mapped to the group.
  6. Lambda returns the policy and—optionally—context to API Gateway. The context is a map containing key-value pairs that you can pass to the upstream service. It can be additional information about the user, the service, or anything that provides additional information to the upstream service.
  7. The API Gateway policy engine evaluates the policy.

    Note: Lambda isn’t responsible for understanding and evaluating the policy. That responsibility falls on the native capabilities of API Gateway.

  8. The request is forwarded to the service.

Note: To further optimize the Lambda authorizer, you can enable or disable caching of the authorization policy, depending on your needs. With caching enabled, you can improve performance because the authorization policy is returned from the cache whenever there is a cache key match. To learn more, see Configure a Lambda authorizer using the API Gateway console.
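
For reference, the object that a Lambda authorizer returns to API Gateway has a well-known shape: a principal identifier, an IAM policy document, and an optional context map. The following minimal sketch shows that shape; the principal, resource, and context values are illustrative only.

// Sketch of a Lambda authorizer response (values are illustrative).
// API Gateway evaluates policyDocument and passes context to the upstream service.
const authorizerResponse = {
    principalId: "example-user-id",
    policyDocument: {
        Version: "2012-10-17",
        Statement: [
            {
                Action: "execute-api:Invoke",
                Effect: "Allow",
                Resource: "arn:aws:execute-api:*:*:*/*/GET/petstore/v2/status"
            }
        ]
    },
    context: {
        group: "pet-veterinarian"
    }
};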

Let’s have a closer look at the following example policy that is stored as part of an item in DynamoDB.

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"PetStore-API",
         "Effect":"Allow",
         "Action":"execute-api:Invoke",
         "Resource":[
            "arn:aws:execute-api:*:*:*/*/*/petstore/v1/*",
            "arn:aws:execute-api:*:*:*/*/GET/petstore/v2/status"
         ],
         "Condition":{
            "IpAddress":{
               "aws:SourceIp":[
                  "192.0.2.0/24",
                  "198.51.100.0/24"
               ]
            }
         }
      }
   ]
}

Based on this example policy, the user is allowed to make calls to the petstore API. For version v1, the user can make requests to any verb and any path, which is expressed by an asterisk (*). For v2, the user is only allowed to make a GET request for path /status. To learn more about how the policies work, see Output from an Amazon API Gateway Lambda authorizer.

Getting started

For this solution, you need the following prerequisites:

  • The AWS Command Line Interface (CLI) installed and configured for use.
  • Python 3.6 or later, to package Python code for Lambda

    Note: We recommend that you use a virtual environment or virtualenvwrapper to isolate the solution from the rest of your Python environment.

  • An IAM role or user with sufficient permissions to create an Amazon Cognito user pool, IAM roles and policies, Lambda functions, an API Gateway instance, and a DynamoDB table.
  • The GitHub repository for the solution. You can download it, or you can use the following Git command to download it from your terminal.

    Note: This sample code should be used to test the solution and is not intended for use in a production account.

     $ git clone https://github.com/aws-samples/amazon-cognito-api-gateway.git
     $ cd amazon-cognito-api-gateway
    

    Use the following command to package the Python code for deployment to Lambda.

     $ bash ./helper.sh package-lambda-functions
     …
     Successfully completed packaging files.
    

To implement this reference architecture, you will use the services described in the following sections.

Note: This solution was tested in the us-east-1, us-east-2, us-west-2, ap-southeast-1, and ap-southeast-2 Regions. Before selecting a Region, verify that the necessary services—Amazon Cognito, API Gateway, and Lambda—are available in those Regions.

Let’s review each service, and how those will be used, before creating the resources for this solution.

Amazon Cognito user pool

A user pool is a user directory in Amazon Cognito. With a user pool, your users can log in to your web or mobile app through Amazon Cognito. You use the Amazon Cognito user directory directly, as this sample solution creates an Amazon Cognito user. However, your users can also log in through social IdPs, OpenID Connect (OIDC), and SAML IdPs.

Lambda as backing API service

Initially, you create a Lambda function that serves your APIs. API Gateway forwards all requests to this Lambda function, which serves the responses.

An API Gateway instance and integration with Lambda

Next, you create an API Gateway instance and integrate it with the Lambda function you created. This API Gateway instance serves as an entry point for the upstream service. The bash command in the next section creates an Amazon Cognito user pool, a Lambda function, and an API Gateway instance, then configures proxy integration with Lambda and deploys an API Gateway stage.

Deploy the sample solution

From within the directory where you downloaded the sample code from GitHub, run the following command to generate a random Amazon Cognito user password and create the resources described in the previous section.

 $ bash ./helper.sh cf-create-stack-gen-password
 ...
 Successfully created CloudFormation stack.

When the command is complete, it returns a message confirming successful stack creation.

Validate Amazon Cognito user creation

To validate that an Amazon Cognito user has been created successfully, run the following command to open the Amazon Cognito UI in your browser and then log in with your credentials.

Note: When you run this command, it returns the user name and password that you should use to log in.

 $ bash ./helper.sh open-cognito-ui
  Opening Cognito UI. Please use following credentials to login:
  Username: cognitouser
  Password: xxxxxxxx

Alternatively, you can open the CloudFormation stack and get the Amazon Cognito hosted UI URL from the stack outputs. The URL is the value assigned to the CognitoHostedUiUrl variable.

Figure 2: CloudFormation Outputs - CognitoHostedUiUrl

Figure 2: CloudFormation Outputs – CognitoHostedUiUrl

Validate Amazon Cognito JWT upon login

Since we haven’t installed a web application that would respond to the redirect request, Amazon Cognito will redirect to localhost, which might look like an error. The key aspect is that after a successful log in, there is a URL similar to the following in the navigation bar of your browser:

http://localhost/#id_token=eyJraWQiOiJicVhMYWFlaTl4aUhzTnY3W...

Test the API configuration

Before you protect the API with Amazon Cognito so that only authorized users can access it, let’s verify that the configuration is correct and the API is served by API Gateway. The following command makes a curl request to API Gateway to retrieve data from the API service.

 $ bash ./helper.sh curl-api
{"pets":[{"id":1,"name":"Birds"},{"id":2,"name":"Cats"},{"id":3,"name":"Dogs"},{"id":4,"name":"Fish"}]}

The expected result is that the response will be a list of pets. In this case, the setup is correct: API Gateway is serving the API.

Protect the API

To protect your API, the following is required:

  1. A DynamoDB table to store the policy that API Gateway evaluates to make an authorization decision.
  2. A Lambda function to verify the user’s access token and look up the policy in DynamoDB.

Let’s review all the services before creating the resources.

Lambda authorizer

A Lambda authorizer is an API Gateway feature that uses a Lambda function to control access to an API. You use a Lambda authorizer to implement a custom authorization scheme that uses a bearer token authentication strategy. When a client makes a request to one of the API operations, API Gateway calls the Lambda authorizer. The Lambda authorizer takes the identity of the caller as input and returns an IAM policy as the output. The output is the policy that is stored in DynamoDB and evaluated by API Gateway. If there is no policy mapped to the caller identity, Lambda generates a deny policy and the request is denied.
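
The following Node.js sketch illustrates that flow. It is not the code from the sample repository (which implements the authorizer in Python); it assumes the aws-jwt-verify library for token verification and USER_POOL_ID, CLIENT_ID, and TABLE_NAME environment variables that you would supply.

// Minimal sketch of a Lambda authorizer: verify the token, look up the group policy, return it.
// The environment variable names and library choice are assumptions for illustration.
const AWS = require('aws-sdk');
const { CognitoJwtVerifier } = require('aws-jwt-verify');

const dynamodb = new AWS.DynamoDB.DocumentClient();
const verifier = CognitoJwtVerifier.create({
    userPoolId: process.env.USER_POOL_ID,
    tokenUse: 'access',
    clientId: process.env.CLIENT_ID,
});

exports.handler = async (event) => {
    // For a TOKEN authorizer, the bearer token arrives as event.authorizationToken.
    const token = (event.authorizationToken || '').replace(/^Bearer\s+/i, '');

    let claims;
    try {
        // Verify the signature, expiry, and token_use against the Cognito public keys.
        claims = await verifier.verify(token);
    } catch (err) {
        // Invalid token: return an explicit deny.
        return buildPolicy('unknown', 'Deny', event.methodArn);
    }

    // Look up the policy mapped to the caller's Cognito group.
    const group = (claims['cognito:groups'] || [])[0];
    const result = await dynamodb
        .get({ TableName: process.env.TABLE_NAME, Key: { Group: group } })
        .promise();

    if (!result.Item) {
        // No policy mapped to this group: deny the request.
        return buildPolicy(claims.sub, 'Deny', event.methodArn);
    }

    // Return the stored policy and optional context for API Gateway to evaluate.
    return {
        principalId: claims.sub,
        policyDocument: JSON.parse(result.Item.Policy),
        context: { group: group || '' },
    };
};

// Helper that builds an explicit allow/deny policy for a single resource.
function buildPolicy(principalId, effect, resource) {
    return {
        principalId: principalId,
        policyDocument: {
            Version: '2012-10-17',
            Statement: [{ Action: 'execute-api:Invoke', Effect: effect, Resource: resource }],
        },
    };
}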

DynamoDB table

DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. This is ideal for this use case to ensure that the Lambda authorizer can quickly process the bearer token, look up the policy, and return it to API Gateway. To learn more, see Control access for invoking an API.

The final step is to create the DynamoDB table for the Lambda authorizer to look up the policy, which is mapped to an Amazon Cognito group.

Figure 3 illustrates an item in DynamoDB. Key attributes are:

  • Group, which is used to look up the policy.
  • Policy, which is returned to API Gateway for evaluation.

 

Figure 3: DynamoDB item

Figure 3: DynamoDB item

Based on this policy, the user that is part of the Amazon Cognito group pet-veterinarian is allowed to make API requests to endpoints https://<domain>/<api-gateway-stage>/petstore/v1/* and https://<domain>/<api-gateway-stage>/petstore/v2/status for GET requests only.

Update and create resources

Run the following command to update existing resources and create a Lambda authorizer and DynamoDB table.

 $ bash ./helper.sh cf-update-stack
Successfully updated CloudFormation stack.

Test the custom authorizer setup

Begin your testing with the following request, which doesn’t include an access token.

$ bash ./helper.sh curl-api
{"message":"Unauthorized"}

The request is denied with the message Unauthorized. At this point, API Gateway expects a header named Authorization (case sensitive) in the request. If there's no Authorization header, the request is denied before it reaches the Lambda authorizer. This is a way to filter out requests that don't include the required information.

Use the following command for the next test. In this test, you pass the required header but the token is invalid because it wasn’t issued by Amazon Cognito but is a simple JWT-format token stored in ./helper.sh. To learn more about how to decode and validate a JWT, see decode and verify an Amazon Cognito JSON token.

$ bash ./helper.sh curl-api-invalid-token
{"Message":"User is not authorized to access this resource"}

This time the message is different. The Lambda authorizer received the request and identified the token as invalid and responded with the message User is not authorized to access this resource.

To make a successful request to the protected API, your code needs to perform the following steps (a minimal sketch follows the list):

  1. Use a user name and password to authenticate against your Amazon Cognito user pool.
  2. Acquire the tokens (ID token, access token, and refresh token).
  3. Make an HTTPS (TLS) request to API Gateway and pass the access token in the headers.
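
The demo project wraps these calls in helper.sh, but as a rough illustration, the following Node.js sketch performs the same three steps with the AWS SDK and the built-in Fetch API (Node.js 18 or later). It assumes the app client allows the USER_PASSWORD_AUTH flow, and the CLIENT_ID, API_URL, COGNITO_USERNAME, COGNITO_PASSWORD, and Region values are placeholders you must supply; the raw access token is passed in the Authorization header, as the authorizer described above expects.

// Sketch: authenticate against the user pool and call the protected API.
// CLIENT_ID, API_URL, COGNITO_USERNAME, COGNITO_PASSWORD, and the Region are placeholders.
const AWS = require('aws-sdk');
const cognito = new AWS.CognitoIdentityServiceProvider({ region: 'us-east-1' }); // placeholder Region

async function callProtectedApi() {
    // Steps 1 and 2: authenticate and acquire the tokens (ID, access, and refresh tokens).
    const auth = await cognito.initiateAuth({
        AuthFlow: 'USER_PASSWORD_AUTH',
        ClientId: process.env.CLIENT_ID,
        AuthParameters: {
            USERNAME: process.env.COGNITO_USERNAME,
            PASSWORD: process.env.COGNITO_PASSWORD,
        },
    }).promise();

    const accessToken = auth.AuthenticationResult.AccessToken;

    // Step 3: call API Gateway over HTTPS, passing the access token in the Authorization header.
    const response = await fetch(`${process.env.API_URL}/petstore/v1/pets`, {
        headers: { Authorization: accessToken },
    });
    return response.json();
}

callProtectedApi().then(console.log).catch(console.error);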

Before the request is forwarded to the API service, API Gateway receives the request and passes it to the Lambda authorizer. The authorizer performs the following steps. If any of the steps fail, the request is denied.

  1. Retrieve the public keys from Amazon Cognito.
  2. Cache the public keys so the Lambda authorizer doesn’t have to make additional calls to Amazon Cognito as long as the Lambda execution environment isn’t shut down.
  3. Use public keys to verify the access token.
  4. Look up the policy in DynamoDB.
  5. Return the policy to API Gateway.

The access token has claims such as Amazon Cognito assigned groups, user name, token use, and others, as shown in the following example (some fields removed).

{
    "sub": "00000000-0000-0000-0000-0000000000000000",
    "cognito:groups": [
        "pet-veterinarian"
    ],
...
    "token_use": "access",
    "scope": "openid email",
    "username": "cognitouser"
}

Finally, let’s programmatically log in to Amazon Cognito UI, acquire a valid access token, and make a request to API Gateway. Run the following command to call the protected API.

$ bash ./helper.sh curl-protected-api
{"pets":[{"id":1,"name":"Birds"},{"id":2,"name":"Cats"},{"id":3,"name":"Dogs"},{"id":4,"name":"Fish"}]}

This time, you receive a response with data from the API service. Let’s examine the steps that the example code performed:

  1. Lambda authorizer validates the access token.
  2. Lambda authorizer looks up the policy in DynamoDB based on the group name that was retrieved from the access token.
  3. Lambda authorizer passes the IAM policy back to API Gateway.
  4. API Gateway evaluates the IAM policy and the final effect is an allow.
  5. API Gateway forwards the request to Lambda.
  6. Lambda returns the response.

Let’s continue to test our policy from Figure 3. In the policy document, arn:aws:execute-api:*:*:*/*/GET/petstore/v2/status is the only endpoint for version V2, which means requests to endpoint /GET/petstore/v2/pets should be denied. Run the following command to test this.

 $ bash ./helper.sh curl-protected-api-not-allowed-endpoint
{"Message":"User is not authorized to access this resource"}

Note: Now that you understand fine-grained access control using an Amazon Cognito user pool, API Gateway, and a Lambda function, and you have finished testing it out, you can run the following command to clean up all the resources associated with this solution:

 $ bash ./helper.sh cf-delete-stack

Advanced IAM policies to further control your API

With IAM, you can create advanced policies to further refine access to your APIs. You can learn more about condition keys that can be used in API Gateway, their use in an IAM policy with conditions, and how policy evaluation logic determines whether to allow or deny a request.

Summary

In this post, you learned how IAM and Amazon Cognito can be used to provide fine-grained access control for your API behind API Gateway. You can use this approach to transparently apply fine-grained control to your API, without having to modify the code in your API, and create advanced policies by using IAM condition keys.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Artem Lovan

Artem is a Senior Solutions Architect based in New York. He helps customers architect and optimize applications on AWS. He has been involved in IT at many levels, including infrastructure, networking, security, DevOps, and software development.

Micro-frontend Architectures on AWS

Post Syndicated from Bryant Bost original https://aws.amazon.com/blogs/architecture/micro-frontend-architectures-on-aws/

A microservice architecture is characterized by independent services that are focused on a specific business function and maintained by small, self-contained teams. Microservice architectures are used frequently for web applications developed on AWS, and for good reason. They offer many well-known benefits such as development agility, technological freedom, targeted deployments, and more. Despite the popularity of microservices, many frontend applications are still built in a monolithic style. For example, they have one large code base that interacts with all backend microservices, and is maintained by a large team of developers.

Monolith Frontend

Figure 1. Microservice backend with monolith frontend

What is a Micro-frontend?

The micro-frontend architecture introduces microservice development principles to frontend applications. In a micro-frontend architecture, development teams independently build and deploy “child” frontend applications. These applications are combined by a “parent” frontend application that acts as a container to retrieve, display, and integrate the various child applications. In this parent/child model, the user interacts with what appears to be a single application. In reality, they are interacting with several independent applications, published by different teams.

Micro Frontend

Figure 2. Microservice backend with micro-frontend

Micro-frontend Benefits

Compared to a monolith frontend, a micro-frontend offers the following benefits:

  • Independent artifacts: A core tenet of microservice development is that artifacts can be deployed independently, and this remains true for micro-frontends. In a micro-frontend architecture, teams should be able to independently deploy their frontend applications with minimal impact to other services. Those changes will be reflected by the parent application.
  • Autonomous teams: Each team is the expert in its own domain. For example, the billing service team members have specialized knowledge. This includes the data models, the business requirements, the API calls, and user interactions associated with the billing service. This knowledge allows the team to develop the billing frontend faster than a larger, less specialized team.
  • Flexible technology choices: Autonomy allows each team to make technology choices that are independent from other teams. For instance, the billing service team could develop their micro-frontend using Vue.js and the profile service team could develop their frontend using Angular.
  • Scalable development: Micro-frontend development teams are smaller and are able to operate without disrupting other teams. This allows us to quickly scale development by spinning up new teams to deliver additional frontend functionality via child applications.
  • Easier maintenance: Keeping frontend repositories small and specialized allows them to be more easily understood, and this simplifies long-term maintenance and testing. For instance, if you want to change an interaction on a monolith frontend, you must isolate the location and dependencies of the feature within the context of a large codebase. This type of operation is greatly simplified when dealing with the smaller codebases associated with micro-frontends.

Micro-frontend Challenges

Conversely, a micro-frontend presents the following challenges:

  • Parent/child integration: A micro-frontend introduces the task of ensuring the parent application displays the child application with the same consistency and performance expected from a monolith application. This point is discussed further in the next section.
  • Operational overhead: Instead of managing a single frontend application, a micro-frontend application involves creating and managing separate infrastructure for all teams.
  • Consistent user experience: In order to maintain a consistent user experience, the child applications must use the same UI components, CSS libraries, interactions, error handling, and more. Maintaining consistency in the user experience can be difficult for child applications that are at different stages in the development lifecycle.

Building Micro-frontends

The most difficult challenge with the micro-frontend architecture pattern is integrating child applications with the parent application. Prioritizing the user experience is critical for any frontend application. In the context of micro-frontends, this means ensuring a user can seamlessly navigate from one child application to another inside the parent application. We want to avoid disruptive behavior such as page refreshes or multiple logins. At its most basic definition, parent/child integration involves the parent application dynamically retrieving and rendering child applications when the parent app is loaded. Rendering the child application depends on how the child application was built, and this can be done in a number of ways. Two of the most popular methods of parent/child integration are:

  1. Building each child application as a web component.
  2. Importing each child application as an independent module. These modules either declare a function to render themselves or are dynamically imported by the parent application (such as with module federation).

Registering child apps as web components:

<html>
    <head>
        <script src="https://shipping.example.com/shipping-service.js"></script>
        <script src="https://profile.example.com/profile-service.js"></script>
        <script src="https://billing.example.com/billing-service.js"></script>
        <title>Parent Application</title>
    </head>
    <body>
        <shipping-service />
        <profile-service />
        <billing-service />
    </body>
</html>

Registering child apps as modules:

<html>
    <head>
        <script src="https://shipping.example.com/shipping-service.js"></script>
        <script src="https://profile.example.com/profile-service.js"></script>
        <script src="https://billing.example.com/billing-service.js"></script>
        <title>Parent Application</title>
    </head>
    <body>
    </body>
    <script>
        // Load and render the child applications from their JS bundles.
    </script>
</html>
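
The module approach can be implemented in several ways. As one possible sketch, assume each child bundle attaches a render function to a well-known global (the renderShippingService, renderProfileService, and renderBillingService names are hypothetical); the parent script then mounts each child into its own container:

<script>
    // Sketch: render each child application into its own container element.
    // Assumes each child bundle exposes a global render function such as
    // window.renderShippingService(container) -- these names are hypothetical.
    const children = [
        { mount: 'shipping', render: window.renderShippingService },
        { mount: 'profile',  render: window.renderProfileService },
        { mount: 'billing',  render: window.renderBillingService },
    ];

    for (const child of children) {
        const container = document.createElement('div');
        container.id = child.mount;
        document.body.appendChild(container);
        child.render(container);   // each child renders itself into its container
    }
</script>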

The following diagram shows an example micro-frontend architecture built on AWS.

AWS Micro Frontend

Figure 3. Micro-frontend architecture on AWS

In this example, each service team is running a separate, identical stack to build their application. They use the AWS Developer Tools and deploy the application to Amazon Simple Storage Service (S3) with Amazon CloudFront. The CI/CD pipelines use shared components such as CSS libraries, API wrappers, or custom modules stored in AWS CodeArtifact. This helps drive consistency across parent and child applications.

When you retrieve the parent application, it should prompt you to log in to an identity provider and retrieve JWTs. In this example, the identity provider is an Amazon Cognito User Pool. After a successful login, the parent application retrieves the child applications from CloudFront and renders them inside the parent application. Alternatively, the parent application can elect to render the child applications on demand, when you navigate to a specific route. The child applications should not require you to log in again to the Amazon Cognito user pool. They should be configured to use the JWT obtained by the parent app or silently retrieve a new JWT from Amazon Cognito.
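
One way to meet that requirement is for the parent application to own the Amazon Cognito session and expose a small token accessor that child applications call instead of signing in again. The following sketch shows the idea; the window.parentAuth contract and the refreshTokensFromCognito helper are hypothetical conventions for this example, not part of any AWS SDK.

// Sketch: the parent application owns the Amazon Cognito session and shares tokens
// with child applications. window.parentAuth and refreshTokensFromCognito are
// hypothetical conventions for this example.
window.parentAuth = {
    tokens: null, // populated by the parent after the Amazon Cognito sign-in

    // Child applications call this instead of signing in to the user pool again.
    async getAccessToken() {
        if (this.tokens && this.tokens.expiresAt > Date.now()) {
            return this.tokens.accessToken;
        }
        // Token missing or expired: the parent silently obtains a new one from Amazon
        // Cognito (for example, by using the refresh token) and caches it.
        this.tokens = await refreshTokensFromCognito(); // hypothetical helper in the parent app
        return this.tokens.accessToken;
    },
};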

Conclusion

Micro-frontend architectures introduce many of the familiar benefits of microservice development to frontend applications. A micro-frontend architecture also simplifies the process of building complex frontend applications by allowing you to manage small, independent components.

The Satellite Ear Tag that is Changing Cattle Management

Post Syndicated from Karen Hildebrand original https://aws.amazon.com/blogs/architecture/the-satellite-ear-tag-that-is-changing-cattle-management/

Most cattle are not raised in cities—they live on cattle stations, large open plains, and tracts of land largely unpopulated by humans. It’s hard to keep connected with the herd. Cattle don’t often carry their own mobile phones, and they don’t pay a mobile phone bill. Naturally, the areas in which cattle live, often do not have cellular connectivity or reception. But they now have one way to stay connected: a world-first satellite ear tag.

Ceres Tag co-founders Melita Smith and David Smith recognized the problem given their own farming background. David explained that they needed to know simple things to begin with, such as:

  • Where are they?
  • How many are out there?
  • What are they doing?
  • What condition are they in?
  • Are they OK?

Later, the questions advanced to:

  • Which are the higher performing animals that I want to keep?
  • Where do I start when rounding them up?
  • As assets, can I get better financing and insurance if I can prove their location, existence, and condition?

To answer these questions, Ceres Tag first had to solve the biggest challenge, and it was not to get cattle to carry their mobile phones and pay mobile phone bills to generate the revenue needed to get greater coverage. David and Melita knew they needed help developing a new method of tracking, but in a way that aligned with current livestock practices. Their idea of a satellite connected ear tag came to life through close partnership and collaboration with CSIRO, Australia’s national science agency. They brought expertise to the problem, and rallied together teams of experts across public and private partnerships, never accepting “that’s not been done before” as a reason to curtail their innovation.

 

Figure 1: How Ceres Tag works in practice

Thinking Big: Ceres Tag Protocol

Melita and David constructed their idea and brought the physical hardware to reality. This meant finding strategic partners to build hardware, connectivity partners that provided global coverage at a cost that was tenable to cattle operators, integrations with existing herd management platforms and a global infrastructure backbone that allowed their solution to scale. They showed resilience, tenacity and persistence that are often traits attributed to startup founders and lifelong agricultural advocates. Explaining the purpose of the product often requires some unique approaches to defining the value proposition while fundamentally breaking down existing ways of thinking about things. As David explained, “We have an internal saying, ‘As per Ceres Tag protocol …..’ to help people to see the problem through a new lens.” This persistence led to the creation of an easy to use ear tagging applicator and a two-prong smart ear tag. The ear tag connects via satellite for data transmission, providing connectivity to more than 120 countries in the world and 80% of the earth’s surface.

The Ceres Tag applicator, smart tag, and global satellite connectivity

Figure 2: The Ceres Tag applicator, smart tag, and global satellite connectivity

Unlocking the blocker: data-driven insights

With the hardware and connectivity challenges solved, Ceres Tag turned to how the data-driven insights would be delivered. The company needed to select a technology partner that understood their global customer base and what it means to deliver low-latency web, mobile, and API-driven solutions. David once again knew the power in leveraging the team around him to find the best solution. The evaluation of cloud providers was led by Lewis Frost, COO, and Heidi Perrett, Data Platform Manager. Ceres Tag ultimately chose to partner with AWS and use the AWS Cloud as the backbone for the Ceres Tag Management System.

Ceres Tag conceptual diagram

Figure 3: Ceres Tag conceptual diagram

The Ceres Tag Management System houses the data and metadata about each tag, enabling the traceability of that tag throughout each animal's life cycle. This includes verification of who should have access to its health records and history. Based on the nature of the data being stored and transmitted, security of the application is critical. As a startup, it was important for Ceres Tag to keep costs low but also to be able to scale based on growth and usage as it expands globally.

Ceres Tag is able to quickly respond to customers regardless of geography, routing traffic to the appropriate endpoint. They accomplish this by leveraging Amazon CloudFront as the content delivery network (CDN) for distributing front-end requests and Amazon Route 53 for DNS routing. A multi-Availability Zone deployment and an Application Load Balancer distribute incoming traffic across multiple targets, increasing the availability of the application.

Ceres Tag is using AWS Fargate to provide a serverless compute environment that matches the pay-as-you-go usage-based model. AWS also provides many advanced security features and architecture guidance that has helped to implement and evaluate best practice security posture across all of the environments. Authentication is handled by Amazon Cognito, which allows Ceres Tag to scale easily by supporting millions of users. It leverages easy-to-use features like sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0.

The data captured from the ear tags on the cattle will be ingested via AWS PrivateLink. By providing a private endpoint to access your services, AWS PrivateLink ensures your traffic is not exposed to the public internet. It also makes it easy to connect services across different accounts and VPCs to significantly simplify your network architecture. By leveraging a satellite connectivity provider running on AWS, Ceres Tag will benefit from the AWS Ground Station infrastructure used by that provider, in addition to the streaming IoT database.

 

How to implement password-less authentication with Amazon Cognito and WebAuthn

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-implement-password-less-authentication-with-amazon-cognito-and-webauthn/

In this blog post, I show you how to offer a password-less authentication experience to your customers. To do this, you’ll allow physical security keys or platform authenticators (like finger-print scanners) to be used as the authentication factor to your web or mobile applications that use Amazon Cognito user pools for authentication.

An Amazon Cognito user pool is a user directory that Amazon Web Services (AWS) customers use to manage their customer identities. Web Authentication (WebAuthn) is a W3C standard that lets users authenticate to web applications using public-key cryptography. Using public-key cryptography enables you to implement a stronger authentication mechanism that’s less dependent on passwords.

Mobile and web applications can use WebAuthn together with browser and device support for the Client-To-Authenticator-Protocol (CTAP) to implement Fast ID Online (FIDO) authentication. To learn more about the flow of WebAuthn and CTAP, visit the FIDO Alliance.

How it works

Amazon Cognito user pools allow you to build a custom authentication flow that uses AWS Lambda functions to authenticate users based on one or more challenge/response cycles. You can use this flow to implement password-less authentication that is based on custom challenges. To use this flow to implement FIDO authentication, you need to create credentials during the registration phase and reference these credentials in the user's profile. You can then validate these credentials during the authentication phase in a custom challenge.

During registration, a new set of credentials that is bound to your application (the relying party) is created through a FIDO authenticator, for example, a platform authenticator with a biometric sensor or a roaming authenticator like a physical security key. The private key of this credential set remains on the authenticator; the public key, together with a credential identifier, is saved in a custom attribute that's part of the user profile in Amazon Cognito.

During authentication, an Amazon Cognito custom authentication flow is used to implement authentication through a custom challenge. The application prompts the user to sign in using the authenticator that they used during registration. The response from the authenticator is then passed as a challenge response to Amazon Cognito and verified using the stored public key.

In this password-less flow, the private key never leaves the physical device. The authenticator also validates that the relying party in the authentication request matches the relying party that was used to create the credentials. This combination provides a more secure authentication flow that uses stronger credentials, protects users from phishing, and provides a better user experience.

About the demo project

This blog post and the diagrams below explain a scenario that uses FIDO as the only authentication factor, to implement password-less authentication. To help with implementation details, I created a project to demonstrate WebAuthn integration with Amazon Cognito that provides sample code for three scenarios:

  • A scenario that uses FIDO as the only factor (password-less)
  • A scenario that uses FIDO as a second factor (with password)
  • A scenario that lets users sign-in with only a password

This project is only a demonstration and shouldn't be used as-is in production environments. When using FIDO as an authentication factor, it is a best practice to allow users to register multiple authenticators, and you need to implement an account recovery workflow in case of a lost authenticator.

Implementing FIDO Authentication

Let’s take a deeper dive into the design and components involved in implementing this solution. To deploy this project in your development environment, follow the instructions in the WebAuthn integration with Amazon Cognito project.

Creating and configuring user pool

The first step is to create a Cognito user pool and triggers that orchestrate a custom authentication flow. You do that by deploying the CloudFormation stack that will create all resources as explained in the demo project.

A few implementation details to note about the user pool:

  • The template creates a user pool, app client, triggers, and Lambda functions to use for custom authentication.
  • The template creates a custom attribute called publicKeyCred. This is the custom attribute that holds a base64 encoded representation of the credential identifier and public key for the user’s authenticator.
  • The app client defines what authentication flows are allowed. You can limit allowed flows according to your use-case. To support FIDO authentication, you must allow CUSTOM_AUTH flow.
  • The app client has “write” permissions to the custom attribute publicKeyCred but not “read” permissions. This allows your application to write the attribute during registration or profile updates but excludes this attribute from the user's id_token. Since this attribute is considered back-end data that is only used during authentication, it doesn't need to be part of the user profile in the id_token.

User registration flow

The registration flow needs to create credentials using the authenticator and store the public key in the user's profile. Let's take a closer look at the sequence of calls and the components involved to implement this flow.
 

Figure 1: WebAuthn user registration process

Figure 1: WebAuthn user registration process

  1. The user navigates to your application, www.example.com (relying party), and creates an account. A request is sent to the relying party to build a credentials options object and send it back to the browser. (in the demo project, this starts in the createCredentials function in webauthn-client.js and creates the credentials options object by making a call to createCredRequest in authn.js)
  2. The browser uses built-in WebAuthn APIs to create the new credentials with an available authenticator, using the credentials options object that was created in the first step. This is done by making a call to the navigator.credentials.create API, which is available in browsers and platforms that support FIDO and WebAuthn (a minimal sketch of this call follows the list).
  3. The user experience in this step depends on the OS, browser, and the authenticator. For example, the browser could prompt the user to attach a security key or, on devices that support it, to use a biometric scanner.
     
    Figure 2: User registration and browser alert to use an authenticator in Firefox

    Figure 2: User registration and browser alert to use an authenticator in Firefox

  4. The user interacts with an authenticator (by touching a security key or scanning a finger on a Touch ID device), which generates new credentials bound to the relying party and returns a response object to the browser.
  5. The browser sends a credential response object to the relying party to parse and validate the response on the server side. The credential identifier and public key are extracted from the credential response. At this step, your application can also check additional authenticator data and use it to make authentication decisions. For example, your application can check whether the authenticator was able to verify the user's identity through a PIN or biometrics (UV flag) or whether only user presence (UP flag) was verified. In the demo project, this is still part of the createCredentials function, and server-side parsing and validation are done in parseCredResponse, which is implemented in authn.js.
  6. To complete the user registration, the browser passes the profile attributes that were collected during registration to Amazon Cognito APIs as custom attributes. This step is performed in the signUp function in webauthn-client.js.
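
As referenced in step 2, the following browser-side sketch shows what the navigator.credentials.create call can look like. The relying party, user, and parameter values are illustrative; in the demo project the options object is built by createCredRequest and returned to the browser.

// Sketch of browser-side credential creation (step 2). Values are illustrative;
// in the demo project the options object is built server-side by createCredRequest.
async function createCredentialSketch(challengeBytes, userIdBytes, userName) {
    const credential = await navigator.credentials.create({
        publicKey: {
            challenge: challengeBytes,                    // Uint8Array from the relying party
            rp: { name: 'Example App', id: 'www.example.com' },
            user: {
                id: userIdBytes,                          // Uint8Array identifying the user
                name: userName,
                displayName: userName,
            },
            pubKeyCredParams: [{ type: 'public-key', alg: -7 }],   // ES256
            authenticatorSelection: { userVerification: 'preferred' },
            timeout: 60000,
            attestation: 'none',
        },
    });

    // credential.rawId and credential.response are then sent to the relying party,
    // which extracts the credential identifier and public key and stores them in the
    // custom attribute during Amazon Cognito sign-up.
    return credential;
}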

At the conclusion of this process, a new user will be created in Amazon Cognito and the custom attribute “publicKeyCred” will be populated with a base64 encoded string that includes a credential identifier and the public key generated by the authenticator. This attribute is not considered secret or sensitive data; rather, it contains the public key that will be used to verify the authenticator response during subsequent authentications.

User authentication flow

The following diagram describes the custom authentication flow to implement password-less authentication.
 

Figure 3: WebAuthn user authentication process

Figure 3: WebAuthn user authentication process

  1. The user provides their user name and selects the sign-in button. A script running in the browser starts the sign-in process using the Amazon Cognito InitiateAuth API, passing the user name and indicating that the authentication flow is CUSTOM_AUTH. In the demo project, this part is performed in the signIn function in webauthn-client.js.
  2. The Amazon Cognito service passes control to the Define Auth Challenge Lambda trigger. The trigger then determines that this is the first step in the authentication and returns CUSTOM_CHALLENGE as the next challenge to the user.
  3. Control then moves to the Create Auth Challenge Lambda trigger to create the custom challenge. This trigger creates a random challenge (a 64-byte random string), extracts the credential identifier from the user profile (the value passed initially during the sign-up process), combines them, and returns them as a custom challenge to the client. This is performed in the CreateAuthChallenge Lambda function.
  4. The browser then prompts the user to activate an authenticator. At this stage, the authenticator verifies that credentials exist for the identifier and that the relying party matches the one that is bound to the credentials. This is implemented by making a call to the navigator.credentials.get API, which is available in browsers and devices that support FIDO2 and WebAuthn (a minimal sketch of this call follows the list).
     
    Figure 4: Authentication and browser prompt to use a registered authenticator

    Figure 4: Authentication and browser prompt to use a registered authenticator

  5. If credentials exist and the relying party is verified, the authenticator requests user presence or verification. Depending on the type of authenticator, user verification through biometrics or a PIN code is performed, and the credentials response is passed back to the browser.
     
    Figure 5: Authentication examples from different browsers and platforms

    Figure 5: Authentication examples from different browsers and platforms

  6. The signIn function continues the sign-in process by calling the respondToAuthChallenge API and sending the credentials response to Amazon Cognito.
  7. Amazon Cognito sends the response to the Verify Auth Challenge Lambda trigger. This trigger extracts the public key from the user profile, parses and validates the credentials response, and if the signature is valid, it responds with success. This is performed in VerifyAuthChallenge Lambda trigger.
  8. Lastly, Amazon Cognito sends the control again to Define Auth Challenge to determine the next step. If the results from Verify Auth Challenge indicate a successful response, authentication succeeds and Amazon Cognito responds with ID, access, and refresh tokens.
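
As referenced in step 4, the following browser-side sketch shows the navigator.credentials.get call. The challenge and credential identifier values come from the custom challenge parameters returned by the Create Auth Challenge trigger; everything else is illustrative.

// Sketch of browser-side assertion (step 4). challengeBytes and credentialIdBytes
// come from the CUSTOM_CHALLENGE parameters returned by the Create Auth Challenge trigger.
async function getAssertionSketch(challengeBytes, credentialIdBytes) {
    const assertion = await navigator.credentials.get({
        publicKey: {
            challenge: challengeBytes,                        // Uint8Array random challenge
            allowCredentials: [
                { type: 'public-key', id: credentialIdBytes } // registered credential identifier
            ],
            userVerification: 'preferred',
            timeout: 60000,
        },
    });

    // assertion.response (authenticatorData, clientDataJSON, signature) is serialized
    // and sent back to Amazon Cognito with respondToAuthChallenge for verification.
    return assertion;
}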

Conclusion

When building customer-facing applications, you as the application owner and developer need to balance security with usability. Reducing the risk of account takeover and phishing is based on using strong credentials, strong second factors, and minimizing the role of passwords. The flexibility of the Amazon Cognito custom authentication flow integrated with WebAuthn offers a technical path to make this possible, in addition to offering a better user experience to your customers.

Check out the WebAuthn with Amazon Cognito project for code samples and deployment steps. Deploy it in your development environment to see this integration in action, and go build an awesome password-less experience in your application.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is a Senior Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

How to configure Duo multi-factor authentication with Amazon Cognito

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-configure-duo-multi-factor-authentication-with-amazon-cognito/

Adding multi-factor authentication (MFA) reduces the risk of user account take-over, phishing attacks, and password theft. Adding MFA while providing a frictionless sign-in experience requires you to offer a variety of MFA options that support a wide range of users and devices. Let’s see how you can achieve that with Amazon Cognito and Duo Multi-Factor Authentication (MFA).

Amazon Cognito user pools are user directories that are used by Amazon Web Services (AWS) customers to manage the identities of their customers and to add sign-in, sign-up and user management features to their customer-facing web and mobile applications. Duo Security is an APN Partner that provides unified access security and multi-factor authentication solutions.

In this blog post, I show you how to use Amazon Cognito custom authentication flow to integrate Duo Multi-Factor Authentication (MFA) into your sign-in flow and offer a wide range of MFA options to your customers. Some second factors available through Duo MFA are mobile phone SMS passcodes, approval of login via phone call, push-notification-based approval on smartphones, biometrics on devices that support it, and security keys that can be attached via USB.

How it works

Amazon Cognito user pools enable you to build a custom authentication flow that authenticates users based on one or more challenge/response cycles. You can use this flow to integrate Duo MFA into your authentication as a custom challenge.

Duo Web offers a software development kit to make it easier for you to integrate your web applications with Duo MFA. You need an account with Duo and an application to protect (which can be created from the Duo admin dashboard). When you create your application in the Duo admin dashboard, note the integration key (ikey), secret key (skey), and API hostname. These details, together with a random string (akey) that you generate, are the primary factors used to integrate your Amazon Cognito user pool with Duo MFA.

Note: ikey, skey, and akey are referred to as Duo keys.

Duo MFA will be integrated into the sign-in flow as a custom challenge. To do that, you need to generate a signed challenge request using Duo APIs and use it to load Duo MFA in an iframe and request the user’s second factor. When the challenge is answered by the user, a signed response is returned to your application and sent to Amazon Cognito for verification. If the response is valid then the MFA challenge is successful.

Let’s take a closer look at the sequence of calls and components involved in this flow.

Implementation details

In this section, I walk you through the end-to-end flow of integrating Duo MFA with Amazon Cognito using a custom authentication flow. To help you with this integration, I built a demo project that provides deployment steps and sample code to create a working demo in your environment.

Create and configure a user pool

The first step is to create the AWS resources needed for the demo. You can do that by deploying the AWS CloudFormation stack as described in the demo project.

A few implementation details to be aware of:

  • The template creates an Amazon Cognito user pool, application client, and AWS Lambda triggers that are used for the custom authentication.
  • The template also accepts ikey, skey, and akey as inputs. For security, the parameters are masked in the AWS CloudFormation console. These parameters are stored in a secret in AWS Secrets Manager with a resource policy that allows relevant Lambda functions read access to that secret.
  • The Duo keys are loaded from Secrets Manager when the create auth challenge and verify auth challenge Lambda triggers initialize, and are used to create the signed request and verify the signed response.

Authentication flow

Figure 1: User authentication process for the custom authentication flow

Figure 1: User authentication process for the custom authentication flow

The preceding sequence diagram (Figure 1) illustrates the sequence of calls to sign in a user, which are as follows:

  1. In your application, the user is presented with a sign-in UI that captures their user name and password and starts the sign-in flow. A script—running in the browser—starts the sign-in process using the Amazon Cognito authenticateUser API with CUSTOM_AUTH set as the authentication flow. This validates the user’s credentials using Secure Remote Password (SRP) protocol and moves on to the second challenge if the credentials are valid.

    Note: The authenticateUser API automatically starts the authentication process with SRP. The first challenge that’s sent to Amazon Cognito is SRP_A. This is followed by PASSWORD_VERIFIER to verify the user’s credentials.

  2. After the SRP challenge step, the define auth challenge Lambda trigger will return CUSTOM_CHALLENGE and this will move control to the create auth challenge trigger.
  3. The create auth challenge Lambda trigger creates a Duo signed request using the Duo keys plus the user name and returns the signed request as a challenge to the client. Here is sample code showing what the create auth challenge trigger could look like (the DUO_SECRET_NAME environment variable below is a placeholder for however you supply the secret name):

    const AWS = require('aws-sdk');
    const duo_web = require('duo_web');

    const secretsManagerClient = new AWS.SecretsManager();
    const secretName = process.env.DUO_SECRET_NAME; // placeholder: name of the secret that stores the Duo keys
    let secret, ikey, skey, akey;

    exports.handler = async (event) => {

        // Load the Duo keys from Secrets Manager once and store them in global variables
        // so that warm invocations of the function can reuse them.
        if (ikey == null || skey == null || akey == null) {
            const promise = new Promise(function(resolve, reject) {
                secretsManagerClient.getSecretValue({SecretId: secretName}, function(err, data) {
                    if (err) { return reject(err); }
                    if ('SecretString' in data) {
                        secret = JSON.parse(data.SecretString);
                        ikey = secret['duo-ikey'];
                        skey = secret['duo-skey'];
                        akey = secret['duo-akey'];
                    }
                    resolve();
                });
            });

            await promise;
        }

        // Create the Duo signed request for this user and return it as a public
        // challenge parameter so the client can initialize the Duo iframe.
        var username = event.userName;
        var sig_request = duo_web.sign_request(ikey, skey, akey, username);

        event.response.publicChallengeParameters = {
            sig_request: sig_request
        };

        return event;
    };
    

  4. The client initializes the Duo Web library with the signed request and displays Duo MFA in an iframe to request a second factor from the user. To initialize the Duo library, you need the api_hostname that is generated for your application in the Duo dashboard, the sign-request that was received as a challenge, and a callback function to invoke after the MFA step is completed by the user. This is done on the client side as follows:
          // Render the Duo MFA iframe placeholder (requires jQuery and the Duo Web JavaScript library).
          $("#duo-mfa").html('<iframe id="duo_iframe" title="Two-Factor Authentication"></iframe>');

          // Initialize Duo with the API hostname, the signed request received as the custom
          // challenge, and a callback that is invoked with the signed response.
          Duo.init({
            'host': api_hostname,
            'sig_request': challengeParameters.sig_request,
            'submit_callback': mfa_callback
          });
    

  5. Through the Duo iframe, the user can set up their MFA preferences and respond to an MFA challenge. After successful MFA setup, a signed response from the Duo Web library will be returned to the client and passed to the callback function that was provided in Duo.init call.
     
    Figure 2: The first time a user signs in, Duo MFA displays a Start setup screen

    Figure 2: The first time a user signs in, Duo MFA displays a Start setup screen

  6. The client sends the Duo signed response to the Amazon Cognito service as a challenge response.
  7. Amazon Cognito sends the response to the verify auth challenge Lambda trigger, which uses the Duo keys and the user name to verify the response.

    const duo_web = require('duo_web');
    let ikey, skey, akey; // populated from Secrets Manager, as in the create auth challenge trigger

    exports.handler = async (event) => {

        // Load the Duo keys from Secrets Manager into the global variables above,
        // using the same logic shown in the create auth challenge trigger.

        var username = event.userName;

        // Get the signed response returned from the Duo iframe and verify it with the
        // Duo Web library; verify_response returns the user name when the response is valid.
        const sig_response = event.request.challengeAnswer;
        const verificationResult = duo_web.verify_response(ikey, skey, akey, sig_response);

        if (verificationResult === username) {
            event.response.answerCorrect = true;
        } else {
            event.response.answerCorrect = false;
        }
        return event;
    };
    

  8. Validation results and the current state are passed once again to the define auth challenge Lambda trigger. If the user response is valid, then the Duo MFA challenge is successful. You can then decide to introduce additional challenges to the user or issue tokens and complete the authentication process (a sketch of this trigger follows the list).
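
The define auth challenge trigger referenced in steps 2 and 8 isn't shown above, so here is a minimal sketch of the decision logic such a trigger typically implements for this SRP-plus-Duo flow. The handler in the demo project may differ in its details.

// Sketch of a define auth challenge trigger for the SRP + Duo custom flow.
// This only illustrates the state machine; the demo project's handler may differ.
exports.handler = async (event) => {
    const session = event.request.session || [];
    const last = session[session.length - 1];

    if (!last || last.challengeName === 'SRP_A') {
        // Continue the built-in SRP password verification first.
        event.response.challengeName = 'PASSWORD_VERIFIER';
        event.response.issueTokens = false;
        event.response.failAuthentication = false;
    } else if (last.challengeName === 'PASSWORD_VERIFIER' && last.challengeResult === true) {
        // Password verified: ask for the Duo MFA custom challenge.
        event.response.challengeName = 'CUSTOM_CHALLENGE';
        event.response.issueTokens = false;
        event.response.failAuthentication = false;
    } else if (last.challengeName === 'CUSTOM_CHALLENGE' && last.challengeResult === true) {
        // Duo MFA succeeded: issue tokens and complete authentication.
        event.response.issueTokens = true;
        event.response.failAuthentication = false;
    } else {
        // Any failed step ends the authentication attempt.
        event.response.issueTokens = false;
        event.response.failAuthentication = true;
    }
    return event;
};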

Conclusion

As you build your mobile or web application, keep in mind that using multi-factor authentication is an effective and recommended approach to protect your customers from account takeover, phishing, and the risks of weak or compromised passwords. Making multi-factor authentication easy for your customers enables you to offer an authentication experience that protects their accounts but doesn't slow them down.

Visit the security pillar of AWS Well-Architected Framework to learn more about AWS security best practices and recommendations.

In this blog post, I showed you how to integrate Duo MFA with an Amazon Cognito user pool. Visit the demo application and review the code samples in it to learn how to integrate this with your application.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is a Senior Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

How to add authentication to a single-page web application with Amazon Cognito OAuth2 implementation

Post Syndicated from George Conti original https://aws.amazon.com/blogs/security/how-to-add-authentication-single-page-web-application-with-amazon-cognito-oauth2-implementation/

In this post, I’ll be showing you how to configure Amazon Cognito as an OpenID provider (OP) with a single-page web application.

This use case describes using Amazon Cognito to integrate with an existing authorization system following the OpenID Connect (OIDC) specification. OIDC is an identity layer on top of the OAuth 2.0 protocol to enable clients to verify the identity of users. Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Some key reasons customers select Amazon Cognito include:

  • Simplicity of implementation: The console is very intuitive; it takes a short time to understand how to configure and use Amazon Cognito. Amazon Cognito also has key out-of-the-box functionality, including social sign-in, multi-factor authentication (MFA), forgotten password support, and infrastructure as code (AWS CloudFormation) support.
  • Ability to customize workflows: Amazon Cognito offers the option of a hosted UI where users can sign-in directly to Amazon Cognito or sign-in via social identity providers such as Amazon, Google, Apple, and Facebook. The Amazon Cognito hosted UI and workflows help save your team significant time and effort.
  • OIDC support: Amazon Cognito can securely pass user profile information to an existing authorization system following the OIDC authorization code flow. The authorization system uses the user profile information to secure access to the app.

Amazon Cognito overview

Amazon Cognito follows the OIDC specification to authenticate users of web and mobile apps. Users can sign in directly through the Amazon Cognito hosted UI or through a federated identity provider, such as Amazon, Facebook, Apple, or Google. The hosted UI workflows include sign-in and sign-up, password reset, and MFA. Since not all customer workflows are the same, you can customize Amazon Cognito workflows at key points with AWS Lambda functions, allowing you to run code without provisioning or managing servers. After a user authenticates, Amazon Cognito returns standard OIDC tokens. You can use the user profile information in the ID token to grant your users access to your own resources or you can use the tokens to grant access to APIs hosted by Amazon API Gateway. You can also exchange the tokens for temporary AWS credentials to access other AWS services.

Figure 1: Amazon Cognito sign-in flow

Figure 1: Amazon Cognito sign-in flow

OAuth 2.0 and OIDC

OAuth 2.0 is an open standard that allows a user to delegate access to their information to other websites or applications without handing over credentials. OIDC is an identity layer on top of OAuth 2.0 that uses OAuth 2.0 flows. OAuth 2.0 defines a number of flows to manage the interaction between the application, user, and authorization server. The right flow to use depends on the type of application.

The client credentials flow is used in machine-to-machine communications. You can use the client credentials flow to request an access token to access your own resources, which means you can use this flow when your app is requesting the token on its own behalf, not on behalf of a user. The authorization code grant flow is used to return an authorization code that is then exchanged for user pool tokens. Because the tokens are never exposed directly to the user, they are less likely to be shared broadly or accessed by an unauthorized party. However, a custom application is required on the back end to exchange the authorization code for user pool tokens. For security reasons, we recommend the authorization code flow with Proof Key for Code Exchange (PKCE) for public clients, such as single-page apps or native mobile apps.

The following table shows recommended flows per application type.

Application | Flow | Description
Machine | Client credentials | Use this flow when your application is requesting the token on its own behalf, not on behalf of the user
Web app on a server | Authorization code grant | A regular web app on a web server
Single-page app | Authorization code grant with PKCE | An app running in the browser, such as JavaScript
Mobile app | Authorization code grant with PKCE | iOS or Android app

Securing the authorization code flow

Amazon Cognito can help you achieve compliance with regulatory frameworks and certifications, but it’s your responsibility to use the service in a way that remains compliant and secure. You need to determine the sensitivity of the user profile data in Amazon Cognito; adhere to your company’s security requirements, applicable laws and regulations; and configure your application and corresponding Amazon Cognito settings appropriately for your use case.

Note: You can learn more about regulatory frameworks and certifications at AWS Services in Scope by Compliance Program. You can download compliance reports from AWS Artifact.

We recommend that you use the authorization code flow with PKCE for single-page apps. Applications that use PKCE generate a random code verifier that’s created for every authorization request. Proof Key for Code Exchange by OAuth Public Clients has more information on use of a code verifier. In the following sections, I will show you how to set up the Amazon Cognito authorization endpoint for your app to support a code verifier.

The authorization code flow

In OpenID terms, the app is the relying party (RP) and Amazon Cognito is the OP. The flow for the authorization code flow with PKCE is as follows (a browser-side sketch of the PKCE pieces follows the list):

  1. The user enters the app home page URL in the browser and the browser fetches the app.
  2. The app generates the PKCE code challenge and redirects the request to the Amazon Cognito OAuth2 authorization endpoint (/oauth2/authorize).
  3. Amazon Cognito responds back to the user’s browser with the Amazon Cognito hosted sign-in page.
  4. The user signs in with their user name and password, signs up as a new user, or signs in with a federated sign-in. After a successful sign-in, Amazon Cognito returns the authorization code to the browser, which redirects the authorization code back to the app.
  5. The app sends a request to the Amazon Cognito OAuth2 token endpoint (/oauth2/token) with the authorization code, its client credentials, and the PKCE verifier.
  6. Amazon Cognito authenticates the app with the supplied credentials, validates the authorization code, validates the request with the code verifier, and returns the OpenID tokens, access token, ID token, and refresh token.
  7. The app validates the OpenID ID token and then uses the user profile information (claims) in the ID token to provide access to resources. (Optional) The app can use the access token to retrieve the user profile information from the Amazon Cognito user information endpoint (/userInfo).
  8. Amazon Cognito returns the user profile information (claims) about the authenticated user to the app. The app then uses the claims to provide access to resources.

The following diagram shows the authorization code flow with PKCE.

Figure 2: Authorization code flow

Implementing an app with Amazon Cognito authentication

Now that you’ve learned about Amazon Cognito OAuth implementation, let’s create a working example app that uses Amazon Cognito OAuth implementation. You’ll create an Amazon Cognito user pool along with an app client, the app, an Amazon Simple Storage Service (Amazon S3) bucket, and an Amazon CloudFront distribution for the app, and you’ll configure the app client.

Step 1. Create a user pool

Start by creating your user pool with the default configuration.

Create a user pool:

  1. Go to the Amazon Cognito console and select Manage User Pools. This takes you to the User Pools Directory.
  2. Select Create a user pool in the upper corner.
  3. Enter a Pool name, select Review defaults, and select Create pool.
  4. Copy the Pool ID, which will be used later to create your single-page app. It will be something like region_xxxxx. You will use it to replace the variable YOUR_USERPOOL_ID in a later step. (Optional) You can add additional features to the user pool, but this demonstration uses the default configuration. For more information, see the Amazon Cognito documentation.

The following figure shows you entering the user pool name.

Figure 3: Enter a name for the user pool

The following figure shows the resulting user pool configuration.

Figure 4: Completed user pool configuration

Step 2. Create a domain name

The Amazon Cognito hosted UI lets you use your own domain name or you can add a prefix to the Amazon Cognito domain. This example uses an Amazon Cognito domain with a prefix.

Create a domain name:

  1. Sign in to the Amazon Cognito console, select Manage User Pools, and select your user pool.
  2. Under App integration, select Domain name.
  3. In the Amazon Cognito domain section, add your Domain prefix (for example, myblog).
  4. Select Check availability. If your domain isn’t available, change the domain prefix and try again.
  5. When your domain is confirmed as available, copy the Domain prefix to use when you create your single-page app. You will use it to replace the variable YOUR_COGNITO_DOMAIN_PREFIX in a later step.
  6. Choose Save changes.

The following figure shows creating an Amazon Cognito hosted domain.

Figure 5: Creating an Amazon Cognito hosted UI domain

Step 3. Create an app client

Now create an app client for the user pool. An app client is where you register your app with the user pool. Generally, you create an app client for each app platform. For example, you might create an app client for a single-page app and another app client for a mobile app. Each app client has its own ID, authentication flows, and permissions to access user attributes.

Create an app client:

  1. Sign in to the Amazon Cognito console, select Manage User Pools, and select your user pool.
  2. Under General settings, select App clients.
  3. Choose Add an app client.
  4. Enter a name for the app client in the App client name field.
  5. Uncheck Generate client secret and accept the remaining default configurations.

    Note: The client secret is used to authenticate the app client to the user pool. Generate client secret is unchecked because you don’t want to send the client secret on the URL using client-side JavaScript. The client secret is used by applications that have a server-side component that can secure the client secret.

  6. Choose Create app client as shown in the following figure.

    Figure 6: Create and configure an app client

  7. Copy the App client ID. You will use it to replace the variable YOUR_APPCLIENT_ID in a later step.

The following figure shows the App client ID which is automatically generated when the app client is created.

Figure 7: App client configuration

Step 4. Create an Amazon S3 website bucket

Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. We use Amazon S3 here to host a static website.

Create an Amazon S3 bucket with the following settings:

  1. Sign in to the AWS Management Console and open the Amazon S3 console.
  2. Choose Create bucket to start the Create bucket wizard.
  3. In Bucket name, enter a DNS-compliant name for your bucket. You will use this in a later step to replace the YOURS3BUCKETNAME variable.
  4. In Region, choose the AWS Region where you want the bucket to reside.

    Note: It’s recommended to create the Amazon S3 bucket in the same AWS Region as Amazon Cognito.

  5. Look up the region code from the Region table (for example, US East (N. Virginia) has a region code of us-east-1). You will use the region code to replace the variable YOUR_REGION in a later step.
  6. Choose Next.
  7. Select the Versioning checkbox.
  8. Choose Next.
  9. Choose Next.
  10. Choose Create bucket.
  11. Select the bucket you just created from the Amazon S3 bucket list.
  12. Select the Properties tab.
  13. Choose Static website hosting.
  14. Choose Use this bucket to host a website.
  15. For the index document, enter index.html and then choose Save.

Step 5. Create a CloudFront distribution

Amazon CloudFront is a fast content delivery network service that helps securely deliver data, videos, applications, and APIs to customers globally with low latency and high transfer speeds—all within a developer-friendly environment. In this step, we use CloudFront to set up an HTTPS-enabled domain for the static website hosted on Amazon S3.

Create a CloudFront distribution (web distribution) with the following modified default settings:

  1. Sign into the AWS Management Console and open the CloudFront console.
  2. Choose Create Distribution.
  3. On the first page of the Create Distribution Wizard, in the Web section, choose Get Started.
  4. Choose the Origin Domain Name from the dropdown list. It will be YOURS3BUCKETNAME.s3.amazonaws.com.
  5. For Restrict Bucket Access, select Yes.
  6. For Origin Access Identity, select Create a New Identity.
  7. For Grant Read Permission on Bucket, select Yes, Update Bucket Policy.
  8. For the Viewer Protocol Policy, select Redirect HTTP to HTTPS.
  9. For Cache Policy, select Managed-Caching Disabled.
  10. Set the Default Root Object to index.html. (Optional) Add a comment. Comments are a good place to describe the purpose of your distribution, for example, “Amazon Cognito SPA.”
  11. Select Create Distribution. The distribution will take a few minutes to create and update.
  12. Copy the Domain Name. This is the CloudFront distribution domain name, which you will use in a later step as the DOMAINNAME value in the YOUR_REDIRECT_URI variable.

Step 6. Create the app

Now that you’ve created the Amazon S3 bucket for static website hosting and the CloudFront distribution for the site, you’re ready to use the code that follows to create a sample app.

Use the following information from the previous steps:

  1. YOUR_COGNITO_DOMAIN_PREFIX is from Step 2.
  2. YOUR_REGION is the AWS region you used in Step 4 when you created your Amazon S3 bucket.
  3. YOUR_APPCLIENT_ID is the App client ID from Step 3.
  4. YOUR_USERPOOL_ID is the Pool ID from Step 1.
  5. YOUR_REDIRECT_URI, which is https://DOMAINNAME/index.html, where DOMAINNAME is your domain name from Step 5.

Create userprofile.js

Use the following text to create the userprofile.js file. Substitute the values you copied in the previous steps for the variables in the text.

var myHeaders = new Headers();
myHeaders.set('Cache-Control', 'no-store');
var urlParams = new URLSearchParams(window.location.search);
var tokens;
var domain = "YOUR_COGNITO_DOMAIN_PREFIX";
var region = "YOUR_REGION";
var appClientId = "YOUR_APPCLIENT_ID";
var userPoolId = "YOUR_USERPOOL_ID";
var redirectURI = "YOUR_REDIRECT_URI";

//Convert Payload from Base64-URL to JSON
const decodePayload = payload => {
  const cleanedPayload = payload.replace(/-/g, '+').replace(/_/g, '/');
  const decodedPayload = atob(cleanedPayload)
  const uriEncodedPayload = Array.from(decodedPayload).reduce((acc, char) => {
    const uriEncodedChar = ('00' + char.charCodeAt(0).toString(16)).slice(-2)
    return `${acc}%${uriEncodedChar}`
  }, '')
  const jsonPayload = decodeURIComponent(uriEncodedPayload);

  return JSON.parse(jsonPayload)
}

//Parse JWT Payload
const parseJWTPayload = token => {
    const [header, payload, signature] = token.split('.');
    const jsonPayload = decodePayload(payload)

    return jsonPayload
};

//Parse JWT Header
const parseJWTHeader = token => {
    const [header, payload, signature] = token.split('.');
    const jsonHeader = decodePayload(header)

    return jsonHeader
};

//Generate a Random String
const getRandomString = () => {
    const randomItems = new Uint32Array(28);
    crypto.getRandomValues(randomItems);
    // Use a regular array so that map can return hex strings (mapping over the typed array would coerce them back to numbers)
    const binaryStringItems = Array.from(randomItems, dec => `0${dec.toString(16).substr(-2)}`);
    return binaryStringItems.reduce((acc, item) => `${acc}${item}`, '');
}

//Encrypt a String with SHA256
const encryptStringWithSHA256 = async str => {
    const PROTOCOL = 'SHA-256'
    const textEncoder = new TextEncoder();
    const encodedData = textEncoder.encode(str);
    return crypto.subtle.digest(PROTOCOL, encodedData);
}

//Convert Hash to Base64-URL
const hashToBase64url = arrayBuffer => {
    const items = new Uint8Array(arrayBuffer)
    const stringifiedArrayHash = items.reduce((acc, i) => `${acc}${String.fromCharCode(i)}`, '')
    const decodedHash = btoa(stringifiedArrayHash)

    const base64URL = decodedHash.replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
    return base64URL
}

// Main Function
async function main() {
  var code = urlParams.get('code');

  //If code not present then request code else request tokens
  if (code == null){

    // Create random "state"
    var state = getRandomString();
    sessionStorage.setItem("pkce_state", state);

    // Create PKCE code verifier
    var code_verifier = getRandomString();
    sessionStorage.setItem("code_verifier", code_verifier);

    // Create code challenge
    var arrayHash = await encryptStringWithSHA256(code_verifier);
    var code_challenge = hashToBase64url(arrayHash);
    sessionStorage.setItem("code_challenge", code_challenge)

    // Redirect the user-agent to the /authorize endpoint
    location.href = "https://"+domain+".auth."+region+".amazoncognito.com/oauth2/authorize?response_type=code&state="+state+"&client_id="+appClientId+"&redirect_uri="+redirectURI+"&scope=openid&code_challenge_method=S256&code_challenge="+code_challenge;
  } else {

    // Verify state matches
    state = urlParams.get('state');
    if(sessionStorage.getItem("pkce_state") != state) {
        alert("Invalid state");
    } else {

      // Fetch OAuth2 tokens from Cognito
      code_verifier = sessionStorage.getItem('code_verifier');
      await fetch("https://"+domain+".auth."+region+".amazoncognito.com/oauth2/token?grant_type=authorization_code&client_id="+appClientId+"&code_verifier="+code_verifier+"&redirect_uri="+redirectURI+"&code="+ code,{
        method: 'post',
        headers: {
          'Content-Type': 'application/x-www-form-urlencoded'
        }})
        .then((response) => {
          return response.json();
        })
        .then((data) => {

          // Verify id_token
          tokens = data;
          var idVerified = verifyToken(tokens.id_token);
          Promise.resolve(idVerified).then(function(value) {
            if (value.localeCompare("verified")){
              alert("Invalid ID Token - " + value);
              return;
            }
          });

          // Display tokens
          document.getElementById("id_token").innerHTML = JSON.stringify(parseJWTPayload(tokens.id_token), null, '\t');
          document.getElementById("access_token").innerHTML = JSON.stringify(parseJWTPayload(tokens.access_token), null, '\t');
        });

      // Fetch the user profile from the /userInfo endpoint
      await fetch("https://"+domain+".auth."+region+".amazoncognito.com/oauth2/userInfo",{
        method: 'post',
        headers: {
          'authorization': 'Bearer ' + tokens.access_token
        }})
        .then((response) => {
          return response.json();
        })
        .then((data) => {
          // Display user information
          document.getElementById("userInfo").innerHTML = JSON.stringify(data, null, '\t');
        });
    }
  }
}
main();

Create the verifier.js file

Use the following text to create the verifier.js file.

var key_id;
var keys;
var key_index;

//verify token
async function verifyToken (token) {
  //get Cognito keys
  keys_url = 'https://cognito-idp.'+ region +'.amazonaws.com/' + userPoolId + '/.well-known/jwks.json';
  await fetch(keys_url)
  .then((response) => {
    return response.json();
  })
  .then((data) => {
    keys = data['keys'];
  });

  //Get the kid (key id)
  var tokenHeader = parseJWTHeader(token);
  key_id = tokenHeader.kid;

  //search for the kid key id in the Cognito keys
  const key = keys.find(key => key.kid === key_id)
  if (key === undefined){
    return "Public key not found in Cognito jwks.json";
  }

  //verify JWT signature
  var keyObj = KEYUTIL.getKey(key);
  var isValid = KJUR.jws.JWS.verifyJWT(token, keyObj, {alg: ["RS256"]});
  if (!isValid){
    return("Signature verification failed");
  }

  //verify token has not expired
  var tokenPayload = parseJWTPayload(token);
  if (Date.now() >= tokenPayload.exp * 1000) {
    return("Token expired");
  }

  //verify app_client_id
  var n = tokenPayload.aud.localeCompare(appClientId)
  if (n != 0){
    return("Token was not issued for this audience");
  }
  return("verified");
};

Create an index.html file

Use the following text to create the index.html file.

<!doctype html>

<html lang="en">
<head>
<meta charset="utf-8">

<title>MyApp</title>
<meta name="description" content="My Application">
<meta name="author" content="Your Name">
</head>

<body>
<h2>Cognito User</h2>

<p style="white-space:pre-line;" id="token_status"></p>

<p>Id Token</p>
<p style="white-space:pre-line;" id="id_token"></p>

<p>Access Token</p>
<p style="white-space:pre-line;" id="access_token"></p>

<p>User Profile</p>
<p style="white-space:pre-line;" id="userInfo"></p>
<script language="JavaScript" type="text/javascript"
src="https://kjur.github.io/jsrsasign/jsrsasign-latest-all-min.js">
</script>
<script src="js/verifier.js"></script>
<script src="js/userprofile.js"></script>
</body>
</html>

Upload the files into the Amazon S3 Bucket you created earlier

Upload the files you just created to the Amazon S3 bucket that you created in Step 4. If you’re using Chrome or Firefox browsers, you can choose the folders and files to upload and then drag and drop them into the destination bucket. Dragging and dropping is the only way that you can upload folders.

  1. Sign in to the AWS Management Console and open the Amazon S3 console.
  2. In the Bucket name list, choose the name of the bucket that you created earlier in Step 4.
  3. In a window other than the console window, select the index.html file to upload. Then drag and drop the file into the console window that lists the destination bucket.
  4. In the Upload dialog box, choose Upload.
  5. Choose Create Folder.
  6. Enter the name js and choose Save.
  7. Choose the js folder.
  8. In a window other than the console window, select the userprofile.js and verifier.js files to upload. Then drag and drop the files into the console window js folder.

    Note: The Amazon S3 bucket root will contain the index.html file and a js folder. The js folder will contain the userprofile.js and verifier.js files.

Step 7. Configure the app client settings

Use the Amazon Cognito console to configure the app client settings, including identity providers, OAuth flows, and OAuth scopes.

Configure the app client settings:

  1. Go to the Amazon Cognito console.
  2. Choose Manage your User Pools.
  3. Select your user pool.
  4. Select App integration, and then select App client settings.
  5. Under Enabled Identity Providers, select Cognito User Pool.(Optional) You can add federated identity providers. Adding User Pool Sign-in Through a Third-Party has more information about how to add federation providers.
  6. Enter the Callback URL(s) where the user is to be redirected after successfully signing in. The callback URL is the URL of your web app that will receive the authorization code. In our example, this will be the Domain Name for the CloudFront distribution you created earlier. It will look something like https://DOMAINNAME/index.html where DOMAINNAME is xxxxxxx.cloudfront.net.

    Note: HTTPS is required for the Callback URLs. For this example, I used CloudFront as an HTTPS endpoint for the app in Amazon S3.

  7. Next, select Authorization code grant from the Allowed OAuth Flows and OpenID from Allowed OAuth Scopes. The OpenID scope will return the ID token and grant access to all user attributes that are readable by the client.
  8. Choose Save changes.

Step 8. Show the app home page

Now that the Amazon Cognito user pool is configured and the sample app is built, you can test using Amazon Cognito as an OP from the sample JavaScript app you created in Step 6.

View the app’s home page:

  1. Open a web browser and enter the app’s home page URL using the CloudFront distribution to serve your index.html page created in Step 6 (https://DOMAINNAME/index.html) and the app will redirect the browser to the Amazon Cognito /authorize endpoint.
  2. The /authorize endpoint redirects the browser to the Amazon Cognito hosted UI, where the user can sign in or sign up. The following figure shows the user sign-in page.

    Figure 8: User sign-in page

Step 9. Create a user

You can use the Amazon Cognito user pool to manage your users or you can use a federated identity provider. Users can sign in or sign up from the Amazon Cognito hosted UI or from a federated identity provider. If you configured a federated identity provider, users will see a list of federated providers that they can choose from. When a user chooses a federated identity provider, they are redirected to the federated identity provider sign-in page. After signing in, the browser is directed back to Amazon Cognito. For this post, Amazon Cognito is the only identity provider, so you will use the Amazon Cognito hosted UI to create an Amazon Cognito user.

Create a new user using Amazon Cognito hosted UI:

  1. Create a new user by selecting Sign up and entering a username, password, and email address. Then select the Sign up button. The following figure shows the sign up screen.

    Figure 9: Sign up with a new account

  2. The Amazon Cognito sign up workflow will verify the email address by sending a verification code to that address. The following figure shows the prompt to enter the verification code.

    Figure 10: Enter the verification code

  3. Enter the code from the verification email in the Verification Code text box.
  4. Select Confirm Account.

Step 10. View the Amazon Cognito tokens and profile information

After authentication, the app displays the tokens and user information. The following figure shows the OAuth2 access token and OIDC ID token that are returned from the /token endpoint and the user profile returned from the /userInfo endpoint. Now that the user has been authenticated, the application can use the user’s email address to look up the user’s account information in an application data store. Based on the user’s account information, the application can grant/restrict access to paid content or show account information like order history.

Figure 11: Token and user profile information

Note: Many browsers will cache redirects. If your browser is repeatedly redirecting to the index.html page, clear the browser cache.
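
For example, once the ID token has been parsed (as userprofile.js does with parseJWTPayload), the application could use the email claim as a key into its own data store. The accountStore object below is a hypothetical placeholder for your application's data access layer, not part of the sample app:

// Hypothetical sketch: use the verified ID token's email claim to look up
// application-specific account data such as order history or entitlements.
async function loadAccount(idToken, accountStore) {
  const claims = parseJWTPayload(idToken); // helper defined in userprofile.js
  const account = await accountStore.findByEmail(claims.email);
  // Fall back to a default profile for first-time users
  return account || { email: claims.email, orders: [], paidContent: false };
}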

Summary

In this post, we’ve shown you how easy it is to add user authentication to your web and mobile apps with Amazon Cognito.

We created a Cognito User Pool as our user directory, assigned a domain name to the Amazon Cognito hosted UI, and created an application client for our application. Then we created an Amazon S3 bucket to host our website. Next, we created a CloudFront distribution for our Amazon S3 bucket. Then we created our application and uploaded it to our Amazon S3 website bucket. From there, we configured the client app settings with our identity provider, OAuth flows, and scopes. Then we accessed our application and used the Amazon Cognito sign-in flow to create a username and password. Finally, we logged into our application to see the OAuth and OIDC tokens.

Amazon Cognito saves you time and effort when implementing authentication with an intuitive UI, OAuth2 and OIDC support, and customizable workflows. You can now focus on building features that are important to your core business.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

George Conti

George is a Solutions Architect for the AWS Financial Services team. He is passionate about technology and helping financial services companies build solutions with AWS services.

Role-based access control using Amazon Cognito and an external identity provider

Post Syndicated from Eran Medan original https://aws.amazon.com/blogs/security/role-based-access-control-using-amazon-cognito-and-an-external-identity-provider/

Amazon Cognito simplifies the development process by helping you manage identities for your customer-facing applications. As your application grows, some of your enterprise customers may ask you to integrate with their own Identity Provider (IdP) so that their users can sign in to your app using their company's identity, and have role-based access control (RBAC) based on their company's directory group membership.

For your own workforce identities, you can use AWS Single Sign-On (SSO) to enable single sign-on to your cloud applications or AWS resources.

For your customers who would like to integrate your application with their own IdP, you can use Amazon Cognito user pools’ external identity provider integration.

In this post, you’ll learn how to integrate Amazon Cognito with an external IdP by deploying a demo web application that integrates with an external IdP via SAML 2.0. You will use directory groups (for example, Active Directory or LDAP) for authorization by mapping them to Amazon Cognito user pool groups that your application can read to make access decisions.

Architecture

The demo application is implemented using Amazon Cognito, AWS Amplify, Amazon API Gateway, AWS Lambda, Amazon DynamoDB, Amazon Simple Storage Service (S3), and Amazon CloudFront to achieve a serverless architecture. You will make use of infrastructure-as-code by using AWS CloudFormation and the AWS Cloud Development Kit (CDK) to model and provision your cloud application resources, using familiar programming languages.

The following diagram shows an overview of this architecture and the steps in the login flow, which should help clarify what you are going to deploy.
 

Figure 1: Architecture Diagram

First visit

When a user visits the web application for the first time, the flow is as follows:

  1. The client side of the application (also referred to as the front end) uses the AWS Amplify JavaScript library (Amplify.js) to simplify authentication and authorization. Using Amplify, the application detects that the user is unauthenticated and redirects to Amazon Cognito, which then sends a SAML request to the IdP.
  2. The IdP authenticates the user and sends a SAML response back to Amazon Cognito. The SAML response includes common attributes and a multi-value attribute for group membership.
  3. Amazon Cognito handles the SAML response, and maps the SAML attributes to a just-in-time user profile. The SAML groups attribute is mapped to a custom user pool attribute named custom:groups.
  4. An AWS Lambda function named PreTokenGeneration reads the custom:groups attribute and converts it to a JWT claim named cognito:groups. This associates the user with a group, without creating an actual group in the user pool (a minimal sketch of such a trigger follows this list).

    This attribute conversion is optional and is implemented to demonstrate how you can use the Pre Token Generation Lambda trigger to customize your JWT token claims, mapping the IdP groups to the attributes your application recognizes. You can also use this trigger to make additional authorization decisions. For example, if a user is a member of multiple groups, you may choose to map only one of them.

  5. Amazon Cognito returns the JWT tokens to the front end.
  6. The Amplify client library stores the tokens and handles refreshes.
  7. The front end makes a call to a protected API in Amazon API Gateway.
  8. API Gateway uses an Amazon Cognito user pools authorizer to validate the JWT’s signature and expiration. If this is successful, API Gateway passes the JWT to the application’s Lambda function (also referred to as the backend).
  9. The backend application code reads the cognito:groups claim from the JWT and decides if the action is allowed. If the user is a member of the right group then the action is allowed, otherwise the action is denied.
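
The repository referenced later in this post implements the PreTokenGeneration trigger in TypeScript (index.ts). The following is a minimal JavaScript sketch of the idea from step 4, assuming the IdP groups arrive in the custom:groups attribute as a bracketed, comma-separated string such as "[pet-app-users, pet-app-admins]" (the format shown in the decoded token later in this post):

// Pre Token Generation trigger sketch: copy the IdP-provided custom:groups
// attribute into the cognito:groups claim of the issued tokens.
exports.handler = async (event) => {
  const raw = event.request.userAttributes["custom:groups"] || "[]";
  // "[pet-app-users, pet-app-admins]" -> ["pet-app-users", "pet-app-admins"]
  const groups = raw.replace(/^\[|\]$/g, "")
    .split(",")
    .map(group => group.trim())
    .filter(group => group.length > 0);

  event.response = {
    claimsOverrideDetails: {
      groupOverrideDetails: {
        groupsToOverride: groups,
      },
    },
  };
  return event;
};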

We will go into more detail about these steps after describing a bit more about the implementation details.

For more information about JWT tokens and claims, see Introduction to JSON Web Tokens.

Prerequisites

The following are the prerequisites for the solution described in this post:

Cost estimate

For an account under the 12-month Free Tier period, there should be no cost associated with running this example. However, to avoid any unexpected costs you should terminate the example stack after it’s no longer needed. For more information, see AWS Free Tier and AWS Pricing.

Running the demo application

In this part, you will go over the steps to set up and run the demo application. All the example code in this solution can be found on the amazon-cognito-example-for-external-idp code repository on GitHub.

To deploy the application without an IdP integration

  1. Open a bash-compatible command-line terminal and navigate to a directory of your choice. For Windows users: install Git for Windows and open Git BASH from the start menu.
  2. To get the code from the GitHub repository, enter the following:
    git clone https://github.com/aws-samples/amazon-cognito-example-for-external-idp 
    cd amazon-cognito-example-for-external-idp
    

  3. The template env.sh.template contains configuration settings for the application that you will modify later when you configure the IdP. To copy env.sh.template to env.sh, enter the following:
    cp env.sh.template env.sh

    Figure 2: Cloning the example repository and copying the template configuration file

  4. The install.sh script will install the AWS CDK toolkit with the dependencies and will configure and bootstrap your environment:
    ./install.sh
    

    Figure 3: Installing dependencies

    You may get prompted to agree to sending Angular analytics. You will also get notified if there are package vulnerabilities. If this is the case, run npm audit --fix --prod in all subdirectories to resolve them.

  5. Once the environment has been successfully bootstrapped you need to deploy the CloudFormation stack:
    ./deploy.sh 
    

    Figure 4: Deploying the CloudFormation stack

  6. You will be prompted to accept the IAM changes. These changes will allow the API Gateway service to call the demo application's Lambda function (APIFunction), allow Amazon Cognito to invoke the PreTokenGeneration Lambda function, allow the demo application's Lambda function to access the DynamoDB users table (used to implement global sign-out), and more. You'll need to review these changes according to your current security approval level and confirm them to continue.

    Under Do you wish to deploy these changes (y/n)?, type y and press Enter.

    Figure 5: Reviewing and confirming changes

  7. A few moments after deploying the application’s CloudFormation stack, the terminal displays the IdP settings, which should look like the following:
     
    Figure 6: IdP settings

    Make a note of these values; you will use them later to configure the IdP.

Configure the IdP

Every IdP is different, but there are some common steps you will need to follow. To configure the IdP, do the following:

  1. Provide the IdP with the values for the following two properties, which you made note of in the previous section:
    • Single sign on URL / Assertion Consumer Service URL / ACS URL:
      https://<domainPrefix>.auth.<region>.amazoncognito.com/saml2/idpresponse
      

    • Audience URI / SP Entity ID / Entity ID:
      urn:amazon:cognito:sp:<yourUserPoolID>
      

  2. Configure the field mapping for the SAML response in the IdP. Map the first name, last name, email, and groups (as a multivalue attribute) into SAML response attributes with the names firstName, lastName, email, and groups, respectively.

    Recommended: Filter the mapped groups to only those that are relevant to the application (for example, by a prefix filter). There is a 2,048-character limit on the custom attribute, so filtering avoids exceeding the character limit, and also avoids passing irrelevant information to the application.

  3. In the IdP, create two demo groups called pet-app-users and pet-app-admins, and create two demo users, for example, [email protected] and [email protected], and then assign one to each group, respectively.

See the following specific instructions for some popular IdPs, or see the documentation for your customer’s specific IdP:

Get the IdP SAML metadata URL or file

Get the metadata URL or file from the IdP: you will use this later to configure your Cognito user pool integration with the IdP. For more information, see Integrating Third-Party SAML Identity Providers with Amazon Cognito User Pools.

To update the application with the SAML metadata URL or file

The following steps configure the SAML IdP in the Amazon Cognito user pool using the IdP metadata you obtained above:

  1. Using your favorite text editor, open the env.sh file.
  2. Uncomment the line starting with # export IDENTITY_PROVIDER_NAME (remove the # sign).
  3. Uncomment the line starting with # export IDENTITY_PROVIDER_METADATA.
  4. If you have a metadata URL from the IdP, enter it following the = sign:
    export IDENTITY_PROVIDER_METADATA=REPLACE_WITH_URL
    

    Or, if you downloaded the metadata as a file, enter $(cat path/to/downloaded-metadata.xml):

    export IDENTITY_PROVIDER_METADATA=$(cat REPLACE_WITH_PATH)
    

    Figure 7: Editing the identity provider metadata in the env.sh configuration file

To re-deploy the application

  1. Run ./diff.sh to see the changes to the CloudFormation stack (added metadata URL).
     
    Figure 8: Run ./diff.sh

  2. Run ./deploy.sh to deploy the update.

To launch the UI

There are both an Angular version and a React version of the same UI; both have the same functionality. You can use either version depending on your preference.

  1. Start the front end application with your chosen version of the UI with one of the following:
    • React: cd ui-react && npm start
    • Angular: cd ui-angular && npm start
  2. To simulate a new session, in your web browser, open a new window in private browsing or incognito mode, then for the URL, enter http://localhost:3000. You should see a screen similar to the following:
     
    Figure 9: Private browsing sign-in screen

  3. Choose Single Sign On to be taken to the IdP’s sign-in page, where you will sign in if needed. After you are authenticated by the IdP, you’ll be redirected back to the application.

    If you have multiple IdPs, or if you have both internal and external users that will authenticate directly with the user pool, you can choose the Sign In / Sign Up button instead. This redirects you to the Amazon Cognito hosted UI sign in page, rather than taking you directly to the IdP. For more information, see Using the Amazon Cognito Hosted UI for Sign-Up and Sign-In.

  4. Using a new private browsing session (to clear any state), sign in with the user associated with the group pet-app-users and create some sample entries. Then, sign out. Open another private browsing session, and sign in with the user associated with the pet-app-admins group. Notice that you can see the other user’s entries. Now, create a few entries as an admin, then sign out. Open another new private browsing session, sign in again as the pet-app-users user, and notice that you can’t see the entries created by the admin user.
     
    Figure 10: Example view for a user who is only a member of the pet-app-users group

    Figure 11: Example view for a user who is also a member of the pet-app-admins group

Implementation

Next, review the details of what each part of the demo application does, so that you can modify it and use it as a starting point for your own application.

Infrastructure

Take a look at the code in the cdk.ts file—a sample CDK file that creates the infrastructure. You can find it in the amazon-cognito-example-for-external-idp/cdk/src directory in the cloned GitHub repo. The key resources it creates are the following:

  1. A Cognito user pool (new cognito.UserPool…). This is where just-in-time provisioning creates users who federate in from the IdP. It also creates a custom attribute named groups, which you can see as custom:groups in the console.
     
    Figure 12: Custom attribute named groups

  2. IdP integration which provides the mapping between the attributes in the SAML assertion from the IdP and Amazon Cognito attributes. For more information, see Specifying Identity Provider Attribute Mappings for Your User Pool.
    (new cognito.CfnUserPoolIdentityProvider…).
  3. An authorizer (new apigateway.CfnAuthorizer…). The authorizer is linked to an API resource method (authorizer: {authorizerId: cfnAuthorizer.ref}).

    It ensures that the user must be authenticated and must have a valid JWT token to make API calls to this resource. It uses Lambda proxy integration to intercept requests.

  4. The PreTokenGeneration Lambda trigger, which is used for the mapping between a user’s Active Directory or LDAP groups (passed on the SAML response from the IdP) to user pool groups (const preTokenGeneration = new lambda.Function…). For the PreTokenGeneration Lambda trigger code used in this solution, see the index.ts file on GitHub.

The application

Backend

The example application in this solution uses a serverless backend, but you can modify it to use Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, AWS Elastic Beanstalk, or even an on-premises server as the backend. To configure your API gateway to point to a server-based application, see Set up HTTP Integrations in API Gateway or Set up API Gateway Private Integrations.

Middleware

Take a look at the code in the express.js sample in the app.ts file on GitHub. You’ll notice some statements starting with app.use. These are interceptors that are invoked for all requests.

app.use(eventContext());

app.use(authorizationMiddleware({
  authorizationHeaderName: authorizationHeaderName,
  supportedGroups: [adminsGroupName, usersGroupName],
  forceSignOutHandler: forceSignOutHandler,
  allowedPaths: ["/"],
}));

Some explanation:

  1. eventContext: the example application in this solution uses AWS Serverless Express which allows you to run the Express framework for Node.js directly on AWS Lambda.
  2. authorizationMiddleware is a helper middleware that does the following:
    1. It enriches the express.js request object with several syntactic sugars such as req.groups and req.username (a shortcut to get the respective claims from the JWT token).
    2. It ensures that the currently signed-in user is a member of at least one of the supportedGroups provided. If not, it returns a 403 response. (A minimal sketch of such a middleware follows this list.)
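
The real middleware lives in the repository; the following is a minimal sketch of the idea. It assumes the claims validated by the Amazon Cognito authorizer are available on the API Gateway event that eventContext() attaches to the request, and the exact serialization of the cognito:groups claim can vary, so the parsing shown is illustrative:

// Sketch of an authorization middleware: read the validated Cognito claims,
// attach convenience fields to the request, and reject users who are not in
// any supported group.
function authorizationMiddleware({ supportedGroups, allowedPaths = [] }) {
  return (req, res, next) => {
    if (allowedPaths.includes(req.path)) {
      return next(); // public path, no group check
    }
    // eventContext() exposes the API Gateway event; the Cognito user pools
    // authorizer adds the validated JWT claims to its request context.
    const claims = req.apiGateway.event.requestContext.authorizer.claims;
    req.username = claims["cognito:username"];
    // Note: adjust this parsing to match how cognito:groups is serialized in your event
    req.groups = new Set((claims["cognito:groups"] || "").split(",").filter(g => g));

    if (!supportedGroups.some(group => req.groups.has(group))) {
      return res.status(403).json({ error: "User is not in any supported group" });
    }
    next();
  };
}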

Endpoints

Still in the Express.js app.ts file on GitHub, take a closer look at one of the API’s endpoints (GET /pets).

app.get("/pets", async (req: Request, res: Response) => {

  if (req.groups.has(adminsGroupName)) {
    // if the user has the admin group, we return all pets
    res.json(await storageService.getAllPets());
  } else {
    // else, just owned pets (the middleware ensures that the user has at least one group)
    res.json(await storageService.getAllPetsByOwner(req.username));
  }
});

With the groups claim information, your application can now make authorization decisions based on the user’s role (show all items if they are an admin, otherwise just items they own). Having this logic as part of the application also allows you to unit test your authorization logic, and run it locally, or offline, before deploying it.
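
For example, if the group check is factored into a small pure function, it can be unit tested locally with no AWS resources involved. The isAdmin helper below is hypothetical and not part of the repository:

// Hypothetical pure helper extracted from the endpoint logic above.
function isAdmin(groups, adminsGroupName) {
  return groups.has(adminsGroupName);
}

// Jest-style unit tests that run locally, before anything is deployed.
describe("authorization logic", () => {
  test("admins can see all pets", () => {
    expect(isAdmin(new Set(["pet-app-admins"]), "pet-app-admins")).toBe(true);
  });

  test("regular users only see their own pets", () => {
    expect(isAdmin(new Set(["pet-app-users"]), "pet-app-admins")).toBe(false);
  });
});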

Front end

The front end can be built in your framework of choice. You can start with the sample UIs provided for either React or Angular. In both, the AWS Amplify client library handles the integration with Amazon Cognito and API Gateway for you. For more information about AWS Amplify, see the Amplify Framework page on GitHub.

Note: You can use AWS Amplify to create the infrastructure in a wizard-like way, without writing CloudFormation. In our example, because we used the AWS CDK for the infrastructure, we needed a configuration file to point Amplify to the created infrastructure.

The following are some notable files, and explanations of what they do:

  • generateConfig.ts reads the CloudFormation stack output parameters, and creates a file named autoGenConfig.js, which looks like the following:
    // this file is auto generated, do not edit it directly
    export default {
      cognitoDomain: "youruniquecognitodomain.auth.region.amazoncognito.com",
      region: "region",
      cognitoUserPoolId: "youruserpoolid",
      cognitoUserPoolAppClientId: "yourusepoolclientid",
      apiUrl: "https://yourapigwapiid.execute-api.region.amazonaws.com/prod/",
    };
    

    The file generateConfig.ts is triggered after calling ./deploy.sh, or ./config-ui.sh.

  • APIService.ts: calls the backend API, passing the user’s token. For example, calling the GET/pets API:
    public async getAllPets(): Promise<Pet[]> {
      const authorizationHeader = await this.getAuthorizationHeader();
      return await this.api.get(REST_API_NAME, '/pets', {headers: authorizationHeader});
    }
    

Step-by-step example

Now that you have an understanding of the solution, we will take you through a step-by-step example. You can see how everything works together in sequence, and how the tokens pass between Amazon Cognito, your demo application, and API Gateway.

  1. Create a new browser session by starting a private/incognito session.
  2. Launch the UI by using the Angular example from the To launch the UI section:
    cd ui-angular && npm start
    

  3. Open the developer tools in your browser. In most browsers, you can do this by pressing F12 (in Chrome and FireFox in Windows), or Option+Command+i (Chrome, Firefox, or Safari on a Mac).
  4. In the developer tools panel, navigate to the Network tab, and ensure that it is in recording mode and logs are persisting. For more details for various browsers, see How to View a SAML Response in Your Browser for Troubleshooting.
  5. When the page loads, the following happens behind the scenes in the front end (example code available for either Angular or React):
    1. Using Amplify.js, AWS Amplify checks if the user is currently logged in
      let cognitoUser = await Auth.currentAuthenticatedUser();
      

      Because this is a new browsing session, the user is not logged in, and the Sign In / Sign Up and Single Sign On buttons will appear.

    2. Choose Single Sign On, and AWS Amplify will redirect the browser to the IdP.
      Auth.federatedSignIn(idpName)
      

  6. In the IdP sign-in page, sign in as one of the users created earlier (e.g. [email protected] or [email protected]).
  7. In the Network tab of your browser's developer tools panel, locate the request to Amazon Cognito's /saml2/idpresponse endpoint.
  8. The following is an example using Chrome, but you can do it similarly using other browsers. In the Form Data section, you can see the SAMLResponse field that was sent back from the IdP after you authenticated.
     
    Figure 13: Inspecting the SAML response

  9. Copy the SAMLResponse value (drag to select the area marked in green above, and make sure you don’t include the RelayState field).
  10. At the command line, use the following example to decode the SAMLResponse value. Be sure to replace SAMLResponse by pasting the text copied in the previous step:
    echo "SAMLResponse" | base64 --decode > saml_response.xml 
    

  11. Open the saml_response.xml file, and look at the part that starts with <saml2:Attribute Name="groups". This is the attribute that contains the groups that your user belongs to, according to the IdP. For more ways to inspect and troubleshoot the SAML response, see How to View a SAML Response in Your Browser for Troubleshooting.
  12. Amazon Cognito applies the mapping defined in the CloudFormation stack to these attributes. For example, the IdP SAML response attribute named groups is mapped to the user pool custom attribute named custom:groups.
    • In order to modify the mapping, edit your local copy of the cdk.ts file.
    • In order to view the mapped attribute for a user, do the following:
      1. Sign into the AWS Management Console using the same account you used for the demo setup.
      2. Select Manage User Pools.
      3. Select the pool you created for this demo and choose Users and groups.
      4. Search for the user account you just signed in with, and choose its username.

        As you can see in the following example, the custom:groups claim is set automatically (the custom: prefix is added to all custom attributes automatically):

    Figure 14: Mapped user attributes

  13. The PreTokenGeneration Lambda function then reads the mapped custom:groups attribute value, parses it, and converts it to an array; and then stores it in the cognito:groups claim. In order to customize the mapping, edit the Lambda function’s code in your local copy of the index.ts file and run ./deploy.sh to redeploy your application.
  14. Now that the front end has the JWT token, when the page loads, it will request to load all the items (a call to a protected API, passing the token in the form of an authorization header).
  15. Look at the Network tab again, under the GET request that starts with pets.

    Under Request Headers, look at the authorization header. The long value you see is the encoded token passed as part of the request. The following is an example of how the decoded JWT will look:

    {
      "cognito:groups": [
        "pet-app-users",
        "pet-app-admins"
      ], // <- this is what the PreTokenGeneration lambda added
      
      "cognito:username": "IdP_Alice",
      "custom:groups": "[pet-app-users, pet-app-admins]", //what we got via SAML
      "email": "[email protected]"
      …
    
    }
    

  16. Optionally, if you’d like to modify or add new requests to a new API paths, edit your local copy of the APIService.ts file by using one of the following examples.
    • Sending the request with the authorization header:
      public async getAllPets(): Promise<Pet[]> {
        const authorizationHeader = await this.getAuthorizationHeader();
        return await this.api.get(REST_API_NAME, '/pets', 
          {headers: authorizationHeader});
      }
      

    • The authorization header is obtained using this helper function:
      private async getAuthorizationHeader() {
        const session = await this.auth.currentSession();
        const idToken = session.getIdToken().getJwtToken();
        return {Authorization: idToken}
      }
      

  17. After the previous request is sent to Amazon API Gateway, the Amazon Cognito user pool authorizer validates the JWT token based on the token signature, to ensure that it was not tampered with and that it is still valid. You can see the way the authorizer is set up in the cdk.ts file on GitHub.
  18. Based on which user you signed-in with previously, you’ll either see all items, or only items you own. How does it work? As mentioned earlier, the backend application code reads the groups claim from the validated token and decides if the action is allowed. If the user is a member of a specific group or has a specific attribute, allow; else, deny. The relevant code that makes that decision can be seen in the Express.js example app file in the app.ts file on GitHub.

Customizing the application

The following are some important issues to consider when customizing the app to your needs:

  • If you modify the app client, do not add the aws.cognito.signin.user.admin scope to it. The aws.cognito.signin.user.admin scope grants access to Amazon Cognito User Pool API operations that require access tokens, such as UpdateUserAttributes and VerifyUserAttribute. The demo application makes authorization decisions based on the custom:group attribute populated from the IdP. Because the IdP is the single source of truth for its users, they should not be able to modify any attribute, particularly the custom:groups attribute.
  • We recommend that you do not change the mapped attribute after the stack is deployed. The reason is that the attribute gets persisted in the user profile after it is mapped. For example, if you first map groups to custom:groups, and a user signs in, then later you change the mapping of groups to custom:groups2, the next time the user signs in, their profile will have both attributes: custom:groups (with the last value it was mapped to it) as well as custom:groups2 (with the current value). To avoid having to clear old mapped attributes, we recommend not changing the mapping after it is created.
  • This solution utilizes Amazon Cognito's OAuth 2.0 flows to provide federated sign-in from an external IdP (and optionally also sign-in directly with the user pool via the hosted UI in case you would like to support both use cases). It is not applicable to non-OAuth 2.0 flows (for example, a custom UI using InitiateAuth/SRP).

Conclusion

You can integrate your application with your customer’s IdP of choice for authentication and authorization for your application, without integrating with LDAP, or Active Directory directly. Instead, you can map read-only, need-to-know information from the IdP to the application. By using Amazon Cognito, you can normalize the structure of the JWT token, so that you can add multiple IdPs, social login providers, and even regular username and password-based users (stored in user pools). And you can do all this without changing any application code. Amazon API Gateway’s native integration with Amazon Cognito user pools authorizer streamlines your validation of the JWT integrity, and after it has been validated, you can use it to make authorization decisions in your application’s backend. Using this example, you can focus on what differentiates your application, and let AWS do the undifferentiated heavy lifting of identity management for your customer-facing applications.

For all the code examples described in this post, see the amazon-cognito-example-for-external-idp code repository on GitHub.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Eran Medan

Eran is a Software Development Manager based in Atlanta and leads the AWS Jam team, which uses Amazon Cognito and other services mentioned in this post to run their service. Other than jamming on AWS, Eran likes to jam on his guitar or fly airplanes in virtual reality.

Yuri Duchovny

Yuri is a New York-based Solutions Architect specializing in cloud security, identity, and compliance. He supports cloud transformations at large enterprises, helping them make optimal technology and organizational decisions. Prior to his AWS role, Yuri’s areas of focus included application and networking security, DoS, and fraud protection. Outside of work, he enjoys skiing, sailing, and traveling the world.

Building a serverless document scanner using Amazon Textract and AWS Amplify

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-a-serverless-document-scanner-using-amazon-textract-and-aws-amplify/

This guide demonstrates creating and deploying a production ready document scanning application. It allows users to manage projects, upload images, and generate a PDF from detected text. The sample can be used as a template for building expense tracking applications, handling forms and legal documents, or for digitizing books and notes.

The frontend application is written in Vue.js and uses the Amplify Framework. The backend is built using AWS serverless technologies and consists of an Amazon API Gateway REST API that invokes AWS Lambda functions. Amazon Textract is used to analyze text from uploaded images to an Amazon S3 bucket. Detected text is stored in Amazon DynamoDB.

An architectural diagram of the application.

Prerequisites

You need the following to complete the project:

Deploy the application

The solution consists of two parts, the frontend application and the serverless backend. The Amplify CLI deploys all the Amazon Cognito authentication and hosting resources for the frontend. The backend requires the Amazon Cognito user pool identifier to configure an authorizer on the API. This enables an authorization workflow, as shown in the following image.

A diagram showing how an Amazon Cognito authorization workflow works

First, configure the frontend. Complete the following steps using a terminal running on a computer or by using the AWS Cloud9 IDE. If using AWS Cloud9, create an instance using the default options.

From the terminal:

  1. Install the Amplify CLI by running this command.
    npm install -g @aws-amplify/cli
  2. Configure the Amplify CLI using this command. Follow the guided process to completion.
    amplify configure
  3. Clone the project from GitHub.
    git clone https://github.com/aws-samples/aws-serverless-document-scanner.git
  4. Navigate to the amplify-frontend directory and initialize the project using the Amplify CLI command. Follow the guided process to completion.
    cd aws-serverless-document-scanner/amplify-frontend
    
    amplify init
  5. Deploy all the frontend resources to the AWS Cloud using the Amplify CLI command.
    amplify push
  6. After the resources have finished deploying, make note of the StackName and UserPoolId properties in the amplify-frontend/amplify/backend/amplify-meta.json file. These are required when deploying the serverless backend.

Next, deploy the serverless backend. While it can be deployed using the AWS SAM CLI, you can also deploy from the AWS Management Console:

  1. Navigate to the document-scanner application in the AWS Serverless Application Repository.
  2. In Application settings, name the application and provide the StackName and UserPoolId from the frontend application for the UserPoolID and AmplifyStackName parameters. Provide a unique name for the BucketName parameter.
  3. Choose Deploy.
  4. Once complete, copy the API endpoint so that it can be configured on the frontend application in the next section.

Configure and run the frontend application

  1. Create a file, amplify-frontend/src/api-config.js, in the frontend application with the following content. Include the API endpoint and the unique BucketName from the previous step. The s3_region value must be the same as the Region where your serverless backend is deployed.
    const apiConfig = {
    	"endpoint": "<API ENDPOINT>",
    	"s3_bucket_name": "<BucketName>",
    	"s3_region": "<Bucket Region>"
    };
    
    export default apiConfig;
  2. In a terminal, navigate to the root directory of the frontend application and run it locally for testing.
    cd aws-serverless-document-scanner/amplify-frontend
    
    npm install
    
    npm run serve

    You should see an output like this:

  3. To publish the frontend application to cloud hosting, run the following command.
    amplify publish

    Once complete, a URL to the hosted application is provided.

Using the frontend application

Once the application is running locally or hosted in the cloud, navigating to it presents a user login interface with an option to register. The registration flow requires a code sent to the provided email for verification. Once verified you’re presented with the main application interface.

Once you create a project and choose it from the list, you are presented with an interface for uploading images by page number.

On mobile, it uses the device camera to capture images. On desktop, images are provided by the file system. You can replace an image and the page selector also lets you go back and change an image. The corresponding analyzed text is updated in DynamoDB as well.

Each time you upload an image, the page is incremented. Choosing “Generate PDF” calls the endpoint for the GeneratePDF Lambda function and returns a PDF in base64 format. The download begins automatically.

You can also open the PDF in another window, if viewing a preview in a desktop browser.
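
As an illustration of how a browser app can turn that base64 response into a download (a sketch, not necessarily the exact code in the sample):

// Convert a base64-encoded PDF returned by the GeneratePDF endpoint into a
// Blob and trigger a download in the browser.
function downloadPdf(base64Pdf, fileName = "document.pdf") {
  const bytes = Uint8Array.from(atob(base64Pdf), char => char.charCodeAt(0));
  const blob = new Blob([bytes], { type: "application/pdf" });
  const url = URL.createObjectURL(blob);

  const link = document.createElement("a");
  link.href = url;
  link.download = fileName;
  link.click();

  URL.revokeObjectURL(url); // release the object URL once the download has started
}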

Understanding the serverless backend

An architecture diagram of the serverless backend.

In the GitHub project, the folder serverless-backend/ contains the AWS SAM template file and the Lambda functions. It creates an API Gateway endpoint, six Lambda functions, an S3 bucket, and two DynamoDB tables. The template also defines an Amazon Cognito authorizer for the API using the UserPoolID passed in as a parameter:

Parameters:
  UserPoolID:
    Type: String
    Description: (Required) The user pool ID created by the Amplify frontend.

  AmplifyStackName:
    Type: String
    Description: (Required) The stack name of the Amplify backend deployment. 

  BucketName:
    Type: String
    Default: "ds-userfilebucket"
    Description: (Required) A unique name for the user file bucket. Must be all lowercase.  


Globals:
  Api:
    Cors:
      AllowMethods: "'*'"
      AllowHeaders: "'*'"
      AllowOrigin: "'*'"

Resources:

  DocumentScannerAPI:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
      Auth:
        DefaultAuthorizer: CognitoAuthorizer
        Authorizers:
          CognitoAuthorizer:
            UserPoolArn: !Sub 'arn:aws:cognito-idp:${AWS::Region}:${AWS::AccountId}:userpool/${UserPoolID}'
            Identity:
              Header: Authorization
        AddDefaultAuthorizerToCorsPreflight: False

This only allows authenticated users of the frontend application to make requests with a JWT token containing their user name and email. The backend uses that information to fetch and store data in DynamoDB that corresponds to the user making the request.

The AWS SAM template creates two DynamoDB tables: a Projects table, which tracks project names by user, and a Pages table, which tracks pages by project and user. Each table defines a partition key and range key, which the Lambda functions use to query and sort items. See the documentation to learn more about DynamoDB table key schema.

  ProjectsTable:
    Type: AWS::DynamoDB::Table
    Properties: 
      AttributeDefinitions: 
        - 
          AttributeName: "username"
          AttributeType: "S"
        - 
          AttributeName: "project_name"
          AttributeType: "S"
      KeySchema: 
        - AttributeName: username
          KeyType: HASH
        - AttributeName: project_name
          KeyType: RANGE
      ProvisionedThroughput: 
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"

  PagesTable:
    Type: AWS::DynamoDB::Table
    Properties: 
      AttributeDefinitions: 
        - 
          AttributeName: "project"
          AttributeType: "S"
        - 
          AttributeName: "page"
          AttributeType: "N"
      KeySchema: 
        - AttributeName: project
          KeyType: HASH
        - AttributeName: page
          KeyType: RANGE
      ProvisionedThroughput: 
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"

When an API Gateway endpoint is called, it passes the user credentials in the request context to a Lambda function. This is used by the CreateProject Lambda function, which also receives a project name in the request body, to create an item in the Project Table and associate it with a user.

The endpoint for the FetchProjects Lambda function is called to retrieve the list of projects associated with a user. The DeleteProject Lambda function removes a specific project from the Project table and any associated pages in the Pages table. It also deletes the folder in the S3 bucket containing all images for the project.
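
For illustration, a minimal sketch of a FetchProjects-style handler is shown below. This is not the repository's actual code: the PROJECTS_TABLE_NAME environment variable, the specific claim used for the user name, and the response shape are assumptions, while the Projects table key schema comes from the template shown above.

import json
import os

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
# PROJECTS_TABLE_NAME is a hypothetical environment variable name for this sketch.
table = dynamodb.Table(os.environ['PROJECTS_TABLE_NAME'])

def handler(event, context):
    # The Amazon Cognito authorizer places the verified token claims in the request context.
    claims = event['requestContext']['authorizer']['claims']
    username = claims['cognito:username']

    # Return only the projects that belong to the authenticated user.
    response = table.query(KeyConditionExpression=Key('username').eq(username))

    return {
        'statusCode': 200,
        'body': json.dumps(response['Items'], default=str)
    }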

When a user enters a Project, the API endpoint calls the FetchPageCount Lambda function. This returns the number of pages for a project to update the current page number in the upload selector. The project is retrieved from the path parameters, as defined in the AWS SAM template:

FetchPageCount:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.8
      CodeUri: lambda_functions/fetchPageCount/
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref PagesTable
      Environment:
        Variables:
          PAGES_TABLE_NAME: !Ref PagesTable
      Events:
        GetResource:
          Type: Api
          Properties:
            RestApiId: !Ref DocumentScannerAPI
            Path: /pages/count/{project+}
            Method: get  
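
As an illustration, the FetchPageCount handler might read the greedy path parameter and count items as in the sketch below. This is not the repository's code; the PAGES_TABLE_NAME environment variable comes from the template above, while the exact partition key format is an assumption.

import os

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(os.environ['PAGES_TABLE_NAME'])

def handler(event, context):
    # The greedy {project+} path parameter arrives under the 'project' key.
    project = event['pathParameters']['project']

    # The real function likely combines the caller's identity with the project
    # name to build the partition key; the path parameter is used directly here.
    response = table.query(
        KeyConditionExpression=Key('project').eq(project),
        Select='COUNT'
    )

    return {'statusCode': 200, 'body': str(response['Count'])}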

The template creates an S3 bucket and two AWS IAM managed policies. The policies are applied to the AuthRole and UnauthRole created by Amplify. This allows users to upload images directly to the S3 bucket. To understand how Amplify works with Storage, see the documentation.

The template also sets an S3 event notification on the bucket for all object create events with a “.png” suffix. Whenever the frontend uploads an image to S3, the object create event invokes the ProcessDocument Lambda function.

The function parses the object key to get the project name, user, and page number. Amazon Textract then analyzes the text of the image. The object returned by Amazon Textract contains the detected text and detailed information, such as the positioning of text in the image. Only the raw lines of text are stored in the Pages table.

import os
import json, decimal
import boto3
import urllib.parse
from boto3.dynamodb.conditions import Key, Attr

client = boto3.resource('dynamodb')
textract = boto3.client('textract')

tableName = os.environ.get('PAGES_TABLE_NAME')

def handler(event, context):

  table = client.Table(tableName)

  print(table.table_status)
 
  # The S3 object key encodes the user, project, and page number as path segments.
  key = urllib.parse.unquote(event['Records'][0]['s3']['object']['key'])
  bucket = event['Records'][0]['s3']['bucket']['name']
  project = key.split('/')[3]
  page = key.split('/')[4].split('.')[0]
  user = key.split('/')[2]
  
  response = textract.detect_document_text(
    Document={
        'S3Object': {
            'Bucket': bucket,
            'Name': key
        }
    })
    
  fullText = ""
  
  # Keep only the raw LINE blocks from the Amazon Textract response.
  for item in response["Blocks"]:
    if item["BlockType"] == "LINE":
        fullText = fullText + item["Text"] + '\n'
  
  print(fullText)

  # Store the detected text, keyed by a composite '<user>/<project>' partition
  # key with the page number as the sort key.
  table.put_item(Item= {
    'project': user + '/' + project,
    'page': int(page), 
    'text': fullText
    })

  return

The GeneratePDF Lambda function retrieves the detected text for each page in a project from the Pages table. It combines the text into a PDF and returns it as a base64-encoded string for download. This function can be modified if your document structure differs.
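
The repository may use a different PDF library, but as a rough sketch, combining the page text into a base64-encoded PDF could look like the following. It assumes the fpdf2 package is bundled with the function; the function name and input format are illustrative.

import base64

from fpdf import FPDF  # assumes the fpdf2 package is available to the function

def pages_to_pdf_base64(page_texts):
    pdf = FPDF()
    for text in page_texts:
        pdf.add_page()
        pdf.set_font("Helvetica", size=12)
        pdf.multi_cell(0, 10, text)
    # Return the PDF bytes as a base64 string for the API response body.
    return base64.b64encode(bytes(pdf.output())).decode("utf-8")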

Understanding the frontend

In the GitHub repo, the folder amplify-frontend/src/ contains all the code for the frontend application. In main.js, the Amplify VueJS modules are configured to use the resources defined in aws-exports.js. It also configures the endpoint and S3 bucket of the serverless backend, defined in api-config.js.

In components/DocumentScanner.vue, the API module is imported and the API is defined.

API calls are defined as Vue methods that can be called by various other components and elements of the application.

In components/Project.vue, the frontend uses the Storage module for Amplify to upload images. For more information on how to use S3 in an Amplify project see the documentation.

Conclusion

This blog post shows how to create a multiuser application that can analyze text from images and generate PDF documents. This guide demonstrates how to do so in a secure and scalable way using a serverless approach. The example also shows an event-driven pattern for handling high-volume image processing using S3, Lambda, and Amazon Textract.

The Amplify Framework simplifies the process of implementing authentication, storage, and backend integration. Explore the full solution on GitHub to modify it for your next project or startup idea.

To learn more about AWS serverless and keep up to date on the latest features, subscribe to the YouTube channel.

#ServerlessForEveryone

Building a Pulse Oximetry tracker using AWS Amplify and AWS serverless

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-a-pulse-oximetry-tracker-using-aws-amplify-and-aws-serverless/

This guide demonstrates an example solution for collecting, tracking, and sharing pulse oximetry data for multiple users. It's built using AWS serverless technologies, enabling reliable scalability and security. The frontend application is written in VueJS and uses the Amplify Framework. It takes oxygen saturation measurements as manual input or from a BerryMed pulse oximeter connected to a browser using Web Bluetooth.

The serverless backend that handles user data and shared access management is deployed using the AWS Serverless Application Model (AWS SAM). The backend application consists of an Amazon API Gateway REST API, which invokes AWS Lambda functions. The code is written in Python to handle the business logic of interacting with an Amazon DynamoDB database. Authentication is managed by Amazon Cognito.

A screenshot of the frontend application running in a desktop browser.

A screenshot of the frontend application running in a desktop browser.

Prerequisites

You need the following to complete the project:

Deploy the application

A high-level diagram of the full oxygen monitor application.

A high-level diagram of the full oxygen monitor application.

The solution consists of two parts, the frontend application and the serverless backend. The Amplify CLI deploys all the Amazon Cognito authentication and hosting resources for the frontend. The backend requires the Amazon Cognito user pool identifier to configure an authorizer on the API. This enables an authorization workflow, as shown in the following image.

A diagram showing how an Amazon Cognito authorization workflow works

A diagram showing how an Amazon Cognito authorization workflow works

First, configure the frontend. Complete the following steps using a terminal running on a computer or by using the AWS Cloud9 IDE. If using AWS Cloud9, create an instance using the default options.

From the terminal:

  1. Install the Amplify CLI by running this command.
    npm install -g @aws-amplify/cli
  2. Configure the Amplify CLI using this command. Follow the guided process to completion.
    amplify configure
  3. Clone the project from GitHub.
    git clone https://github.com/aws-samples/aws-serverless-oxygen-monitor-web-bluetooth.git
  4. Navigate to the amplify-frontend directory and initialize the project using the Amplify CLI command. Follow the guided process to completion.
    cd aws-serverless-oxygen-monitor-web-bluetooth/amplify-frontend
    
    amplify init
  5. Deploy all the frontend resources to the AWS Cloud using the Amplify CLI command.
    amplify push
  6. After the resources have finished deploying, make note of the aws_user_pools_id property in the src/aws-exports.js file. This is required when deploying the serverless backend.

Next, deploy the serverless backend. While it can be deployed using the AWS SAM CLI, you can also deploy from the AWS Management Console:

  1. Navigate to the oxygen-monitor-backend application in the AWS Serverless Application Repository.
  2. In Application settings, name the application and provide the aws_user_pools_id from the frontend application for the UserPoolID parameter.
  3. Choose Deploy.
  4. Once complete, copy the API endpoint so that it can be configured on the frontend application in the next step.

Configure and run the frontend application

  1. Create a file, amplify-frontend/src/api-config.js, in the frontend application with the following content. Include the API endpoint from the previous step.
    const apiConfig = {
      "endpoint": "<API ENDPOINT>"
    };
    
    export default apiConfig;
  2. In a terminal, navigate to the root directory of the frontend application and run it locally for testing.
    cd aws-serverless-oxygen-monitor-web-bluetooth/amplify-frontend
    
    npm install
    
    npm run serve

    You should see an output like this:

  3. To publish the frontend application to cloud hosting, run the following command.
    amplify publish

    Once complete, a URL to the hosted application is provided.

Using the frontend application

Once the application is running locally or hosted in the cloud, navigating to it presents a user login interface with an option to register.

The registration flow requires a code sent to the provided email for verification. Once verified you’re presented with the main application interface. A sample value is displayed when the account has no oxygen saturation or pulse rate history.

To connect a BerryMed pulse oximeter to begin reading measurements, turn on the device. Choose the Connect Pulse Oximeter button and then select it from the list. A Chrome browser on a desktop or Android mobile device is required to use the Web Bluetooth feature.

If you do not have a compatible Bluetooth pulse oximeter or access to Web Bluetooth, checking the Enter Manually check box presents direct input boxes.

Once oxygen saturation and pulse rate values are available, choose the cloud upload icon. This publishes the values to the serverless backend, where they are stored in a DynamoDB table. The trend chart then updates to reflect the new data.

Access to your historical data can be shared with another user, for example a healthcare professional. Choose the share icon on the right to open sharing options. From here, you can add or remove access for other users by user name.

To view data shared with you, select the user name from the drop-down and choose the refresh icon.

Understanding the serverless backend

In the GitHub project, the folder serverless-backend/ contains the AWS SAM template file and the Lambda functions. It creates an API Gateway endpoint, six Lambda functions, and two DynamoDB tables. The template also defines an Amazon Cognito authorizer for the API using the UserPoolID passed in as a parameter:

This only allows authenticated users of the frontend application to make requests with a JWT token containing their user name and email. The backend uses that information to fetch and store data in DynamoDB that corresponds to the user making the request.

The first three endpoints handle updating and retrieving oxygen and pulse rate levels. When a user publishes a new measurement, the AddLevels function is invoked, which creates a new item in the DynamoDB Levels table.

The FetchLevels function retrieves the user’s personal history. The FetchSharedUserLevels function checks the Access Table to see if the requesting user has shared access rights.

The remaining endpoints handle access management. When you add a shared user, this invokes the ManageAccess function with a user name and an action, such as share or revoke. If sharing, the item is added to the Access Table that enables the relationship. If revoking, the item is removed from the table.
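
A minimal sketch of such a ManageAccess handler is shown below. It is not the repository's code: the request body shape, the ACCESS_TABLE_NAME environment variable, and the claim used for the user name are assumptions, while the username and shared_user keys match the SharedAccessTable defined later in the template.

import json
import os

import boto3

dynamodb = boto3.resource('dynamodb')
# ACCESS_TABLE_NAME is a hypothetical environment variable name for this sketch.
access_table = dynamodb.Table(os.environ['ACCESS_TABLE_NAME'])

def handler(event, context):
    username = event['requestContext']['authorizer']['claims']['cognito:username']
    body = json.loads(event['body'])      # assumed shape: {"shared_user": "...", "action": "share" or "revoke"}

    key = {'username': username, 'shared_user': body['shared_user']}
    if body['action'] == 'share':
        access_table.put_item(Item=key)   # adding the item enables the sharing relationship
    else:
        access_table.delete_item(Key=key) # removing the item revokes access

    return {'statusCode': 200, 'body': json.dumps({'status': 'ok'})}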

The GetSharedUsers function fetches the list of users who have shared their data with the user making the request. This populates the drop-down of accessible users. FetchUsersWithAccess fetches all users that have access to the data of the user making the request, which populates the list of users in the sharing options.

The DynamoDB tables are created by the AWS SAM template with the partition key and range key defined for each table. These are used by the Lambda functions to query and sort items. See the documentation to learn more about DynamoDB table key schema.

  LevelsTable:
    Type: AWS::DynamoDB::Table
    Properties: 
      AttributeDefinitions: 
        - 
          AttributeName: "username"
          AttributeType: "S"
        - 
          AttributeName: "timestamp"
          AttributeType: "N"
      KeySchema: 
        - AttributeName: username
          KeyType: HASH
        - AttributeName: timestamp
          KeyType: RANGE
      ProvisionedThroughput: 
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"

  SharedAccessTable:
    Type: AWS::DynamoDB::Table
    Properties: 
      AttributeDefinitions: 
        - 
          AttributeName: "username"
          AttributeType: "S"
        - 
          AttributeName: "shared_user"
          AttributeType: "S"
      KeySchema: 
        - AttributeName: username
          KeyType: HASH
        - AttributeName: shared_user
          KeyType: RANGE
      ProvisionedThroughput: 
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"

Understanding the frontend

In the GitHub project, the folder amplify-frontend/src/ contains all the code for the frontend application. In main.js, the Amplify VueJS modules are configured to use the resources defined in aws-exports.js. It also configures the endpoint of the serverless backend, defined in api-config.js.

In the file, components/OxygenMonitor.vue, the API module is imported and the desired API is defined.

API calls are defined as Vue methods that can be called by various other components and elements of the application.

In components/ConnectDevice.vue, the connect method initializes a Web Bluetooth connection to the pulse oximeter. It searches for a Bluetooth service UUID and device name specific to BerryMed pulse oximeters. On a successful connection it creates an event listener on the Bluetooth characteristic that notifies changes on measurements.

The handleData method parses notification events and emits any changes to oxygen saturation or pulse rate.

The OxygenMonitor component defines the ConnectDevice component in its template. It binds handlers on emitted events.

The handlers assign the values to the Vue data object for use throughout the application.

Further explore the project code to see how the Amplify Framework and the serverless backend are used to make a practical application.

Conclusion

Tracking patient vitals remotely has become more relevant than ever. This guide demonstrates a solution for a personal health and telemedicine application. The full solution includes multiuser functionality and a secure and scalable serverless backend. The application uses a browser to interact with a physical device to measure oxygen saturation and pulse rate. It publishes measurements to a database using a serverless API. The historical data can be displayed as a trend chart and can also be shared with other users.

Once you are more familiar with the sample project, you may want to begin developing an application with your team. The Amplify Framework has support for team environments, allowing all your developers to work together seamlessly.

To learn more about AWS serverless and keep up to date on the latest features, subscribe to the YouTube channel.

Building well-architected serverless applications: Controlling serverless API access – part 3

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-controlling-serverless-api-access-part-3/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and explanation of the example application.

Security question SEC1: How do you control access to your serverless API?

This post continues part 2 of this security question. Previously, I cover Amazon Cognito user and identity pools, JSON web tokens (JWT), API keys and usage plans.

Best practice: Scope access based on identity’s metadata

Authenticated users should be separated into logical groups, roles, or tiers. Separation can also be based on custom authentication token attributes included within Security Assertion Markup Language (SAML) or JSON Web Tokens (JWT). Consider using the user’s identity metadata to enable fine-grain control access to resources and actions.

Scoping access based on authentication metadata allows you to provide limited and fine-grained capabilities and access to consumers based on their roles and intent.

Review levels of access, identity metadata, and separate consumers into logical groups/tiers

With JWT or SAML, ensure you have the right level of information available within the token claims to help you develop authorization logic. Use custom private claims along with a unique namespace for non-public information. Private claims share custom information specifically with your application client, and unique namespaces avoid name collisions for custom claims. For more information, see the AWS Partner Network blog post "Understanding JWT Public, Private and Reserved Claims".

With Amazon Cognito, you can use custom attributes or the Pre Token Generation Lambda Trigger feature. This AWS Lambda trigger allows you to customize a JWT token claim before the token is generated.
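
As a minimal sketch, a Pre Token Generation trigger that adds a namespaced custom claim might look like the following. The claim name and value are illustrative only.

def handler(event, context):
    # Add or override claims before Amazon Cognito issues the token.
    event['response']['claimsOverrideDetails'] = {
        'claimsToAddOrOverride': {
            # Use your own unique namespace to avoid collisions with reserved claims.
            'https://example.com/claims/tier': 'premium'
        }
    }
    return event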

To illustrate using Amazon Cognito groups, I use the example from this blog post. The example uses Amplify CLI to create a web application for managing group membership. API Gateway handles authentication using an Amazon Cognito user pool as part of an administrator API. Two Amazon Cognito user pool groups are created using amplify auth update, one for admin, and one for editors.

  1. I navigate to the deployed web application and create two users, an administrator called someadminuser and an editor user called awesomeeditor.
  2. Show Amazon Cognito user creation

    Show Amazon Cognito user creation

  3. I navigate to the Amazon Cognito user pool console, choose Users and groups under General settings, and can see that both users are created.
  4. View Amazon Cognito users created

    View Amazon Cognito users created

  5. I choose the Groups tab and see that there are two user pool groups set up as part of amplify auth update.
  6. I add the someadminuser to the admin group.
  7. View Amazon Cognito user added to group and IAM role

    View Amazon Cognito user added to group and IAM role

  8. There is an AWS Identity and Access Management (IAM) role associated with the administrator group. This IAM role has an associated identity policy that grants permission to access an S3 bucket for some future application functionality.
  9. {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:ListBucket",
                    "s3:DeleteObject"
                ],
                "Resource": [
                    "arn:aws:s3:::mystoragebucket194021-dev/*"
                ],
                "Effect": "Allow"
            }
        ]
    }
    
  10. I log on to the web application using both the someadminuser and awesomeeditor accounts and compare the two JWT accessToken Amazon Cognito has generated.

The someadminuser has a cognito:groups claim within the token showing membership of the user pool group admin.

View JWT with group membership

View JWT with group membership

This token with its group claim can be used in a number of ways to authorize access.

Within this example frontend application, the token is used against an API Gateway resource using an Amazon Cognito authorizer.

An Amazon Cognito authorizer is an alternative to using IAM or Lambda authorizers to control access to your API Gateway method. The client first signs in to the user pool, and receives a token. The client then calls the API method with the token, which is typically in the request's Authorization header. The API call only succeeds if a valid token is supplied. Without the correct token, the client isn't authorized to make the call.

In this example, the Amazon Cognito authorizer authorizes access at the API method. Next, the event payload passed to the Lambda function contains the token. The function reads the token information. If the group membership claim includes admin, it adds the awesomeeditor user to the Amazon Cognito user pool group editors.
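
The application's actual implementation is the Node.js function shown later in this post. As a hedged illustration of the same pattern in Python, a handler could check the group claim and then call the Amazon Cognito AdminAddUserToGroup API; the USER_POOL_ID environment variable is an assumption.

import os

import boto3

cognito_idp = boto3.client('cognito-idp')

def add_user_to_editors(event):
    claims = event['requestContext']['authorizer']['claims']
    groups = claims.get('cognito:groups', '').split(',')

    # Only callers in the admin group may change group membership.
    if 'admin' not in groups:
        return {'statusCode': 403, 'body': 'User does not have permissions to perform administrative tasks'}

    cognito_idp.admin_add_user_to_group(
        UserPoolId=os.environ['USER_POOL_ID'],
        Username='awesomeeditor',
        GroupName='editors'
    )
    return {'statusCode': 200, 'body': 'User added to the editors group'}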

  1. To see how this is configured, I navigate to the API Gateway console and select the AdminQueries API.
  2. I view the /{proxy+}/ANY resource.
  3. I see that the Integration Request is set to LAMBDA_PROXY, which calls the AdminQueries Lambda function.
  4. View API Gateway Lambda proxy path

    View API Gateway Lambda proxy path

  5. I view the Method Request.
  6. View API Gateway Method Request using Amazon Cognito authorization

    View API Gateway Method Request using Amazon Cognito authorization

  7. Authorization is set to an Amazon Cognito user pool authorizer with an OAuth scope of aws.cognito.signin.user.admin. This scope grants access to Amazon Cognito user pool API operations that require access tokens, such as AdminAddUserToGroup.
  8. I navigate to the Authorizers menu item, and can see the configured Amazon Cognito authorizer.
  9. In the Amazon Cognito user pool details, the Token Source is set to Authorization. This is the name of the header sent to the Amazon Cognito user pool for authorization.
  10. View Amazon Cognito authorizer settings

    View Amazon Cognito authorizer settings

  11. I navigate to the AWS Lambda console, select the AdminQueries function which amplify add auth added, and choose the Permissions tab. I select the Execution role and view its Permissions policies.
  12. I see that the function execution role allows write permission to the Amazon Cognito user pool resource. This allows the function to amend the user pool group membership.
  13. View Lambda execution role permissions including Amazon Cognito write

    View Lambda execution role permissions including Amazon Cognito write

  14. I navigate back to the AWS Lambda console, and view the configuration for the AdminQueries function. There is an environment variable set for GROUP=admin.
Lambda function environment variables

Lambda function environment variables

The Lambda function code checks whether the cognito:groups claim in authorizer.claims includes the GROUP environment variable value of admin. If not, the function returns err.statusCode = 403 and an error message. Here is the relevant section of code within the function.

// Only perform tasks if the user is in a specific group
const allowedGroup = process.env.GROUP;
…..
  // Fail if group enforcement is being used
  if (req.apiGateway.event.requestContext.authorizer.claims['cognito:groups']) {
    const groups = req.apiGateway.event.requestContext.authorizer.claims['cognito:groups'].split(',');
    if (!(allowedGroup && groups.indexOf(allowedGroup) > -1)) {
      const err = new Error(`User does not have permissions to perform administrative tasks`);
      err.statusCode = 403;
      next(err);
    }
  } else {
    const err = new Error(`User does not have permissions to perform administrative tasks`);
    err.statusCode = 403;
    next(err);
  }

This example shows using a JWT to perform authorization within a Lambda function.

If the authorization is successful, the function continues and adds the awesomeeditor user to the editors group.

To show this flow in action:

  1. I log on to the web application using the awesomeeditor account, which is not a member of the admin group. I choose the Add to Group button.
  2. Sign in as editor

    Sign in as editor

  3. Using the browser developer tools I see that the API request has failed, returning the 403 error code from the Lambda function.
  4. Shows 403 access denied

    Shows 403 access denied

  5. I log on to the web application using the someadminuser account and choose the Add to Group button.
  6. Sign in as admin

    Sign in as admin

  7. Using the browser developer tools I see that the API request is now successful as the user is a member of the admin group.
  8. API successful call as admin

    API successful call as admin

  9. I navigate back to the Amazon Cognito user pool console, and view Users and groups. The awesomeeditor user is now a member of the editors group.
User now member of editors group

User now member of editors group

The Lambda function has added the awesomeeditor account to the editors group.

Implement authorization logic based on authentication metadata

Another way to separate users for authorization is using Amazon Cognito to define a resource server with custom scopes.

A resource server is a server for access-protected resources. It handles authenticated requests from an app that has an access token. This API can be hosted in Amazon API Gateway or outside of AWS. A scope is a level of access that an app can request to a resource. For example, if you have a resource server for airline flight details, it might define two scopes. One scope for all customers with read access to view the flight details, and one for airline employees with write access to add new flights. When the app makes an API call to request access and passes an access token, the token has one or more embedded scopes.

JWT with scope

JWT with scope

This allows you to provide different access levels to API resources for different application clients based on the custom scopes. It is another mechanism for separating users during authentication.
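
For example, the flight details resource server described above could be defined with the Amazon Cognito CreateResourceServer API. The identifiers below are placeholders, not values from the serverless airline.

import boto3

cognito_idp = boto3.client('cognito-idp')

# Define a resource server with two custom scopes: read for customers, write for employees.
cognito_idp.create_resource_server(
    UserPoolId='<USER POOL ID>',
    Identifier='flights',
    Name='Flight details API',
    Scopes=[
        {'ScopeName': 'read', 'ScopeDescription': 'View flight details'},
        {'ScopeName': 'write', 'ScopeDescription': 'Add new flights'},
    ]
)

An app client can then request the flights/read or flights/write scope, and the issued access token carries those scopes in its scope claim.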

For authorizing based on token claims, use an API Gateway Lambda authorizer.

For more information, see “Using Amazon Cognito User Pool Scopes with Amazon API Gateway”.

With AWS AppSync, use GraphQL resolvers. AWS Amplify can also generate fine-grained authorization logic via GraphQL transformers (directives). You can annotate your GraphQL schema to a specific data type, data field, and specific GraphQL operation you want to allow access. These can include JWT groups or custom claims. For more information, see “GraphQL API Security with AWS AppSync and Amplify”, and the AWS AppSync documentation for Authorization Use Cases, and fine-grained access control.

Improvement plan summary:

  1. Review levels of access, identity metadata, and separate consumers into logical groups/tiers.
  2. Implement authorization logic based on authentication metadata.

Conclusion

Controlling serverless application API access using authentication and authorization mechanisms can help protect against unauthorized access and prevent unnecessary use of resources. In part 1, I cover the different mechanisms for authorization available for API Gateway and AWS AppSync. I explain the different approaches for public or private endpoints and show how to use IAM to control access to internal or private API consumers.

In part 2, I cover using Amplify CLI to add a GraphQL API with an Amazon Cognito user pool handling authentication. I explain how to view JSON Web Token (JWT) claims, and how to use Amazon Cognito identity pools to grant temporary access to AWS services. I also show how to use API keys and API Gateway usage plans for rate limiting and throttling requests.

In this post, I cover separating authenticated users into logical groups. I first show how to use Amazon Cognito user pool groups to separate users with an Amazon Cognito authorizer to control access to an API Gateway method. I also show how JWTs can be passed to a Lambda function to perform authorization within a function. I then explain how to also separate users using custom scopes by defining an Amazon Cognito resource server.

In an upcoming post, I will cover the second security question from the Well-Architected Serverless Lens about managing serverless security boundaries.

Building well-architected serverless applications: Controlling serverless API access – part 2

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-controlling-serverless-api-access-part-2/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and explanation of the example application.

Security question SEC1: How do you control access to your serverless API?

This post continues part 1 of this security question. Previously, I cover the different mechanisms for authentication and authorization available for Amazon API Gateway and AWS AppSync. I explain the different approaches for public or private endpoints and show how to use AWS Identity and Access Management (IAM) to control access to internal or private API consumers.

Required practice: Use appropriate endpoint type and mechanisms to secure access to your API

I continue to show how to implement security mechanisms appropriate for your API endpoint.

Using AWS Amplify CLI to add a GraphQL API

After adding authentication in part 1, I use the AWS Amplify CLI to add a GraphQL AWS AppSync API with the following command:

amplify add api

When prompted, I specify an Amazon Cognito user pool for authorization.

Amplify add Amazon Cognito user pool for authorization

Amplify add Amazon Cognito user pool for authorization

To deploy the AWS AppSync API configuration to the AWS Cloud, I enter:

amplify push

Once the deployment is complete, I view the GraphQL API from within the AWS AppSync console and navigate to Settings. I see the AWS AppSync API uses the authorization configuration added during the part 1 amplify add auth. This uses the Amazon Cognito user pool to store the user sign-up information.

View AWS AppSync authorization settings with Amazon Cognito

View AWS AppSync authorization settings with Amazon Cognito

For a more detailed walkthrough using Amplify CLI to add an AWS AppSync API for the serverless airline, see the build video.

Viewing JWT tokens

When I create a new account from the serverless airline web frontend, Amazon Cognito creates a user within the user pool. It handles the 3-stage sign-up process for new users. This includes account creation, confirmation, and user sign-in.

Serverless airline Amazon Cognito based sign-in process

Serverless airline Amazon Cognito based sign-in process

Once the account is created, I browse to the Amazon Cognito console and choose Manage User Pools. I navigate to Users and groups under General settings and view my user account.

View User Account

View User Account

When I sign in to the serverless airline web app, I authenticate with Amazon Cognito, and the client receives user pool tokens. The client then calls the AWS AppSync API, which authorizes access using the tokens, connects to data sources, and resolves the queries.

Amazon Cognito tokens used by AWS AppSync

Amazon Cognito tokens used by AWS AppSync

During the sign-in process, I can use the browser developer tools to view the three JWT tokens Amazon Cognito generates and returns to the client. These are the accessToken, idToken, and refreshToken.

View tokens with browser developer tools

View tokens with browser developer tools

I copy the .idToken value and use the decoder at https://jwt.io/ to view the contents.

JSON web token decoded

JSON web token decoded

The decoded token contains claims about my identity. Claims are pieces of information asserted about my identity. In this example, these include my Amazon Cognito username, email address, and other sign-up fields specified in the user pool. The client can use this identity information inside the application.
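
If you prefer not to paste tokens into an external site, a short script can decode the payload locally. This only inspects the claims and does not verify the token signature.

import base64
import json

def decode_jwt_payload(token):
    # The payload is the second base64url-encoded segment of the JWT.
    payload = token.split('.')[1]
    payload += '=' * (-len(payload) % 4)  # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Example: decode_jwt_payload(id_token)['email']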

The ID token expires one hour after I authenticate. The client uses the Amazon Cognito issued refreshToken to retrieve new ID and access tokens. By default, the refresh token expires after 30 days, but can be set to any value between 1 and 3650 days. When using the mobile SDKs for iOS and Android, retrieving new ID and access tokens is done automatically with a valid refresh token.

For more information, see “Using Tokens with User Pools”.

Accessing AWS services

An Amazon Cognito user pool is a managed user directory to provide access for a user to an application. Amazon Cognito has a feature called identity pools (federated identities), which allow you to create unique identities for your users. These can be from user pools, or other external identity providers.

These unique identities are used to get temporary AWS credentials to directly access other AWS services, or external services via API Gateway. The Amplify client libraries automatically rotate and refresh these temporary credentials as they expire.

Identity pools have identities that are either authenticated or unauthenticated. Unauthenticated identities typically belong to guest users. Authenticated identities belong to authenticated users who have received a token by a login provider, such as a user pool. The Amazon Cognito issued user pool tokens are exchanged for AWS access credentials from an identity pool.

JWT-tokens-from-Amazon-Cognito-user-pool-exchanged-for-AWS-credentials-from-Amazon-Cognito-identity-pool

JWT-tokens-from-Amazon-Cognito-user-pool-exchanged-for-AWS-credentials-from-Amazon-Cognito-identity-pool
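
The Amplify libraries perform this exchange for you. For illustration, the equivalent calls with boto3 look roughly like the following; the Region, pool identifiers, and token are placeholders.

import boto3

cognito_identity = boto3.client('cognito-identity', region_name='<region>')

# The login provider key combines the user pool Region and ID; the value is the user pool ID token (JWT).
logins = {'cognito-idp.<region>.amazonaws.com/<user-pool-id>': '<idToken>'}

identity = cognito_identity.get_id(
    IdentityPoolId='<identity-pool-id>',
    Logins=logins
)

credentials = cognito_identity.get_credentials_for_identity(
    IdentityId=identity['IdentityId'],
    Logins=logins
)['Credentials']

# credentials contains AccessKeyId, SecretKey, SessionToken, and Expiration.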

API keys

For public content and unauthenticated access, both Amazon API Gateway and AWS AppSync provide API keys that can be used to track usage. API keys should not be used as a primary authorization method for production applications. Instead, use these for rate limiting and throttling. Unauthenticated APIs require stricter throttling than authenticated APIs.

API Gateway usage plans specify who can access API stages and methods, and also how much and how fast they can access them. API keys are then associated with the usage plans to identify API clients and meter access for each key. Throttling and quota limits are enforced on individual keys.

Throttling limits determine how many requests per second are allowed for a usage plan. This is useful to prevent a client from overwhelming a downstream resource. There are two API Gateway values to control this, the throttle rate and throttle burst, which use the token bucket algorithm. The algorithm is based on an analogy of filling and emptying a bucket of tokens representing the number of available requests that can be processed. The bucket in the algorithm has a fixed size based on the throttle burst and is filled at the token rate. Each API request removes a token from the bucket. The throttle rate then determines how many requests are allowed per second. The throttle burst determines how many concurrent requests are allowed and is shared across all APIs per Region in an account.

Token bucket algorithm

Token bucket algorithm
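
To make the algorithm concrete, here is a small, illustrative simulation. It is not API Gateway's implementation, only a sketch of the refill-and-consume behavior described above.

import time

class TokenBucket:
    """Refill at `rate` tokens per second, capped at `burst` tokens."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill based on elapsed time, never exceeding the burst capacity.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1   # each request consumes one token
            return True
        return False           # no tokens left: the request is throttled

bucket = TokenBucket(rate=1, burst=2)
print([bucket.allow() for _ in range(4)])  # the burst allows two requests, then requests are throttled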

Quota limits allow you to set a maximum number of requests for an API key within a fixed time period. When billing for usage, this also allows you to enforce a limit when a client pays by monthly volume.

API keys are passed using the x-api-key header. API Gateway rejects requests without them.

For example, within the serverless airline, the loyalty service uses an AWS Lambda function to fetch loyalty points and next tier progress via an API Gateway REST API /loyalty/{customerId}/get resource.

I can use this API to simulate the effect of usage plans with API keys.

  1. I navigate to the airline-loyalty API /loyalty/{customerId}/get resource in API Gateway console.
  2. I change the API Key Required value to be true.
  3. Setting API Key Required on API Gateway method

    Setting API Key Required on API Gateway method

  4. I choose Deploy API from the Actions menu.
  5. I create a usage plan in the Usage Plans section of the API Gateway Console.
  6. I choose Create and enter a name for the usage plan.
  7. I select Enable throttling and set the rate to one request per second and the burst to two requests. These are artificially low numbers to simulate the effect.
  8. I select Enable quota and set the limit to 10 requests per day.
  9. Create API Gateway usage plan

    Create API Gateway usage plan

  10. I click Next.
  11. I associate an API Stage by choosing Add API Stage, and selecting the airline Loyalty API and Prod Stage.
  12. Associate usage plan to API Gateway stage

    Associate usage plan to API Gateway stage

  13. I click Next, and choose Create API Key and add to Usage Plan.
  14. Create API key and add to usage plan.

    Create API key and add to usage plan.

  15. I name the API Key and ensure it is set to Auto Generate.
  16. Name API Key

  17. I choose Save then Done to associate the API key with the usage plan.
API key associated with usage plan

API key associated with usage plan

I test the API authentication, in addition to the throttles and limits using Postman.

I issue a GET request against the API Gateway URL using a customerId from the airline Airline-LoyaltyData Amazon DynamoDB table. I don’t specify any authorization or API key.

Postman unauthenticated GET request

Postman unauthenticated GET request

I receive a Missing Authentication Token reply, which I expect as the API uses IAM authentication and I haven’t authenticated.

I then configure authentication details within the Authorization tab, using an AWS Signature. I enter my AWS user account’s AccessKey and SecretKey, which has an associated IAM identity policy to access the API.

Postman authenticated GET request without access key

Postman authenticated GET request without access key

I receive a Forbidden reply. I have successfully authenticated, but the API Gateway method rejects the request as it requires an API key, which I have not provided.

I retrieve and copy my previously created API key from the API Gateway console API Keys section, and display it by choosing Show.

Retrieve API key.

Retrieve API key.

I then configure an x-api-key header in the Postman Headers section and paste the API key value.

Having authenticated and specifying the required API key, I receive a response from the API with the loyalty points value.

Postman successful authenticated GET request with access key

Postman successful authenticated GET request with access key
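
The same request can be scripted outside Postman. The following is a rough Python equivalent that signs the request with SigV4 and adds the x-api-key header; the URL, Region, customerId, and key are placeholders, and the requests package is assumed to be installed.

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

region = '<region>'
url = 'https://<api-id>.execute-api.<region>.amazonaws.com/Prod/loyalty/<customerId>/get'

# Sign the request with the caller's IAM credentials (SigV4), then add the API key header.
credentials = boto3.Session().get_credentials().get_frozen_credentials()
request = AWSRequest(method='GET', url=url, headers={'x-api-key': '<API KEY>'})
SigV4Auth(credentials, 'execute-api', region).add_auth(request)

response = requests.get(url, headers=dict(request.headers))
print(response.status_code, response.text)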

I then call the API with a number of quick successive requests.

When I exceed the throttle rate limit of one request per second, and the throttle burst limit of two requests, I receive:

{"message": "Too Many Requests"}

When I then exceed the quota of 10 requests per day, I receive:

{"message": "Limit Exceeded"}

I view the API key usage within the API Gateway console Usage Plan section.

I select the usage plan, choose the API Keys section, then choose Usage. I see how many requests I have made.

View API key usage

View API key usage

If necessary, I can also grant a temporary rate extension for this key.

For more information on using API Keys for unauthenticated access for AWS AppSync, see the documentation.

API Gateway also has support for AWS Web Application Firewall (AWS WAF), which helps protect web applications and APIs from attacks. It is another mechanism to apply rate-based rules that prevent public API consumers from exceeding a configurable request threshold. AWS WAF rules are evaluated before other access control features, such as resource policies, IAM policies, Lambda authorizers, and Amazon Cognito authorizers. For more information, see "Using AWS WAF with Amazon API Gateway".

AWS AppSync APIs have built-in DDoS protection to protect all GraphQL API endpoints from attacks.

Improvement plan summary:

  1. Determine your API consumer and choose an API endpoint type.
  2. Implement security mechanisms appropriate to your API endpoint.

Conclusion

Controlling serverless application API access using authentication and authorization mechanisms can help protect against unauthorized access and prevent unnecessary use of resources.

In this post, I cover using Amplify CLI to add a GraphQL API with an Amazon Cognito user pool handling authentication. I explain how to view JSON Web Token (JWT) claims, and how to use identity pools to grant temporary access to AWS services. I also show how to use API keys and API Gateway usage plans for rate limiting and throttling requests.

This well-architected question will be continued where I look at segregating authenticated users into logical groups. I will first show how to use Amazon Cognito user pool groups to separate users with an Amazon Cognito authorizer to control access to an API Gateway method. I will also show how to pass JWTs to a Lambda function to perform authorization within a function. I will then explain how to also segregate users using custom scopes by defining an Amazon Cognito resource server.

Building a serverless tokenization solution to mask sensitive data

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-serverless-tokenization-solution-to-mask-sensitive-data/

This post is courtesy of Anuj Gupta, Senior Solutions Architect, and Steven David, Senior Solutions Architect.

Customers tell us that security and compliance are top priorities regardless of industry or location. Government and industry regulations are regularly updated and companies must move quickly to remain compliant. Organizations must balance the need to generate value from data and to ensure data privacy. There are many situations where it is prudent to obfuscate data to reduce the risk of exposure, while also improving the ability to innovate.

This blog discusses data obfuscation and how it can be used to reduce the risk of unauthorized access. It can also simplify PCI DSS compliance by reducing the number of components for which this compliance may apply.

Comparing tokenization and encryption

There is a difference between encryption and tokenization. Encryption is the process of using an algorithm to transform plaintext into ciphertext. An algorithm and an encryption key are required to decrypt the original plaintext.

Tokenization is the process of transforming a piece of data into a random string of characters called a token. It does not have direct meaningful value in relation to the original data. Tokens serve as a reference to the original data, but cannot be used to derive that data.

Unlike encryption, tokenization does not use a mathematical process to transform the sensitive information into the token. Instead, tokenization uses a database, often called a token vault, which stores the relationship between the sensitive value and the token. The real data in the vault is then secured, often via encryption. The token value can be used in various applications as a substitute for the original data.

For example, when processing a recurring credit card payment, the token is submitted to the vault, where it is used as an index to fetch the original data for the authorization process. Tokens are also increasingly used to secure other types of sensitive or personally identifiable information, such as Social Security numbers (SSNs), telephone numbers, and email addresses.

Overview

In this blog, we show how to design a secure, reliable, scalable, and cost-optimized tokenization solution. It can be integrated with applications to generate tokens, store ciphertext in an encrypted token vault, and exchange tokens for the original text.

In an example use-case, a data analyst needs access to a customer database. The database includes the customer's name, SSN, credit card, order history, and preferences. Some of the customer information qualifies as sensitive data. To enforce the required information security policy, you must enforce methods such as column-level access, role-based access control, column-level encryption, and protection from unauthorized access.

Providing access to the customer database increases the complexity of managing fine-grained access policies. Tokenization replaces the sensitive data with random unique tokens, which are stored in an application database. This lowers the complexity and the cost of managing access, while helping with data protection.

Walkthrough

This serverless application uses Amazon API Gateway, AWS Lambda, Amazon Cognito, Amazon DynamoDB, and AWS Key Management Service (AWS KMS).

Serverless architecture diagram

The client authenticates with Amazon Cognito and receives an authorization token. This token is used to validate calls to the Customer Order Lambda function. The function calls the tokenization layer, providing sensitive information in the request. This layer includes the logic to generate unique random tokens and store encrypted text in a cipher database.

Lambda calls KMS to obtain an encryption key. It then uses the DynamoDB client-side encryption library to encrypt the original text and store the ciphertext in the cipher database. The Lambda function retrieves the generated token in the response from the tokenization layer. This token is then stored in the application database for future reference.

AWS KMS makes it easy to create and manage cryptographic keys. It provides logs of all key usage to help you meet regulatory and compliance needs.

One of the most important decisions when using the DynamoDB Encryption Client is selecting a cryptographic materials provider (CMP). The CMP determines how encryption and signing keys are generated, whether new key materials are generated for each item or are reused. It also sets the encryption and signing algorithms that are used. To identify a CMP for your workload, refer to this documentation.

The current solution selects the Direct KMS Provider as the CMP. This cryptographic materials provider returns a unique encryption key and signing key for every table item. To do this, it calls KMS every time you encrypt or decrypt an item.

The KMS process

  • To generate encryption materials, the Direct KMS Provider asks AWS KMS to generate a unique data key for each item using a customer master key (CMK) that you specify. It derives encryption and signing keys for the item from the plaintext copy of the data key, and then returns the encryption and signing keys, along with the encrypted data key, which is stored in the material description attribute of the item.
  • The item encryptor uses the encryption and signing keys and removes them from memory as soon as possible. Only the encrypted copy of the data key from which they were derived is saved in the encrypted item.
  • To generate decryption materials, the Direct KMS Provider asks AWS KMS to decrypt the encrypted data key. Then, it derives verification and signing keys from the plaintext data key, and returns them to the item encryptor.

The item encryptor verifies the item and, if verification succeeds, decrypts the encrypted values. Finally, it removes the keys from memory as soon as possible.

For enhanced security, the example creates the Lambda function inside a VPC with a security group attached to allow incoming HTTPS traffic from only private IPs. The Lambda function connects to DynamoDB and KMS via VPC endpoints instead of going through the public internet. It connects to DynamoDB using a service gateway endpoint and to KMS using an interface endpoint providing a highly available and secure connection.

Additionally, VPC endpoint policies can be used to allow only permitted operations for KMS and DynamoDB over this connection. To further control the management of encryption keys, the KMS master key has a resource-based policy. It allows the Lambda layer to generate data keys for encryption and decryption, and restricts any administrative activity on the master key.

To deploy this solution, follow the instructions in the aws-serverless-tokenization GitHub repo. The AWS Serverless Application Model (AWS SAM) template allows you to quickly deploy this solution into your AWS account.

Understanding the code

The solution uses the tokenizer package, deployed as a Lambda layer. It uses Python UUID4 to generate random values. You can optionally update the logic in hash_gen.py to use your own tokenization technique. For example, you could generate tokens with the same length as the original text, preserving the format in the generated token.
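
At its simplest, the token generation in hash_gen.py can be sketched as returning a random UUID, which has no mathematical relationship to the original value.

import uuid

def generate_token():
    # An opaque, random token; the mapping to the original value lives only in the token vault.
    return str(uuid.uuid4())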

The ddb_encrypt_item.py file contains the logic for encrypting DynamoDB items and uses a DynamoDB client-side encryption library. To learn more about how this library works, refer to this documentation.

There are three methods used in the application logic:

  • Encrypt_item encrypts the plaintext using the KMS customer managed key. Using AttributeActions, you can specify attributes that you don't want to encrypt. For example, you might exclude keys in the JSON input from being encrypted. It also requires a partition key to index the encrypted text in the DynamoDB table. The hash key is used as the name of the partition key in the DynamoDB table. The value of this partition key is the UUID token generated in the previous step.
import boto3
from dynamodb_encryption_sdk.encrypted.table import EncryptedTable
from dynamodb_encryption_sdk.identifiers import CryptoAction
from dynamodb_encryption_sdk.material_providers.aws_kms import AwsKmsCryptographicMaterialsProvider
from dynamodb_encryption_sdk.structures import AttributeActions

def encrypt_item(plaintext_item, table_name):
    table = boto3.resource('dynamodb').Table(table_name)

    # The Direct KMS Provider requests a unique data key from the CMK for every item.
    # aws_cmk_id is configured elsewhere in the layer.
    aws_kms_cmp = AwsKmsCryptographicMaterialsProvider(key_id=aws_cmk_id)

    # Encrypt and sign every attribute except the partition key (Account_Id).
    actions = AttributeActions(
        default_action=CryptoAction.ENCRYPT_AND_SIGN,
        attribute_actions={'Account_Id': CryptoAction.DO_NOTHING}
    )

    # EncryptedTable transparently encrypts items before they are written.
    encrypted_table = EncryptedTable(
        table=table,
        materials_provider=aws_kms_cmp,
        attribute_actions=actions
    )
    response = encrypted_table.put_item(Item=plaintext_item)
    return response
  • Get_decrypted_item gets the plaintext for a given partition key value (for example, the UUID token), using the KMS customer managed key. A sketch of this method follows the list.
  • Get_Item gets the obfuscated text, that is, the ciphertext stored in the DynamoDB table for the provided partition key.
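
Reusing the imports, aws_cmk_id, and attribute actions from the encrypt_item snippet above, a get_decrypted_item sketch might look like this. It assumes the partition key attribute is Account_Id, the attribute excluded from encryption above, and is not the repository's exact implementation.

def get_decrypted_item(token, table_name):
    table = boto3.resource('dynamodb').Table(table_name)
    aws_kms_cmp = AwsKmsCryptographicMaterialsProvider(key_id=aws_cmk_id)
    actions = AttributeActions(
        default_action=CryptoAction.ENCRYPT_AND_SIGN,
        attribute_actions={'Account_Id': CryptoAction.DO_NOTHING}
    )
    encrypted_table = EncryptedTable(
        table=table,
        materials_provider=aws_kms_cmp,
        attribute_actions=actions
    )
    # EncryptedTable verifies the item signature and decrypts attributes on read.
    return encrypted_table.get_item(Key={'Account_Id': token})['Item']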

The dynamodb-encryption-sdk requires cryptography libraries as a dependency. Both of these libraries are platform-dependent and must be installed for a specific operating system. Since Lambda functions use Amazon Linux, you must install these libraries for Amazon Linux even if you are developing application code on a different operating system. To do this, use the get_AMI_packages_cryptography.sh script to download the Docker image, install dependencies within the image, and export files to be used by the Lambda layer.

If you are processing DynamoDB items at a high frequency and large scale, you might exceed the AWS KMS requests-per-second limit, causing processing delays. You can use tools such as JMeter to test the required throughput based on the expected traffic for this serverless application. If you need to exceed a quota, you can request a quota increase in Service Quotas. Use the Service Quotas console or the RequestServiceQuotaIncrease operation. For details, see Requesting a quota increase in the Service Quotas User Guide. If Service Quotas for AWS KMS are not available in the AWS Region, create a case in the AWS Support Center.

After following this walkthrough, to avoid incurring future charges, delete the resources following step 7 of the README file.

Conclusion

This post shows how to use AWS Serverless services to design a secure, reliable, and cost-optimized tokenization solution. It can be integrated with applications to protect sensitive information and manage access using strict controls with less operational overhead.

Building well-architected serverless applications: Controlling serverless API access – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-controlling-serverless-api-access-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and explanation of the example application.

Security question SEC1: How do you control access to your serverless API?

Use authentication and authorization mechanisms to prevent unauthorized access, and enforce quota for public resources. By controlling access to your API, you can help protect against unauthorized access and prevent unnecessary use of resources.

AWS has a number of services to provide API endpoints including Amazon API Gateway and AWS AppSync.

Use Amazon API Gateway for RESTful and WebSocket APIs. Here is an example serverless web application architecture using API Gateway.

Example serverless application architecture using API Gateway

Example serverless application architecture using API Gateway

Use AWS AppSync for managed GraphQL APIs.

AWS AppSync overview diagram

AWS AppSync overview diagram

The serverless airline example in this series uses AWS AppSync to provide the frontend, user-facing public API. The application also uses API Gateway to provide backend, internal, private REST APIs for the loyalty and payment services.

Good practice: Use an authentication and an authorization mechanism

Authentication and authorization are mechanisms for controlling and managing access to a resource. In this well-architected question, that is a serverless API. Authentication is verifying who a client or user is. Authorization is deciding whether they have the permission to access a resource. By enforcing authorization, you can prevent unauthorized access to your workload from non-authenticated users.

Integrate with an identity provider that can validate your API consumer’s identity. An identity provider is a system that provides user authentication as a service. The identity provider may use the XML-based Security Assertion Markup Language (SAML), or JSON Web Tokens (JWT) for authentication. It may also federate with other identity management systems. JWT is an open standard that defines a way for securely transmitting information between parties as a JSON object. JWT uses frameworks such as OAuth 2.0 for authorization and OpenID Connect (OIDC), which builds on OAuth2, and adds authentication.

Only authorize access to consumers that have successfully authenticated. Use an identity provider rather than API keys as a primary authorization method. API keys are more suited to rate limiting and throttling.

Evaluate authorization mechanisms

Use AWS Identity and Access Management (IAM) to authorize access for internal or private API consumers, or for other AWS services such as AWS Lambda.

For public, user-facing web applications, API Gateway supports authorizers that validate JSON Web Tokens (JWTs) issued by either Amazon Cognito or an OpenID Connect (OIDC) identity provider.

App client authenticates and gets tokens

For custom authorization needs, you can use Lambda authorizers.

A Lambda authorizer (previously called a custom authorizer) is an AWS Lambda function that API Gateway calls for an authorization check when a client makes a request to an API method. This means you do not have to write custom authorization logic in a function behind an API. The Lambda authorizer function can validate a bearer token, such as a JWT, OAuth, or SAML token, or request parameters, and grant access accordingly. Use Lambda authorizers when you use an identity provider other than Amazon Cognito or AWS IAM, or when you require additional authorization customization.
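
As a minimal sketch of the idea (not the airline application's code), a TOKEN Lambda authorizer in Python receives the caller's token and the method ARN, and returns a principal ID plus an IAM policy that allows or denies execute-api:Invoke. The validation logic here is a deliberately trivial placeholder.

def validate_token(token):
    # Placeholder validation: accept a hypothetical shared bearer token.
    # Replace with real JWT, OAuth, or SAML validation.
    return ("user|example", token == "allow-me")

def lambda_handler(event, context):
    # For TOKEN authorizers, API Gateway passes the token and the method ARN
    token = event.get("authorizationToken", "")
    principal_id, allowed = validate_token(token)

    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": "Allow" if allowed else "Deny",
                    "Resource": event["methodArn"],
                }
            ],
        },
        # Optional key-value pairs made available to the backend integration
        "context": {"scope": "loyalty:read"},
    }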

Lambda authorizers

For more information, see the AWS Hero blog post, “The Complete Guide to Custom Authorizers with AWS Lambda and API Gateway”.

The AWS documentation also has a useful section on “Understanding Lambda Authorizers Auth Workflow with Amazon API Gateway”.

Enforce authorization for non-public resources within your API

Within API Gateway, you can enable native authorization for users authenticated using Amazon Cognito or AWS IAM. For authorizing users authenticated by other identity providers, use Lambda authorizers.

For example, within the serverless airline, the loyalty service uses a Lambda function to fetch loyalty points and next tier progress. AWS AppSync acts as the client, using an HTTP resolver to invoke the function via the API Gateway REST API /loyalty/{customerId}/get resource.

To ensure only AWS AppSync is authorized to invoke the API, IAM authorization is set within the API Gateway method request.

Viewing API Gateway IAM authorization

The serverless airline uses the AWS Serverless Application Model (AWS SAM) to deploy the backend infrastructure as code. This makes it easier to know which IAM role has access to the API. One of the benefits of using infrastructure as code is visibility into all deployed application resources, including IAM roles.

The loyalty service AWS SAM template contains the AppsyncLoyaltyRestApiIamRole.

  AppsyncLoyaltyRestApiIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: appsync.amazonaws.com
            Action: sts:AssumeRole
      Path: /
      Policies:
        - PolicyName: LoyaltyApiInvoke
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - execute-api:Invoke
                # arn:aws:execute-api:region:account-id:api-id/stage/METHOD_HTTP_VERB/Resource-path
                Resource: !Sub arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${LoyaltyApi}/*/*/*

The IAM role specifies that appsync.amazonaws.com can perform an execute-api:Invoke on the specific API Gateway resource arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${LoyaltyApi}/*/*/*.
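
To illustrate what IAM authorization means for the caller, here is a minimal Python sketch (not part of the airline code) of signing a request to the API with Signature Version 4 using botocore and the third-party requests library. The invoke URL and Region are placeholders; the credentials would come from an IAM role allowed to perform execute-api:Invoke, such as the role above.

import boto3
import requests  # third-party HTTP client
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Hypothetical invoke URL for the private loyalty REST API
url = "https://abc123.execute-api.eu-west-1.amazonaws.com/Prod/loyalty/customer123/get"
region = "eu-west-1"

# Credentials are resolved from the environment (for example, an assumed IAM role)
credentials = boto3.Session().get_credentials()

# Sign the request for the execute-api service with Signature Version 4
request = AWSRequest(method="GET", url=url)
SigV4Auth(credentials, "execute-api", region).add_auth(request)

# API Gateway evaluates the signature against the attached IAM policies
response = requests.get(url, headers=dict(request.headers))
print(response.status_code, response.text)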

Within AWS AppSync, you can enable native authorization for users authenticating using Amazon Cognito or AWS IAM. You can also use any external identity provider compliant with OpenID Connect (OIDC).

Improvement plan summary:

  1. Evaluate authorization mechanisms.
  2. Enforce authorization for non-public resources within your API.

Required practice: Use appropriate endpoint type and mechanisms to secure access to your API

APIs may have public or private endpoints. Consider public endpoints to serve consumers where they may not be part of your network perimeter. Consider private endpoints to serve consumers within your network perimeter where you may not want to expose the API publicly. Public and private endpoints may have different levels of security.

Determine your API consumer and choose an API endpoint type

For providing public content, use Amazon API Gateway or AWS AppSync public endpoints.

For providing content with restricted access, use Amazon API Gateway with authorization to specific resources, methods, and actions you want to restrict. For example, the serverless airline application uses AWS IAM to restrict access to the private loyalty API so only AWS AppSync can call it.

With AWS AppSync providing a GraphQL API, restrict access to specific data types, data fields, queries, mutations, or subscriptions.

You can create API Gateway private REST APIs that you can only access from your Amazon Virtual Private Cloud (VPC) by using an interface VPC endpoint.

API Gateway private endpoints

For more information, see “Choose an endpoint type to set up for an API Gateway API”.

Implement security mechanisms appropriate to your API endpoint

With Amazon API Gateway and AWS AppSync, for both public and private endpoints, there are a number of mechanisms for access control.

For providing content with restricted access, API Gateway REST APIs support native authorization using AWS IAM, Amazon Cognito user pools, and Lambda authorizers. Amazon Cognito user pools provide a managed user directory for authentication. For more detailed information, see the AWS Hero blog post, “Picking the correct authorization mechanism in Amazon API Gateway“.
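
As a minimal sketch (with placeholder values, not the airline application's configuration), a client could authenticate against a Cognito user pool with boto3 and pass the returned ID token to an API protected by a Cognito user pool authorizer. This assumes the app client has the USER_PASSWORD_AUTH flow enabled and that the authorizer reads the default Authorization header.

import boto3
import requests  # third-party HTTP client

# Hypothetical app client ID and API invoke URL
CLIENT_ID = "example-app-client-id"
API_URL = "https://abc123.execute-api.eu-west-1.amazonaws.com/prod/loyalty"

cognito = boto3.client("cognito-idp", region_name="eu-west-1")

# Authenticate the user and retrieve tokens from the user pool
auth = cognito.initiate_auth(
    ClientId=CLIENT_ID,
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "test-user", "PASSWORD": "example-password"},
)
id_token = auth["AuthenticationResult"]["IdToken"]

# The Cognito user pool authorizer validates the ID token before invoking the backend
response = requests.get(API_URL, headers={"Authorization": id_token})
print(response.status_code)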

You can also use resource policies to restrict access to a specific VPC, VPC endpoint, IP address range (for example, a data center), or AWS account.

API Gateway resource policies are different from IAM identity policies. IAM identity policies are attached to IAM users, groups, or roles. These policies define what that identity can do on which resources. For example, in the serverless airline, the IAM role AppsyncLoyaltyRestApiIamRole specifies that appsync.amazonaws.com can perform an execute-api:Invoke on the specific API Gateway resource arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${LoyaltyApi}/*/*/*.

Resource policies are attached to resources such as an Amazon S3 bucket, or an API Gateway resource or method. The policies define what identities can access the resource.

IAM access is determined by a combination of identity policies and resource policies.

For more information on the differences, see “Identity-Based Policies and Resource-Based Policies”. To see which services support resource-based policies, see “AWS Services That Work with IAM”.

API Gateway HTTP APIs support JWT authorizers as part of the OpenID Connect (OIDC) and OAuth 2.0 frameworks.

API Gateway WebSocket APIs support AWS IAM and Lambda authorizers.

With AWS AppSync public endpoints, you can enable authorization with the following:

  • AWS IAM
  • Amazon Cognito User pools for email and password functionality
  • Social providers (Facebook, Google, and Login with Amazon)
  • Enterprise federation with SAML

Within the serverless airline, AWS Amplify Console hosts the public, user-facing site. Amplify Console provides a git-based workflow for building, deploying, and hosting serverless web applications. It manages the hosting of frontend assets for single page app (SPA) frameworks as well as static websites, along with an optional serverless backend. Frontend assets are stored in Amazon S3, and the Amazon CloudFront global edge network distributes the web app globally.

The AWS Amplify CLI toolchain allows you to add backend resources using AWS CloudFormation.

Using Amplify CLI to add authentication

For the serverless airline, I use the Amplify CLI to add authentication using Amazon Cognito with the following command:

amplify add auth

When prompted, I specify the authentication parameters I require.

Amplify add auth

Amplify CLI creates a local CloudFormation template. Use the following command to deploy the updated authentication configuration to the cloud:

amplify push

Once the deployment is complete, I view the deployed authentication nested stack resources from within the CloudFormation Console. I see the Amazon Cognito user pool.

View Amplify authentication CloudFormation nested stack resources

For a more detailed walkthrough using Amplify CLI to add authentication for the serverless airline, see the build video.

For more information on Amplify CLI and authentication, see “Authentication with Amplify”.

Conclusion

To help protect against unauthorized access and prevent unnecessary use of serverless API resources, control access using authentication and authorization mechanisms.

In this post, I cover the different mechanisms for authorization available for API Gateway and AWS AppSync. I explain the different approaches for public or private endpoints and show how to use IAM to control access to internal or private API consumers. I walk through how to use the Amplify CLI to create an Amazon Cognito user pool.

This well-architected question will be continued in a future post where I continue using the Amplify CLI to add a GraphQL API. I will explain how to view JSON Web Tokens (JWT) claims, and how to use Cognito identity pools to grant temporary access to AWS services. I will also show how to use API keys and API Gateway usage plans for rate limiting and throttling requests.

Building well-architected serverless applications: Approaching application lifecycle management – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-approaching-application-lifecycle-management-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and explanation of the example application.

Question OPS2: How do you approach application lifecycle management?

Adopt lifecycle management approaches that improve the flow of changes to production with higher fidelity, fast feedback on quality, and quick bug fixing. These practices help you rapidly identify, remediate, and limit changes that impact customer experience. By having an approach to application lifecycle management, you can reduce errors caused by manual processes and increase the levels of control to gain confidence your workload operates as intended.

Required practice: Use infrastructure as code and stages isolated in separate environments

Infrastructure as code is a process of provisioning and managing cloud resources by storing application configuration in a template file. Using infrastructure as code helps to deploy applications in a repeatable manner, reducing errors caused by manual processes such as creating resources in the AWS Management Console.

Storing code in a version control system enables tracking and auditing of changes and releases over time. This is used to roll back changes safely to a known working state if there is an issue with an application deployment.

Infrastructure as code

For AWS Cloud development, the built-in choice for infrastructure as code is AWS CloudFormation. The template file, written in JSON or YAML, contains a description of the resources an application needs. CloudFormation automates the deployment and ongoing updates of the resources by creating CloudFormation stacks.

CloudFormation code example creating infrastructure

There are a number of higher-level tools and frameworks that abstract and then generate CloudFormation. A serverless-specific framework helps model the infrastructure necessary for serverless workloads, providing either declarative or imperative mechanisms to define event sources for functions. It automatically wires permissions between resources and adds resource configuration, code packaging, and any other infrastructure necessary for a serverless application to run.

The AWS Serverless Application Model (AWS SAM) is an AWS open-source framework optimized for serverless applications. The AWS Cloud Development Kit (AWS CDK) allows you to provision cloud resources using familiar programming languages such as TypeScript, JavaScript, Python, Java, and C#/.NET. There are also third-party solutions for creating serverless cloud resources, such as the Serverless Framework.
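
As an illustration of the programming-language approach (a sketch using the current AWS CDK v2 Python API, not code from the airline application), the following app defines a single Lambda function; the handler name and asset directory are placeholders.

from aws_cdk import App, Stack, aws_lambda as _lambda
from constructs import Construct

class HelloStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Define a Lambda function; "src/" and "app.handler" are placeholders
        _lambda.Function(
            self,
            "HelloFunction",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="app.handler",
            code=_lambda.Code.from_asset("src"),
        )

app = App()
HelloStack(app, "HelloStack")
app.synth()

Running cdk synth generates the CloudFormation template, and cdk deploy creates the stack.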

The AWS Amplify Console provides a git-based workflow for building, deploying, and hosting serverless applications including both the frontend and backend. The AWS Amplify CLI toolchain enables you to add backend resources using CloudFormation.

For a large number of resources, consider breaking common functionality such as monitoring, alarms, or dashboards into separate infrastructure as code templates. With CloudFormation, use nested stacks to help deploy them as part of your serverless application stack. When using AWS SAM, import these nested stacks as nested applications from the AWS Serverless Application Repository.

AWS CloudFormation nested stacks

Here is an example AWS SAM template using nested stacks. There are two AWS::Serverless::Application nested resources, api.template.yaml and database.template.yaml. For more information on nested stacks, see the AWS Partner Network blog post: CloudFormation Nested Stacks Primer.

Version control

The serverless airline example application used in this series uses Amplify Console to provide part of the backend resources, including authentication using Amazon Cognito, and a GraphQL API using AWS AppSync.

The airline application code is stored in GitHub as a version control system. Fork, or copy, the application to your GitHub account. Configure Amplify Console to connect to the GitHub fork.

When pushing code changes to a fork, Amplify Console automatically deploys these backend resources along with the rest of the application. It hosts the application at the Production branch URL, and you can also configure a custom domain name if needed.

AWS Amplify Console App details

The Amplify Console configuration to create the API and Authentication backend resources is found in the backend-config.json file. The resources are provisioned during the Amplify Console build phase.

To view the deployed resources, within the Amplify Console, navigate to the awsserverlessairline application. Select Backend environments and then select an environment, in this example sampledev.

Select the API and Authentication tabs to view the created backend resources.

AWS Amplify Console deployed backend resources

Using multiple tools

Applications can use multiple tools and frameworks even within a single project to manage the infrastructure as code. Within the airline application, AWS SAM is also used to provision the rest of the serverless infrastructure using nested stacks. During the Amplify Console build process, the Makefile contains the AWS SAM build instructions for each application service.

For example, the AWS SAM build instructions to deploy the booking service are as follows:

deploy.booking: ##=> Deploy booking service using SAM
	$(info [*] Packaging and deploying Booking service...)
	cd src/backend/booking && \
		sam build && \
		sam package \
			--s3-bucket $${DEPLOYMENT_BUCKET_NAME} \
			--output-template-file packaged.yaml && \
		sam deploy \
			--template-file packaged.yaml \
			--stack-name $${STACK_NAME}-booking-$${AWS_BRANCH} \
			--capabilities CAPABILITY_IAM \
			--parameter-overrides \
	BookingTable=/$${AWS_BRANCH}/service/amplify/storage/table/booking \
	FlightTable=/$${AWS_BRANCH}/service/amplify/storage/table/flight \
	CollectPaymentFunction=/$${AWS_BRANCH}/service/payment/function/collect \
	RefundPaymentFunction=/$${AWS_BRANCH}/service/payment/function/refund \
	AppsyncApiId=/$${AWS_BRANCH}/service/amplify/api/id \
	Stage=$${AWS_BRANCH}

Each service has its own AWS SAM template.yml file. The files contain the resources for each of the booking, catalog, log-processing, loyalty, and payment services. This means that the services can be managed independently within the application as separate stacks. In larger applications, these services may be managed by separate teams, or be in separate repositories, environments, or AWS accounts. It may make sense to split out some common functionality, such as alarms or dashboards, into separate infrastructure as code templates.

AWS SAM can also use IAM roles to assume temporary credentials and deploy a serverless application to separate AWS accounts.

For more information on managing serverless code, see Best practices for organizing larger serverless applications.

View the deployed resources in the AWS CloudFormation Console. Select Stacks from the left-side navigation bar, and select the View nested toggle.

Viewing CloudFormation nested stacks

The serverless airline application is a more complex example, comprising multiple services composed of multiple CloudFormation stacks. Some stacks are managed via Amplify Console and others via AWS SAM. Infrastructure as code is not only for large and complex applications. As a best practice, we suggest using AWS SAM or another framework for even simple, small serverless applications with a single stack. For a getting started tutorial, see the example Deploying a Hello World Application.

Improvement plan summary:

  1. Use a serverless framework to help you execute functions locally, build and package application code. Separate packaging from deployment, deploy to isolated stages in separate environments, and support secrets via configuration management systems.
  2. For a large number of resources, consider breaking common functionality such as alarms into separate infrastructure as code templates.

Conclusion

Introducing application lifecycle management improves the development, deployment, and management of serverless applications. In this post, I cover using infrastructure as code with version control to deploy applications in a repeatable manner. This reduces errors caused by manual processes and gives you more confidence your application works as expected.

This well-architected question will continue in an upcoming post where I look further at deploying to multiple stages using temporary environments, and rollout deployments.

ICYMI: Serverless Q1 2020

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/icymi-serverless-q1-2020/

Welcome to the ninth edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

A calendar of January, February, and March.

In case you missed our last ICYMI, check out what happened last quarter here.

Launches/New products

In 2018, we launched the AWS Well-Architected Tool. This allows you to review workloads in a structured way based on the AWS Well-Architected Framework. Until now, we’ve provided workload-specific advice using the concept of a “lens.”

As of February, this tool now lets you apply those lenses to provide greater visibility in specific technology domains to assess risks and find areas for improvement. Serverless is the first available lens.

You can apply a lens when defining a workload in the Well-Architected Tool console.

A screenshot of applying a lens.

HTTP APIs beta was announced at AWS re:Invent 2019. Now HTTP APIs is generally available (GA) with more features to help developers build APIs better, faster, and at lower cost. HTTP APIs for Amazon API Gateway is built from the ground up based on lessons learned from building REST and WebSocket APIs, and looking closely at customer feedback.

For the majority of use cases, HTTP APIs offers up to 60% reduction in latency.

HTTP APIs cost at least 71% less than API Gateway REST APIs.

A bar chart showing the cost comparison between HTTP APIs and API Gateway.

HTTP APIs also offers a more intuitive experience and powerful features, like easily configuring cross-origin resource sharing (CORS), JWT authorizers, auto-deploying stages, and simplified route integrations.

AWS Lambda

You can now view and monitor the number of concurrent executions of your AWS Lambda functions by version and alias. Previously, the ConcurrentExecutions metric measured and emitted the sum of concurrent executions for all functions in the account, including those with a reserved concurrency limit specified.

Now, the ConcurrentExecutions metric is emitted for all functions, versions, and aliases. You can use it to see which functions consume your concurrency limits and to estimate peak traffic based on consumption averages. Fine-grained visibility in these areas can help you plan appropriate configuration for Provisioned Concurrency.
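
As a rough sketch (the function name and alias are placeholders), you could retrieve per-alias concurrency with the CloudWatch API in Python; for alias-level metrics, Lambda publishes the Resource dimension as function-name:alias alongside FunctionName.

import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

# Hypothetical function and alias names
function_name = "loyalty-get"
alias = "prod"

# Maximum concurrent executions for the alias over the last hour, in 1-minute periods
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="ConcurrentExecutions",
    Dimensions=[
        {"Name": "FunctionName", "Value": function_name},
        {"Name": "Resource", "Value": f"{function_name}:{alias}"},
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=60,
    Statistics=["Maximum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Maximum"])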

A Lambda function written in Ruby 2.7.

AWS Lambda now supports Ruby 2.7. Developers can take advantage of new features in this latest release of Ruby, like pattern matching, argument forwarding and numbered arguments. Lambda functions written in Ruby 2.7 run on Amazon Linux 2.

Updated AWS Mock .NET Lambda Test Tool

.NET Core 3.1 is now a supported runtime in AWS Lambda. You can deploy to Lambda by setting the runtime parameter value to dotnetcore3.1. Updates have also been released for the AWS Toolkit for Visual Studio and the .NET Core global tool Amazon.Lambda.Tools. These make it easier to build and deploy your .NET Core 3.1 Lambda functions.

With .NET Core 3.1, you can take advantage of all the new features it brings to Lambda, including C# 8.0 and F# 4.7 support, .NET Standard 2.1 support, a new JSON serializer, and a ReadyToRun feature for ahead-of-time compilation. The AWS Mock .NET Lambda Test Tool has also been updated to support .NET Core 3.1, with new features to help debug and improve your workloads.

Cost Savings

Last year we announced Savings Plans for AWS Compute Services. This is a flexible discount model provided in exchange for a commitment of compute usage over a period of one or three years. AWS Lambda now participates in Compute Savings Plans, allowing customers to save money. Visit the AWS Cost Explorer to get started.

Amazon API Gateway

With HTTP APIs now generally available, customers can build APIs for services behind private ALBs, private NLBs, and IP-based services registered in AWS Cloud Map, such as ECS tasks. To make it easier to work across API Gateway REST APIs and HTTP APIs, customers can now use the same custom domain for both. This release also enables granular throttling for routes, improves usability when using Lambda as a backend, and adds better error logging.

AWS Step Functions

AWS Step Functions VS Code plugin.

We launched the AWS Toolkit for Visual Studio Code back in 2019 and last month we added toolkit support for AWS Step Functions. This enables you to define, visualize, and create workflows without leaving VS Code. As you craft your state machine, it is continuously rendered with helpful tools for debugging. The toolkit also allows you to update state machines in the AWS Cloud with ease.

To further help with debugging, we’ve added AWS Step Functions support for CloudWatch Logs. For standard workflows, you can select different levels of logging and can exclude logging of a workflow’s payload. This makes it easier to monitor event-driven serverless workflows and create metrics and alerts.

AWS Amplify

AWS Amplify is a framework for building modern applications, with a toolchain for easily adding services like authentication, storage, APIs, hosting, and more, all via command line interface.

Customers can now use the Amplify CLI to take advantage of AWS Amplify console features like continuous deployment, instant cache invalidation, custom redirects, and simple configuration of custom domains. This means you can do end-to-end development and deployment of a web application entirely from the command line.

Amazon DynamoDB

You can now easily increase the availability of your existing Amazon DynamoDB tables into additional AWS Regions without table rebuilds by updating to the latest version of global tables. You can benefit from improved replicated write efficiencies without any additional cost.

On-demand capacity mode is now available in the Asia Pacific (Osaka-Local) Region. This is a flexible capacity mode for DynamoDB that can serve thousands of requests per second without requiring capacity planning. DynamoDB on-demand offers simple pay-per-request pricing for read and write requests so that you only pay for what you use, making it easy to balance cost and performance.

AWS Serverless Application Repository

The AWS Serverless Application Repository (SAR) is a service for packaging and sharing serverless application templates using the AWS Serverless Application Model (SAM). Applications can be customized with parameters and deployed with ease. Previously, applications could only be shared publicly or with specific AWS account IDs. Now, SAR has added sharing for AWS Organizations. These new granular permissions can be added to existing SAR applications. Learn how to take advantage of this feature today to help improve your organization's productivity.

Amazon Cognito

Amazon Cognito, a service for managing identity providers and users, now supports CloudWatch Usage Metrics. This allows you to monitor events in near-real time, such as sign-in and sign-out. These can be turned into metrics or CloudWatch alarms at no additional cost.

Cognito User Pools now supports logging for all API calls with AWS CloudTrail. The enhanced CloudTrail logging improves governance, compliance, and operational and risk auditing capabilities. Additionally, Cognito User Pools now enables customers to configure case sensitivity settings for user aliases, including native user name, email alias, and preferred user name alias.

Serverless posts

Our team is always working to build and write content to help our customers better understand all our serverless offerings. Here is a list of the latest posts published to the AWS Compute Blog this quarter.

January

February

March

Tech Talks and events

We hold AWS Online Tech Talks covering serverless topics throughout the year. You can find these in the serverless section of the AWS Online Tech Talks page. We also delivered talks at conferences and events around the globe, regularly join in on podcasts, and record short videos you can watch to learn in quick, byte-sized chunks.

Here are the highlights from Q1.

January

February

March

Live streams

Rob Sutter, a Senior Developer Advocate on AWS Serverless, has started hosting Serverless Office Hours every Tuesday at 14:00 ET on Twitch. He’ll be imparting his wisdom on Step Functions, Lambda, Golang, and taking questions on all things serverless.

Check out some past sessions:

Happy Little APIs Season 2 is airing every other Tuesday on the AWS Twitch Channel. Check out the first episode, where Eric Johnson and Ran Ribenzaft, Serverless Hero and CTO of Epsagon, talk about private integrations with HTTP API.

Eric Johnson is also streaming “Sessions with SAM” every Thursday at 10AM PST. Each week Eric shows how to use SAM to solve different problems with serverless and how to leverage SAM templates to build out powerful serverless applications. Catch up on the last few episodes on our Twitch channel.

Relax with a cup of your favorite morning beverage every Friday at 12PM EST with a Serverless Coffee Break with James Beswick. These are chats about all things serverless with special guests. You can catch these live on Twitter or on your own time with these recordings.

AWS Serverless Heroes

This year, we’ve added some new faces to the list of AWS Serverless Heroes. The AWS Hero program is a selection of worldwide experts that have been recognized for their positive impact within the community. They share helpful knowledge and organize events and user groups. They’re also contributors to numerous open-source projects in and around serverless technologies.

Still looking for more?

The Serverless landing page has even more information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and Getting Started tutorials.