Tag Archives: well architected

Building well-architected serverless applications: Building in resiliency – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-building-in-resiliency-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

Reliability question REL2: How do you build resiliency into your serverless application?

Evaluate scaling mechanisms for serverless and non-serverless resources to meet customer demand. Build resiliency into your workload so that your serverless application withstands partial and intermittent failures across components that may only surface in production.

Required practice: Manage transaction, partial, and intermittent failures

Whenever one service or system calls another, there is a chance that failures can happen. Services or systems often don’t fail as a single unit, but rather suffer partial or transient failures. Applications should be designed to handle component failures as part of the architecture. The system should be designed to detect failure and, ideally, automatically heal itself.

Transaction failures can occur when a component is unavailable or under high load. Partial failures can occur when a percentage of requests succeeds, including during batch processing. Intermittent failures might occur when a request fails for a short period of time due to network or other transient issues.

AWS serverless services, including AWS Lambda, are fault-tolerant and designed to handle failures. If a service invokes a Lambda function and there is a service disruption, Lambda invokes the function in a different Availability Zone.

When you invoke a function directly, you determine the strategy for handling errors. You can retry, send the event to a destination or queue for debugging, or ignore the error. Clients such as the AWS Command Line Interface (CLI) and the AWS SDK retry on client timeouts, throttling errors (429), and other errors that are not caused by a bad request.

When you invoke a function indirectly, you must be aware of the retry behavior of the invoker and any service that the request encounters along the way. For more information, see “Error handling and automatic retries in AWS Lambda”. You can configure Maximum Retry Attempts and Maximum Event Age for asynchronous invocations.

When reading from Amazon Kinesis Data Streams and Amazon DynamoDB Streams, Lambda retries the entire batch of items. Retries continue until the records expire or exceed the maximum age that you configure on the event source mapping. You can also configure the event source mapping to split a failed batch into two batches. Retrying with smaller batches isolates bad records and works around timeout issues.
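
As an illustration, the following boto3 snippet configures these failure-handling options on an existing event source mapping. This is a minimal sketch; the mapping UUID, retry limit, and record age values are placeholders for your own configuration.

import boto3

lambda_client = boto3.client('lambda')

# Placeholder UUID: use the identifier of your own event source mapping
lambda_client.update_event_source_mapping(
    UUID='00000000-0000-0000-0000-000000000000',
    BisectBatchOnFunctionError=True,     # split a failed batch in two and retry
    MaximumRetryAttempts=2,              # stop retrying a failed batch after two attempts
    MaximumRecordAgeInSeconds=3600       # skip records older than one hour
)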

Partial failures can occur in non-atomic operations. PutRecords for Kinesis and BatchWriteItem for DynamoDB return a successful response if at least one record is ingested successfully. Always inspect the response when using such operations and programmatically deal with partial failures.
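
As a sketch of this pattern, the following Python snippet inspects the PutRecords response and retries only the records that failed. The stream name is a placeholder, and a production version would combine this with backoff between attempts.

import boto3

kinesis = boto3.client('kinesis')

def put_records_handling_partial_failures(records, max_attempts=3):
    """Send records to Kinesis, retrying only the failed subset."""
    for attempt in range(max_attempts):
        response = kinesis.put_records(
            StreamName='my-stream',  # placeholder stream name
            Records=records
        )
        if response['FailedRecordCount'] == 0:
            return
        # Failed entries contain an ErrorCode; results are in request order
        records = [
            record for record, result in zip(records, response['Records'])
            if 'ErrorCode' in result
        ]
    raise RuntimeError(f"{len(records)} records still failing after {max_attempts} attempts")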

Use exponential backoff with jitter

The simplest technique for dealing with failures in a networked environment is to retry calls until they succeed. This technique increases the reliability of the application and reduces operational costs for the developer.

However, it is not always safe to retry. A retry can further increase the load on the system being called if the system is already failing due to an overload. To avoid this problem, use backoff. Instead of retrying immediately and aggressively, the client waits some amount of time between tries. The most common pattern is an exponential backoff, which uses exponentially longer wait times between retries. This is typically capped to a maximum delay and number of retries.

If all backoff retries are still happening at the same time, this can still overload a system or cause contention. To avoid this problem, use jitter. Jitter adds some amount of randomness to the backoff to spread the retries around in time. This can help prevent large bursts by spreading out the rate when clients connect. For more information see the Amazon Builders’ Library article “Timeouts, retries, and backoff with jitter” and AWS Architecture blog post “Exponential Backoff And Jitter”.
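
The following Python sketch shows the “full jitter” variant described in those posts: each wait time is drawn at random between zero and the capped exponential delay. The base delay, cap, and attempt limit are illustrative values.

import random
import time

def full_jitter_delay(attempt, base=0.1, cap=5.0):
    """Capped exponential backoff with full jitter, in seconds."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def call_with_retries(operation, max_attempts=5):
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise                      # retries exhausted, surface the error
            time.sleep(full_jitter_delay(attempt))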

Exponential backoff and jitter

When your application responds to callers in fail-fast scenarios or when performance is degraded, inform the caller via headers or metadata when they can retry.

Each AWS SDK implements automatic retry logic including exponential backoff. For downstream calls, you can adjust AWS and third-party SDK retries, backoffs, TCP, and HTTP timeouts. This helps you decide when to stop retrying. For more information, see the documentation and troubleshooting steps for Lambda and the AWS SDK.
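
For example, with the AWS SDK for Python you can tune these values using a botocore Config object. The timeout and retry values below are illustrative; choose them based on your downstream dependencies.

import boto3
from botocore.config import Config

# Shorter timeouts and explicit retry limits for downstream calls
config = Config(
    connect_timeout=2,                               # TCP connection timeout (seconds)
    read_timeout=5,                                  # HTTP read timeout (seconds)
    retries={'max_attempts': 3, 'mode': 'standard'}  # standard mode applies backoff
)

dynamodb = boto3.client('dynamodb', config=config)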

Use a dead-letter queue mechanism to retain, investigate and retry failed transactions

There are a number of ways to handle message failures including destinations and dead-letter queues.

You can configure Lambda to send records of asynchronous invocations to another destination service. These include Amazon Simple Queue Service (SQS), Amazon Simple Notification Service (SNS), Lambda, and Amazon EventBridge. You can configure separate destinations for events that fail processing and events that are successfully processed. The invocation record contains details about the event, the response, and the reason that the record was sent.

The following example shows a function that sends a record of a successful invocation to an EventBridge event bus. When an event fails all processing attempts, Lambda sends an invocation record to an SQS queue. It includes the function’s response in the invocation record.
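
As a sketch, you can configure destinations together with the retry and event age controls in one call. The function name and destination ARNs below are placeholders.

import boto3

lambda_client = boto3.client('lambda')

lambda_client.put_function_event_invoke_config(
    FunctionName='my-function',               # placeholder function name
    MaximumRetryAttempts=2,
    MaximumEventAgeInSeconds=3600,
    DestinationConfig={
        'OnSuccess': {'Destination': 'arn:aws:events:us-east-1:123456789012:event-bus/default'},
        'OnFailure': {'Destination': 'arn:aws:sqs:us-east-1:123456789012:my-dlq'}
    }
)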

AWS Lambda destinations for asynchronous invocation

SNS, SQS, Lambda, and EventBridge support dead-letter queues (DLQs). DLQs make your applications more resilient and durable by storing messages or events that can’t be processed correctly in a dedicated SQS queue. This helps you debug your application by isolating the problematic messages to determine why their processing failed. Once you have resolved the issue, re-process the failed message. For more information, see “When should I use a dead-letter queue?” There is an example serverless application to redrive the messages from an SQS DLQ back to its source SQS queue.

For Lambda, DLQs provide an alternative to a failure destination. Using Lambda destinations is preferable for asynchronous invocations.

Good practice: Orchestrate long-running transactions

Long-running transactions can be processed by one or multiple components. Consider implementing the saga pattern using state machines for these types of transactions.

The saga pattern coordinates transactions between multiple microservices as part of a state machine. Each service that performs a transaction publishes an event to trigger the next transaction in the saga. This continues until the transaction chain is complete. If a transaction fails, the saga orchestrates a series of compensating transactions that undo the changes made by the preceding transactions.

This is preferable to handling complex or long-running transactions within application code. State machines prevent cascading failures and avoid tightly coupling components with orchestrating logic and business logic.

Use a state machine to visualize distributed transactions, and to separate business logic from orchestration logic.

AWS Step Functions lets you coordinate multiple AWS services into serverless workflows via state machines. Within Step Functions, you can set separate retries, backoff rates, max attempts, intervals, and timeouts. These are set for every step of your state machine using a declarative language.

In the serverless airline example used in this series, Step Functions is used to orchestrate the Booking microservice. The ProcessBooking state machine handles all the necessary steps to create bookings, including payment.

Booking service Step Functions state machine

The state machine uses a combination of service integrations using DynamoDB, SQS, and Lambda functions to coordinate transactions and handle failures.

For example, the Reserve Booking task invokes a Lambda function. The task has retry and error handling configured as part of the task definition.

"Reserve Booking": {
	"Type": "Task",
	"Resource": "${ReserveBooking.Arn}",
	"TimeoutSeconds": 5,
	"Retry": [
		{
			"ErrorEquals": [
				"BookingReservationException"
			],
			"IntervalSeconds": 1,
			"BackoffRate": 2,
			"MaxAttempts": 2
		}
	],
	"Catch": [
		{
			"ErrorEquals": [
				"States.ALL"
			],
			"ResultPath": "$.bookingError",
			"Next": "Cancel Booking"
		}
	],
	"ResultPath": "$.bookingId",
	"Next": "Collect Payment"
},

Step Functions supports direct service integrations, including DynamoDB. The Reserve Flight task directly updates the flightTable without requiring a Lambda function.

"Reserve Flight": {
	"Type": "Task",
	"Resource": "arn:aws:states:::dynamodb:updateItem",
	"Parameters": {
		"TableName.$": "$.flightTable",
		"Key": {
			"id": {
				"S.$": "$.outboundFlightId"
			}
		},
		"UpdateExpression": "SET seatCapacity = seatCapacity - :dec",
		"ExpressionAttributeValues": {
			":dec": {
				"N": "1"
			},
			":noSeat": {
				"N": "0"
			}
		},
		"ConditionExpression": "seatCapacity > :noSeat"
	},

By default, when a state reports an error, Step Functions causes the execution to fail entirely.

Utilize dead-letter queues in response to failed state machine executions

Any state within the Step Functions workflow can encounter runtime errors. These include state machine definition issues, task failures such as Lambda function exceptions, or transient issues such as network connectivity issues. For more information, see “Error handling in Step Functions”.

Use the Step Functions service integration with SQS to send failed transactions to a DLQ as the final step. This adds a higher level of durability within your state machines.

For example, the airline Notify Failed Booking final task catches failed states from four previous steps. It sends the results to the Booking DLQ.

Booking service Step Functions DLQ

The message includes the output of the previous failed states for further troubleshooting.

"Booking DLQ": {
	"Type": "Task",
	"Resource": "arn:aws:states:::sqs:sendMessage",
	"Parameters": {
		"QueueUrl": "${BookingsDLQ}",
		"MessageBody.$": "$"
	},
	"ResultPath": "$.deadLetterQueue",
	"Next": "Booking Failed"
},

The Step Functions documentation has more information on calling SQS.

Conclusion

Build resiliency into your workloads. This makes sure that your application can withstand partial and intermittent failures across components that may only surface in production.

In this post, I cover managing failures using retries, exponential backoff, and jitter. I explain how DLQs can isolate failed messages. I show how to use state machines to orchestrate long-running transactions rather than handling these in application code.

This well-architected question continues in part 2 where I look at managing duplicate and unwanted events with idempotency and an event schema. I cover how to consider scaling patterns at burst rates by managing account limits and show relevant metrics to evaluate.

For more serverless learning resources, visit Serverless Land.

Building well-architected serverless applications: Regulating inbound request rates – part 2

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-regulating-inbound-request-rates-part-2/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

Reliability question REL1: How do you regulate inbound request rates?

This post continues part 1 of this reliability question. Previously, I cover controlling inbound request rates using throttling. I go through how to use throttling to control steady-rate and burst rate requests. I show some solutions for performance testing to identify the request rates that your workload can sustain before impacting performance.

Good practice: Use, analyze, and enforce API quotas

API quotas limit the maximum number of requests a given API key can submit within a specified time interval. Metering API consumers provides a better understanding of how different consumers use your workload at sustained and burst rates at any point in time. With this information, you can determine fine-grained rate limits across multiple quotas. You can set these according to the needs of different consumer groups, and adjust the limits on a regular basis.

Segregate API consumers’ steady-rate requests and their quota into multiple buckets or tiers

Amazon API Gateway usage plans allow your API consumer to access selected APIs at agreed-upon request rates and quotas. These help your consumers meet their business requirements and budget constraints. Create and attach API keys to usage plans to control access to certain API stages. I show how to create usage plans and how to associate them with API keys in “Building well-architected serverless applications: Controlling serverless API access – part 2”.
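
As a minimal sketch, the following snippet creates a usage plan with throttle and quota limits and attaches an API key to it. The API ID, stage name, and limits are placeholder values.

import boto3

apigw = boto3.client('apigateway')

# Create a usage plan with steady-rate, burst, and quota limits (illustrative values)
plan = apigw.create_usage_plan(
    name='gold-tier',
    throttle={'rateLimit': 100.0, 'burstLimit': 200},
    quota={'limit': 100000, 'period': 'MONTH'},
    apiStages=[{'apiId': 'a1b2c3d4e5', 'stage': 'prod'}]  # placeholder API and stage
)

# Create an API key and associate it with the usage plan
key = apigw.create_api_key(name='partner-key', enabled=True)
apigw.create_usage_plan_key(usagePlanId=plan['id'], keyId=key['id'], keyType='API_KEY')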

API key associated with usage plan

You can extract utilization data from usage plans to analyze API usage on a per-API key basis. In the example, I show how to use usage plans to see how many requests are made.

View API key usage

This allows you to generate billing documents and determine whether your customers need higher or lower limits. Have a mechanism to allow customers to request higher limits preemptively. When customers anticipate greater API usage, they can take action proactively.

API Gateway Lambda authorizers can dynamically associate API keys to a given request. This can be used where you do not control API consumers, or want to associate API keys based on your own criteria. For more information, see the documentation.

You can also visualize usage plans with Amazon QuickSight using enriched API Gateway access logs.

Visualize usage plans with Amazon QuickSight

Define whether your API consumers are end users or machines

Understanding your API consumers helps you manage how they connect to your API. This helps you define a request access pattern strategy, which can distinguish between end users or machines.

Machine consumers make automated connections to your API, which may require a different access pattern to end users. You may decide to prioritize end user consumers to provide a better experience. Machine consumers may be able to handle request throttling automatically.

Best practice: Use mechanisms to protect non-scalable resources

Limit component throughput by enforcing how many transactions it can accept

AWS Lambda functions can scale faster than traditional resources, such as relational databases and cache systems. Protect your non-scalable resources by ensuring that components that scale quickly do not exceed the throughput of downstream systems. This can prevent system performance from degrading. There are a number of ways to achieve this, either directly or via buffer mechanisms such as queues and streams.

For relational databases such as Amazon RDS, you can limit the number of connections per user, in addition to the global maximum number of connections. With Amazon RDS Proxy, your applications can pool and share database connections to improve their ability to scale.

Amazon RDS Proxy

For additional options for using RDS with Lambda, see the AWS Serverless Hero blog post “How To: Manage RDS Connections from AWS Lambda Serverless Functions”.

Cache results and only connect to, and fetch data from databases when needed. This reduces the load on the downstream database. Adjust the maximum number of connections for caching systems. Include a caching expiration mechanism to prevent serving stale records. For more information on caching implementation patterns and considerations, see “Caching Best Practices”.

Lambda provides managed scaling. When a function is first invoked, the Lambda service creates an instance of the function to process the event. This is called a cold start. After completion, the function remains available for a period of time to process subsequent events. These are called warm starts. If other events arrive while the function is busy, Lambda creates more instances of the function to handle these requests concurrently as cold starts. The following example shows 10 events processed in six concurrent requests.

Lambda concurrency

You can control the number of concurrent function invocations to both reserve and limit the maximum concurrency your function can achieve. You can configure reserved concurrency to set the maximum number of concurrent instances for the function. This can protect downstream resources such as a database by ensuring Lambda can only scale up to the number of connections the database can support.

For example, you may have a traditional database or external API that can only support a maximum of 50 concurrent connections. You can set the maximum number of concurrent Lambda functions using the function concurrency settings. Setting the value to 50 ensures that the traditional database or external API is not overwhelmed.
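
For example, the following call reserves 50 concurrent executions for a function, which also caps it at that value. The function name is a placeholder.

import boto3

lambda_client = boto3.client('lambda')

# Cap the function at 50 concurrent executions to match the database connection limit
lambda_client.put_function_concurrency(
    FunctionName='my-function',          # placeholder function name
    ReservedConcurrentExecutions=50
)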

Edit Lambda concurrency

You can also set the Lambda function concurrency to 0, which disables the Lambda function in the event of anomalies.

Another solution to protect downstream resources is to use an intermediate buffer. A buffer can persistently store messages in a stream or queue until a receiver processes them. This helps you control how fast messages are processed, which can protect the load on downstream resources.

Amazon Kinesis Data Streams allows you to collect and process large streams of data records in real time, and can act as a buffer. Streams consist of a set of shards that contain a sequence of data records. When using Lambda to process records, it processes one batch of records at a time from each shard.

Kinesis Data Streams controls concurrency at the shard level, meaning that a single shard has a single concurrent invocation. This can reduce downstream calls to non-scalable resources such as a traditional database. Kinesis Data Streams also supports batch windows of up to 5 minutes and configurable batch record sizes. These can also be used to control how frequently invocations occur.
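
As a sketch, the following snippet creates a Kinesis event source mapping with a batch size and batch window to control invocation frequency. The stream and function names are placeholders.

import boto3

lambda_client = boto3.client('lambda')

lambda_client.create_event_source_mapping(
    EventSourceArn='arn:aws:kinesis:us-east-1:123456789012:stream/my-stream',
    FunctionName='my-function',
    StartingPosition='LATEST',
    BatchSize=100,                        # records per invocation
    MaximumBatchingWindowInSeconds=60     # wait up to 60 seconds to fill a batch
)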

To learn how to manage scaling with Kinesis, see the documentation. To learn more about how Lambda works with Kinesis, read the blog series “Building serverless applications with streaming data”.

Lambda and Kinesis shards

Amazon Simple Queue Service (SQS) is a fully managed serverless message queuing service that enables you to decouple and scale microservices. You can offload tasks from one component of your application by sending them to a queue and processing them asynchronously.

SQS can act as a buffer, using a Lambda function to process the messages. Lambda polls the queue and invokes your Lambda function synchronously with an event that contains queue messages. Lambda reads messages in batches and invokes your function once for each batch. When your function successfully processes a batch, Lambda deletes its messages from the queue.

You can protect downstream resources using the Lambda concurrency controls. This limits the number of concurrent Lambda functions that pull messages off the queue. The messages persist in the queue until Lambda can process them. For more information, see “Using AWS Lambda with Amazon SQS”.

Lambda and SQS

Conclusion

Regulating inbound requests helps you adapt different scaling mechanisms based on customer demand. You can achieve better throughput for your workloads and make them more reliable by controlling requests to a rate that your workload can support.

In this post, I cover using, analyzing, and enforcing API quotas using usage plans and API keys. I show mechanisms to protect non-scalable resources such as using RDS Proxy to protect downstream databases. I show how to control the number of Lambda invocations using concurrency controls to protect downstream resources. I explain how you can use streams and queues as an intermediate buffer to store messages persistently until a receiver processes them.

In the next post in the series, I cover the second reliability question from the Well-Architected Serverless Lens, building resiliency into serverless applications.

For more serverless learning resources, visit Serverless Land.

Choosing a Well-Architected CI/CD approach: Open Source on AWS

Post Syndicated from Mikhail Vasilyev original https://aws.amazon.com/blogs/devops/choosing-a-well-architected-ci-cd-approach-open-source-on-aws/

Introduction

When building a CI/CD platform, it is important to make an informed decision regarding every underlying tool. This post explores the criteria for selecting each tool, focusing on a balance between meeting functional and non-functional requirements and maximizing value.

Your first decision: source code management

Source code is potentially your most valuable asset, and so we start by choosing a source code management tool. These tools normally have high non-functional requirements in order to protect your assets and to ensure they are available to the organization when needed. The requirements usually include demand for high durability, high availability (HA), consistently high throughput, and strong security with role-based access controls.

At the same time, source code management tools normally have many specific functional requirements as well. For example, the ability to provide collaborative code review in the UI, flexible and tunable merge policies including both automated and manual gates (code checks), and out-of-box UI-level integrations with numerous other tools. These kinds of integrations can include enabling monitoring, CI, chats, and agile project management.

Many teams also treat source code management tools as their portal into other CI/CD tools. They make them shareable between teams, and might prefer to stay within one single context and user interface throughout the entire DevOps cycle. Many source code management tools are actually a stack of services that support multiple steps of your CI/CD workflows from within a single UI. This makes them an excellent starting point for building your CI/CD platforms.

The first decision you need to make is whether to go with an open source solution for managing code or with an AWS-managed solution, such as AWS CodeCommit. Open source solutions include (but are not limited to) the following: Gerrit, Gitlab, Gogs, and Phabricator.

Your decision will be influenced by the amount of benefit your team can gain from the flexibility provided through open source, and how well your team can support deploying and managing these solutions. You will also need to consider the infrastructure and management overhead cost.

Engineering teams that have the capacity to develop their own plugins for their CI/CD platforms, or who even contribute directly to open source projects, will often prefer open source solutions for the flexibility they provide. This will be especially true if they are fluent in designing and supporting their own cloud infrastructure. If the team gets more value by trading the flexibility of open source for not having to worry about managing infrastructure (especially if High Availability, Scalability, Durability, and Security are more critical), an AWS-managed solution would be a better choice.

Source Code Management Solution

When the choice is made in favor of an open-source code management solution (such as Gitlab), the next decision will be how to architect the deployment. Will the team deploy to a single instance, or design for high availability, durability, and scalability? Teams that want to design Gitlab for HA can use the following guide to proceed: “Installing GitLab on Amazon Web Services (AWS)”.

By adopting AWS services (such as Amazon RDS, Amazon ElastiCache for Redis, and Autoscaling Groups), you can lower the management burden of supporting the underlying infrastructure in this self-managed HA scenario.

High level overview of self-managed HA Gitlab deployment

Your second decision: Continuous Integration engine

When selecting your CI engine, you might be able to benefit from additional features of previously selected solutions. Gitlab provides both source control services and built-in CI tools, called Gitlab CI. Gitlab Runners are responsible for running CI jobs, and the actual jobs are described in YAML files stored in Gitlab’s git repository along with the product code. For security and performance reasons, GitLab Runners should run on resources separate from your GitLab instance.

You could manage those resources or you could use one of the AWS services that can support deploying and managing Runners. The use of an on-demand service removes the expense of implementing and managing a capability that is undifferentiated heavy lifting for you. This provides cost optimization and enables operational excellence. You pay for what you use and the service team manages the underlying service.

Continuous Integration engine Solution

In an architecture example (below), Gitlab Runners are deployed in containers running on Amazon EKS. The team has less infrastructure to manage, can start focusing on development faster by not having to implement the capability, and can provision resources in an optimal way for their on-demand needs.

To further optimize costs, you can use EC2 Spot Instances for your EKS nodes. CI jobs are normally compute intensive and limited in run time. The runner jobs can easily be restarted on a different resource with little impact. This makes them tolerant of failure, and makes the use of EC2 Spot Instances very appealing. Amazon EKS and Spot Instances are supported out of the box in Gitlab. As a result, there is no integration to develop; only configuration is required.

To support infrastructure as code best practices, Runners are deployed with Helm and are stored and versioned as Helm charts. All of the infrastructure as code information used to implement the CI/CD platform itself is stored in templates such as Terraform.

High level overview of Infrastructure as Code on Gitlab and Gitlab CI

Your third decision: Container Registry

You will be unable to deploy Runners if the container images are not available. As a result, the primary non-functional requirements for your production container registry are likely to include high availability, durability, transparent scalability, and security. At the same time, your functional requirements for a container registry might be lower. It might be sufficient to have a simple UI, and simple APIs supporting basic flows. Customers looking for a managed solution can use Amazon ECR, which is OCI compliant and supports Helm Charts.

Container Registry Solution

For this set of requirements, the flexibility and feature velocity of open source tools does not provide an advantage. Self-supporting high availability and strengthened security could be costly in implementation time and long-term management. Based on [Blog post 1 Diagram 1], an AWS-managed solution provides cost advantages and has no management overhead. In this case, an AWS-managed solution is a better choice for your container registry than an open-source solution hosted on AWS. In this example, Amazon ECR is selected. Customers who prefer to go with open-source container registries might consider solutions like Harbor.

High level overview of Gitlab CI with Amazon ECR

Additional Considerations

Now that the main services for the CI/CD platform are selected, we will take a high level look at additional important considerations. You need to make sure you have observability into both infrastructure and applications, that backup tools and policies are in place, and that security needs are addressed.

There are many mechanisms to strengthen security, including the use of security groups. Use IAM for granular permission control. Robust policies can limit the exposure of your resources and control the flow of traffic. Implement policies to prevent your assets leaving your CI environment inappropriately. To protect sensitive data, such as worker secrets, encrypt these assets while in transit and at rest. Select a key management solution, such as AWS Key Management Service (AWS KMS), to reduce your operational burden and to support these activities. To deliver secure and compliant application changes rapidly while running operations consistently with automation, implement DevSecOps.

Amazon S3 is durable, secure, and highly available by design, making it the preferred choice of many customers for storing EBS-level backups. Amazon S3 satisfies the non-functional requirements for a backup store. It also supports versioning and tiered storage classes, making it cost-effective as well.

Your observability requirements may emphasize versatility and flexibility for application-level monitoring. Using Amazon CloudWatch to monitor your infrastructure and then extending your capabilities through an open-source solution such as Prometheus may be advantageous. You can get many of the benefits of both open-source Prometheus and AWS services with Amazon Managed Service for Prometheus (AMP). For interactive visualization of metrics, many customers choose solutions such as open-source Grafana, available as an AWS service, Amazon Managed Service for Grafana (AMG).

CI/CD Platform with Gitlab and AWS

Conclusion

We have covered how making informed decisions can maximize value and synergy between open-source solutions on AWS, such as Gitlab, and AWS-managed services, such as Amazon EKS and Amazon ECR. You can find the right balance of open-source tools and AWS services that will meet your functional and non-functional requirements, and help maximize the value you get from those resources.

Pete Goldberg, Director of Partnerships at GitLab: “When aligning your development process to AWS Well Architected Framework, GitLab allows customers to build and automate processes to achieve Operational Excellence. As a single tool designed to facilitate collaboration across the organization, GitLab simplifies the process to follow the Fully Separated Operating Model where Engineering and Operations come together via automated processes that remove the historical barriers between the groups. This gives organizations the ability to efficiently and rapidly deploy new features and applications that drive the business while providing the risk mitigation and compliance they require. By allowing operations teams to define infrastructure as code in the same tool that the engineering teams are storing application code, and allowing your automation bring those together for your CI/CD workflows companies can move faster while having compliance and controls built-in, providing the entire organization greater transparency. With GitLab’s integrations with different AWS compute options (EC2, Lambda, Fargate, ECS or EKS), customers can choose the best type of compute for the job without sacrificing the controls required to maintain Operational Excellence.”

 

Author bio

Mikhail is a Solutions Architect for RUS-CIS. Mikhail supports customers on their cloud journeys with Well-architected best practices and adoption of DevOps techniques on AWS. Mikhail is a fan of ChatOps, Open Source on AWS and Operational Excellence design principles.

Building well-architected serverless applications: Regulating inbound request rates – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-regulating-inbound-request-rates-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

Reliability question REL1: How do you regulate inbound request rates?

Defining, analyzing, and enforcing inbound request rates helps you achieve better throughput. Regulation helps you adapt different scaling mechanisms based on customer demand. By regulating inbound request rates, you can adapt client request submissions to a request rate that your workload can support.

Required practice: Control inbound request rates using throttling

Throttle inbound request rates using steady-rate and burst rate requests

Throttling requests limits the number of requests a client can make during a certain period of time. Throttling allows you to control your API traffic. This helps your backend services maintain their performance and availability levels by limiting the number of requests to actual system throughput.

To prevent your API from being overwhelmed by too many requests, Amazon API Gateway throttles requests to your API. These limits are applied across all clients using the token bucket algorithm. API Gateway sets a limit on a steady-state rate and a burst of request submissions. The algorithm is based on an analogy of filling and emptying a bucket of tokens representing the number of available requests that can be processed.

Each API request removes a token from the bucket. The throttle rate then determines how many requests are allowed per second. The throttle burst determines how many concurrent requests are allowed. I explain the token bucket algorithm in more detail in “Building well-architected serverless applications: Controlling serverless API access – part 2”.
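
To make the analogy concrete, here is a minimal Python sketch of a token bucket, where the refill rate corresponds to the steady-state rate and the bucket capacity to the burst limit. This illustrates the algorithm only; API Gateway’s implementation is managed for you.

import time

class TokenBucket:
    """Minimal token bucket: refill rate = steady-state rate, capacity = burst limit."""
    def __init__(self, rate, burst):
        self.rate = rate                  # tokens added per second
        self.capacity = burst             # maximum tokens the bucket can hold
        self.tokens = burst
        self.last_refill = time.monotonic()

    def allow_request(self):
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                      # no tokens left: request is throttled (429)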

Token bucket algorithm

API Gateway limits the steady-state rate and burst requests per second. These are shared across all APIs per Region in an account. For further information on account-level throttling per Region, see the documentation. You can request account-level rate limit increases using the AWS Support Center. For more information, see Amazon API Gateway quotas and important notes.

You can configure your own throttling levels, within the account and Region limits, to improve overall performance across all APIs in your account. This restricts overall request submissions so that they don’t exceed the account-level throttling limits.

You can also configure per-client throttling limits. Usage plans restrict client request submissions to within specified request rates and quotas. These are applied to clients using API keys that are associated with your usage policy as a client identifier. You can add throttling levels per API route, stage, or method that are applied in a specific order.

For more information on API Gateway throttling, see the AWS re:Invent presentation “I didn’t know Amazon API Gateway could do that”.

API Gateway throttling

You can also throttle requests by introducing a buffering layer using Amazon Kinesis Data Streams or Amazon SQS. Kinesis can limit the number of requests at the shard level while SQS can limit at the consumer level. For more information on using SQS as a buffer with Amazon Simple Notification Service (SNS), read “How To: Use SNS and SQS to Distribute and Throttle Events”.

Identify steady-rate and burst rate requests that your workload can sustain at any point in time before performance degrades

Load testing your serverless application allows you to monitor the performance of an application before it is deployed to production. Serverless applications can be simpler to load test, thanks to the automatic scaling built into many of the services. During a load test, you can identify quotas that may act as a limiting factor for the traffic you expect and take action.

Perform load testing for a sustained period of time. Gradually increase the traffic to your API to determine your steady-state rate of requests. Also use a burst strategy with no ramp up to determine the burst rates that your workload can serve without errors or performance degradation. There are a number of AWS Marketplace and AWS Partner Network (APN) solutions available for performance testing, such as Gatling Frontline, BlazeMeter, and Apica.

In the serverless airline example used in this series, you can run a performance test suite using Gatling, an open source tool.

To deploy the test suite, follow the instructions in the GitHub repository perf-tests directory. Uncomment the deploy.perftest line in the repository Makefile.

Perf-test makefile

Once the file is pushed to GitHub, AWS Amplify Console rebuilds the application, and deploys an AWS CloudFormation stack. You can run the load tests locally, or use an AWS Step Functions state machine to run the setup and Gatling load test simulation.

Performance test using Step Functions

The Gatling simulation script uses constantUsersPerSec and rampUsersPerSec to add users for a number of test scenarios. You can use the test to simulate load on the application. Once the tests run, Gatling generates a downloadable report.

Gatling performance results

Artillery Community Edition is another open-source tool for testing serverless APIs. You configure the number of requests per second and overall test duration, and it uses a headless Chromium browser to run its test flows. For Artillery, the maximum number of concurrent tests is constrained by your local computing resources and network. To achieve higher throughput, you can use Serverless Artillery, which runs the Artillery package on Lambda functions. As a result, this tool can scale up to a significantly higher number of tests.

For more information on how to use Artillery, see “Load testing a web application’s serverless backend”. This runs tests against APIs in a demo application. For example, one of the tests fetches 50,000 questions per hour. This calls an API Gateway endpoint and tests whether the AWS Lambda function, which queries an Amazon DynamoDB table, can handle the load.

Artillery performance test

This is a synchronous API so the performance directly impacts the user’s experience of the application. This test shows that the median response time is 165 ms with a p95 time of 201 ms.

Performance test API results

Another consideration for API load testing is whether the authentication and authorization service can handle the load. For more information on load testing Amazon Cognito and API Gateway using Step Functions, see “Using serverless to load test Amazon API Gateway with authorization”.

API load testing with authentication and authorization

Conclusion

Regulating inbound requests helps you adapt different scaling mechanisms based on customer demand. You can achieve better throughput for your workloads and make them more reliable by controlling requests to a rate that your workload can support.

In this post, I cover controlling inbound request rates using throttling. I show how to use throttling to control steady-rate and burst rate requests. I show some solutions for performance testing to identify the request rates that your workload can sustain before performance degradation.

This well-architected question will be continued where I look at using, analyzing, and enforcing API quotas. I cover mechanisms to protect non-scalable resources.

For more serverless learning resources, visit Serverless Land.

Building well-architected serverless applications: Implementing application workload security – part 2

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-implementing-application-workload-security-part-2/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

Security question SEC3: How do you implement application security in your workload?

This post continues part 1 of this security question. Previously, I cover reviewing security awareness documentation such as the Common Vulnerabilities and Exposures (CVE) database. I show how to use GitHub security features to inspect and manage code dependencies. I then show how to validate inbound events using Amazon API Gateway request validation.

Required practice: Store secrets that are used in your code securely

Store secrets such as database passwords or API keys in a secrets manager. Using a secrets manager allows for auditing access, easier rotation, and prevents exposing secrets in application source code. There are a number of AWS and third-party solutions to store and manage secrets.

AWS Partner Network (APN) member Hashicorp provides Vault to keep secrets and application data secure. Vault has a centralized workflow for tightly controlling access to secrets across applications, systems, and infrastructure. You can store secrets in Vault and access them from an AWS Lambda function to, for example, access a database. You can use the Vault Agent for AWS to authenticate with Vault, receive the database credentials, and then perform the necessary queries. You can also use the Vault AWS Lambda extension to manage the connectivity to Vault.

AWS Systems Manager Parameter Store allows you to store configuration data securely, including secrets, as parameter values.

AWS Secrets Manager enables you to replace hardcoded credentials in your code with an API call to Secrets Manager to retrieve the secret programmatically. You can protect, rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You can also generate secure secrets. By default, Secrets Manager does not write or cache the secret to persistent storage.

Parameter Store integrates with Secrets Manager. For more information, see “Referencing AWS Secrets Manager secrets from Parameter Store parameters.”
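
For example, you can retrieve a Secrets Manager secret through the Parameter Store API using the reserved /aws/reference/secretsmanager/ prefix. The secret name below is a placeholder.

import boto3

ssm = boto3.client('ssm')

# Read a Secrets Manager secret via Parameter Store using the reserved prefix
response = ssm.get_parameter(
    Name='/aws/reference/secretsmanager/my-database-secret',  # placeholder secret name
    WithDecryption=True
)
secret_string = response['Parameter']['Value']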

To show how Secrets Manager works, deploy the solution detailed in “How to securely provide database credentials to Lambda functions by using AWS Secrets Manager”.

The AWS CloudFormation stack deploys an Amazon RDS MySQL database with a randomly generated password. This is stored in Secrets Manager using a secret resource. A Lambda function behind an API Gateway endpoint returns the record count in a table from the database, using the required credentials. Lambda function environment variables store the database connection details and which secret to return for the database password. The password is not stored as an environment variable, nor in the Lambda function application code.

Lambda environment variables for Secrets Manager

The application flow is as follows:

  1. Clients call the API Gateway endpoint
  2. API Gateway invokes the Lambda function
  3. The Lambda function retrieves the database secrets using the Secrets Manager API
  4. The Lambda function connects to the RDS database using the credentials from Secrets Manager and returns the query results

View the password secret value, which is randomly generated as part of the stack deployment, in the Secrets Manager console.

Example password stored in Secrets Manager

The Lambda function includes the following code to retrieve the secret from Secrets Manager. The function then uses it to connect to the database securely.

import os
import json

import boto3
import pymysql

secret_name = os.environ['SECRET_NAME']
rds_host = os.environ['RDS_HOST']
name = os.environ['RDS_USERNAME']
db_name = os.environ['RDS_DB_NAME']
# Region for the Secrets Manager client; AWS_REGION is set by the Lambda runtime
region_name = os.environ['AWS_REGION']

session = boto3.session.Session()
client = session.client(
	service_name='secretsmanager',
	region_name=region_name
)
# Retrieve the secret; SecretString holds the JSON-encoded credentials
get_secret_value_response = client.get_secret_value(
	SecretId=secret_name
)
...
secret = get_secret_value_response['SecretString']
j = json.loads(secret)
password = j['password']
...
conn = pymysql.connect(
	rds_host, user=name, passwd=password, db=db_name, connect_timeout=5)

Browsing to the endpoint URL specified in the CloudFormation output displays the number of records. This confirms that the Lambda function has successfully retrieved the secure database credentials and queried the table for the record count.

Lambda function retrieving database credentials

Audit secrets access through a secrets manager

Monitor how your secrets are used to confirm that the usage is expected, and log any changes to them. This helps to ensure that any unexpected usage or change can be investigated, and unwanted changes can be rolled back.

Hashicorp Vault uses Audit devices that keep a detailed log of all requests and responses to Vault. Audit devices can append logs to a file, write to syslog, or write to a socket.

Secrets Manager supports logging API calls with AWS CloudTrail. CloudTrail captures all API calls for Secrets Manager as events. This includes calls from the Secrets Manager console and from code calling the Secrets Manager APIs.

Viewing the CloudTrail event history shows the requests to secretsmanager.amazonaws.com. This shows the requests from the console in addition to the Lambda function.

CloudTrail showing access to Secrets Manager

Secrets Manager also works with Amazon EventBridge so you can trigger alerts when administrator-specified operations occur. You can configure EventBridge rules to alert on deleted secrets or secret rotation. You can also create an alert if anyone tries to use a secret version while it is pending deletion. This can identify and alert when there is an attempt to use an out-of-date secret.
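
As a sketch, the following rule matches Secrets Manager DeleteSecret calls recorded by CloudTrail so you can alert on secret deletion. The rule name is a placeholder, and you would attach a target such as an SNS topic with put_targets.

import json
import boto3

events = boto3.client('events')

# Match Secrets Manager API calls captured by CloudTrail
events.put_rule(
    Name='alert-on-secret-deletion',      # placeholder rule name
    EventPattern=json.dumps({
        'source': ['aws.secretsmanager'],
        'detail-type': ['AWS API Call via CloudTrail'],
        'detail': {
            'eventSource': ['secretsmanager.amazonaws.com'],
            'eventName': ['DeleteSecret']
        }
    })
)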

Enforce least privilege access to secrets

Access to secrets must be tightly controlled because the secrets contain sensitive information. Create AWS Identity and Access Management (IAM) policies that enable minimal access to secrets to prevent credentials being accidentally used or compromised. Secrets that have policies that are too permissive could be misused by other environments or developers. This can lead to accidental data loss or compromised systems. For more information, see “Authentication and access control for AWS Secrets Manager”.

Rotate secrets frequently

Rotating your workload secrets is important. This prevents misuse of your secrets since they become invalid within a configured time period.

Secrets Manager allows you to rotate secrets on a schedule or on demand. This enables you to replace long-term secrets with short-term ones, significantly reducing the risk of compromise. Secrets Manager creates a CloudFormation stack with a Lambda function to manage the rotation process for you. Secrets Manager has native integrations with Amazon RDS, Amazon Redshift, and Amazon DocumentDB. It populates the function with the Amazon Resource Name (ARN) of the secret. You specify the permissions to rotate the credentials, and how often you want to rotate the secret.

The CloudFormation stack creates a MySecretRotationSchedule resource with a MyRotationLambda function to rotate the secret every 30 days.

MySecretRotationSchedule:
  Type: AWS::SecretsManager::RotationSchedule
  DependsOn: SecretRDSInstanceAttachment
  Properties:
    SecretId: !Ref MyRDSInstanceRotationSecret
    RotationLambdaARN: !GetAtt MyRotationLambda.Arn
    RotationRules:
      AutomaticallyAfterDays: 30
MyRotationLambda:
  Type: AWS::Serverless::Function
  Properties:
    Runtime: python3.7
    Role: !GetAtt MyLambdaExecutionRole.Arn
    Handler: mysql_secret_rotation.lambda_handler
    Description: 'This is a lambda to rotate MySql user passwd'
    FunctionName: 'cfn-rotation-lambda'
    CodeUri: 's3://devsecopsblog/code.zip'
    Environment:
      Variables:
        SECRETS_MANAGER_ENDPOINT: !Sub 'https://secretsmanager.${AWS::Region}.amazonaws.com'

View and edit the rotation settings in the Secrets Manager console.

Secrets Manager rotation settings

Manually rotate the secret by selecting Rotate secret immediately. This invokes the Lambda function, which updates the database password and updates the secret in Secrets Manager.

View the updated secret in Secrets Manager, where the password has changed.

Secrets Manager password change

Browse to the endpoint URL to confirm you can still access the database with the updated credentials.

Access endpoint with updated Secret Manager password

You can provide your own code to customize a Lambda rotation function for other databases or services. The code includes the commands required to interact with your secured service to update or add credentials.

Conclusion

Implementing application security in your workload involves reviewing and automating security practices at the application code level. By implementing code security, you can protect against emerging security threats. You can improve the security posture by checking for malicious code, including third-party dependencies.

In this post, I continue from part 1, looking at securely storing, auditing, and rotating secrets that are used in your application code.

In the next post in the series, I start to cover the reliability pillar from the Well-Architected Serverless Lens with regulating inbound request rates.

For more serverless learning resources, visit Serverless Land.

Building well-architected serverless applications: Implementing application workload security – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-implementing-application-workload-security-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

Security question SEC3: How do you implement application security in your workload?

Review and automate security practices at the application code level, and enforce security code review as part of development workflow. By implementing security at the application code level, you can protect against emerging security threats and reduce the attack surface from malicious code, including third-party dependencies.

Required practice: Review security awareness documents frequently

Stay up to date with both AWS and industry security best practices to understand and evolve protection of your workloads. Having a clear understanding of common threats helps you to mitigate them when developing your workloads.

The AWS Security Blog provides security-specific AWS content. The Open Web Application Security Project (OWASP) Top 10 is a guide for security practitioners to understand the most common application attacks and risks. The OWASP Top 10 Serverless Interpretation provides information specific to serverless applications.

Review and subscribe to vulnerability and security bulletins

Regularly review news feeds from multiple sources that are relevant to the technologies used in your workload. Subscribe to notification services to be informed of critical threats in near-real time.

The Common Vulnerabilities and Exposures (CVE) program identifies, defines, and catalogs publicly disclosed cybersecurity vulnerabilities. You can search the CVE list directly, for example “Python”.

CVE Python search

The US National Vulnerability Database (NVD) allows you to search by vulnerability type, severity, and impact. You can also perform advanced searching by vendor name, product name, and version numbers. GitHub also integrates with CVE, which allows for advanced searching within the CVEproject/cvelist repository.

AWS Security Bulletins are a notification system for security and privacy events related to AWS services. Subscribe to the security bulletin RSS feed to keep up to date with AWS security announcements.

The US Cybersecurity and Infrastructure Security Agency (CISA) provides alerts about current security issues, vulnerabilities, and exploits. You can receive email alerts or subscribe to the RSS feed.

AWS Partner Network (APN) member Palo Alto Networks provides the “Serverless architectures Security Top 10” list. This is a security awareness and education guide to use while designing, developing, and testing serverless applications to help minimize security risks.

Good practice: Automatically review a workload’s code dependencies/libraries

Regularly reviewing application and code dependencies is a good industry security practice. This helps detect and prevent non-certified application code, and ensure that third-party application dependencies operate as intended.

Implement security mechanisms to verify application code and dependencies before using them

Combine automated and manual security code reviews to examine application code and its dependencies to ensure they operate as intended. Automated tools can help identify overly complex application code, and common security vulnerability exposures that are already cataloged.

Manual security code reviews, in addition to automated tools, help ensure that application code works as intended. Manual reviews can include business contextual information and integrations that automated tools may not capture.

Before adding any code dependencies to your workload, take time to review and certify each dependency to ensure that you are adding secure code. Use third-party services to review your code dependencies on every commit automatically.

OWASP has a code review guide and a dependency check tool that attempts to detect publicly disclosed vulnerabilities within a project’s dependencies. The tool has a command line interface, a Maven plugin, an Ant task, and a Jenkins plugin.

GitHub has a number of security features for hosted repositories to inspect and manage code dependencies.

The dependency graph allows you to explore the packages that your repository depends on. Dependabot alerts show information about dependencies that are known to contain security vulnerabilities. You can choose whether to have pull requests generated automatically to update these dependencies. Code scanning alerts automatically scan code files to detect security vulnerabilities and coding errors.

You can enable these features by navigating to the Settings tab, and selecting Security & analysis.

GitHub configure security and analysis features

Once Dependabot analyzes the repository, you can view the dependencies graph from the Insights tab. In the serverless airline example used in this series, you can view the Loyalty service package.json dependencies.

Serverless airline loyalty dependencies

Dependabot alerts for security vulnerabilities are visible in the Security tab. You can review alerts and see information about how to resolve them.

Dependabot alert

Once Dependabot alerts are enabled for a repository, you can also view the alerts when pushing code to the repository from the terminal.

Dependabot terminal alert

If you enable security updates, Dependabot can automatically create pull requests to update dependencies.

Dependabot pull requests

AWS Partner Network (APN) member Snyk has an integration with AWS Lambda to manage the security of your function code. Snyk determines what code and dependencies are currently deployed for Node.js, Ruby, and Java projects. It tests dependencies against their vulnerability database.

If you build your functions using container images, you can use the Amazon Elastic Container Registry (Amazon ECR) image scanning feature. You can manually scan your images, or scan them on each push to your repository.

Elastic Container Registry image scanning example results

Best practice: Validate inbound events

Sanitize inbound events and validate them against a predefined schema. This helps prevent errors and increases your workload’s security posture by catching malformed events or events intentionally crafted to be malicious. The OWASP Input validation cheat sheet includes guidance for providing input validation security functionality in your applications.

Validate incoming HTTP requests against a schema

Implicitly trusting data from clients could lead to malformed data being processed. Use data type validators or web application frameworks to ensure data correctness. Validation should include regular expressions, value ranges, data structure, and data normalization.

You can configure Amazon API Gateway to perform basic validation of an API request before proceeding with the integration request to add another layer of security. This ensures that the HTTP request matches the desired format. Any HTTP request that does not pass validation is rejected, returning a 400 error response to the caller.

The Serverless Security Workshop has a module on API Gateway input validation based on the fictional Wild Rydes unicorn ride-hailing service. The example shows a REST API endpoint where partner companies of Wild Rydes can submit unicorn customizations, such as branded capes, to advertise their company. The API endpoint should ensure that the request body follows specific patterns. These include checking that the imageUrl is a valid URL and that the ID for the cape is a numeric value.

In API Gateway, a model defines the data structure of a payload using JSON schema draft 4. The model ensures that you receive the parameters in the format you expect, and you can check them against regular expressions. The CustomizationPost model specifies that the imageUrl and the customization ID properties, such as sock, must match the following patterns:

    "imageUrl": {
      "type": "string",
      "title": "The Imageurl Schema",
      "pattern": "^https?:\/\/[[email protected]:%_+.~#?&//=]+$"
    },
    "sock": {
      "type": "string",
      "title": " The Cape Schema ",
      "pattern": "^[0-9]*$"
    },
    …

The model is applied to the /customizations/post method as part of the Method Request. The Request Validator is set to Validate body and the CustomizationPost model is set for the Request Body.

API Gateway request validator

When testing the POST /customizations API with valid parameters using the following input:

{  
   "name":"Cherry-themed unicorn",
   "imageUrl":"https://en.wikipedia.org/wiki/Cherry#/media/File:Cherry_Stella444.jpg",
   "sock": "1",
   "horn": "2",
   "glasses": "3",
   "cape": "4"
}

The result is a valid response:

{"customUnicornId":<the-id-of-the-customization>}

Testing the POST /customizations API with invalid parameters shows the input validation process in action.

The imageUrl is not a valid URL:

 {  
    "name":"Cherry-themed unicorn",
    "imageUrl":"htt://en.wikipedia.org/wiki/Cherry#/media/File:Cherry_Stella444.jpg",
    "sock": "1" ,
    "horn": "2" ,
    "glasses": "3",
    "cape": "4"
 }

The cape parameter is not a number; instead, it contains a SQL injection attempt:

 {  
    "name":"Orange-themed unicorn",
    "imageUrl":"https://en.wikipedia.org/wiki/Orange_(fruit)#/media/File:Orange-Whole-%26-Split.jpg",
    "sock": "1",
    "horn": "2",
    "glasses": "3",
    "cape":"2); INSERT INTO Cape (NAME,PRICE) VALUES ('Bad color', 10000.00"
 }

These return a 400 Bad Request response from API Gateway before invoking the Lambda function:

{"message": "Invalid request body"}

To gain further protection, consider adding an AWS Web Application Firewall (AWS WAF) web access control list (web ACL) to your API endpoint. The workshop includes an AWS WAF module that explores three AWS WAF rules (a sketch of the first rule follows the list):

  • Restrict the maximum size of request body
  • SQL injection condition as part of the request URI
  • Rate-based rule to prevent an overwhelming number of requests

AWS WAF ACL
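
As an illustration, the following is a minimal sketch of the first rule expressed in the AWS WAFv2 JSON rule format. The rule name, the 4 KB size limit, and the metric name are hypothetical; the workshop itself may present the rules through the console rather than as JSON:

{
    "Name": "RestrictRequestBodySize",
    "Priority": 0,
    "Statement": {
        "SizeConstraintStatement": {
            "FieldToMatch": { "Body": {} },
            "ComparisonOperator": "GT",
            "Size": 4096,
            "TextTransformations": [ { "Priority": 0, "Type": "NONE" } ]
        }
    },
    "Action": { "Block": {} },
    "VisibilityConfig": {
        "SampledRequestsEnabled": true,
        "CloudWatchMetricsEnabled": true,
        "MetricName": "RestrictRequestBodySize"
    }
}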

AWS WAF also includes support for custom responses and request header insertion to improve the user experience and security posture of your applications.

For more API Gateway security information, see the security overview whitepaper.

Also add further input validation logic to your Lambda function code itself. For examples, see “Input Validation for Serverless”.

Conclusion

Implementing application security in your workload involves reviewing and automating security practices at the application code level. By implementing code security, you can protect against emerging security threats. You can improve the security posture by checking for malicious code, including third-party dependencies.

In this post, I cover reviewing security awareness documentation such as the CVE database. I show how to use GitHub security features to inspect and manage code dependencies. I then show how to validate inbound events using API Gateway request validation.

This well-architected question will be continued where I look at securely storing, auditing, and rotating secrets that are used in your application code.

For more serverless learning resources, visit Serverless Land.

Building well-architected serverless applications: Managing application security boundaries – part 2

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-managing-application-security-boundaries-part-2/

This series uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

Security question SEC2: How do you manage your serverless application’s security boundaries?

This post continues part 1 of this security question. Previously, I cover how to evaluate and define resource policies, showing what policies are available for various serverless services. I show some of the features of AWS Web Application Firewall (AWS WAF) to protect APIs. I then go through how to control network traffic at all layers. I explain how AWS Lambda functions connect to VPCs, and how to use private APIs and VPC endpoints. I walk through how to audit your traffic.

Required practice: Use temporary credentials between resources and components

Do not share credentials and permissions policies between resources to maintain a granular segregation of permissions and improve the security posture. Use temporary credentials that are frequently rotated and that have policies tailored to the access the resource needs.

Use dynamic authentication when accessing components and managed services

AWS Identity and Access Management (IAM) roles allow your applications to access AWS services securely without requiring you to manage or hardcode the security credentials. When you use a role, you don’t have to distribute long-term credentials such as a user name and password, or access keys. Instead, the role supplies temporary permissions that applications can use when they make calls to other AWS resources. When you create a Lambda function, for example, you specify an IAM role to associate with the function. The function can then use the role-supplied temporary credentials to sign API requests.

Use IAM for authorizing access to AWS managed services such as Lambda or Amazon S3. Lambda also assumes IAM roles, exposing and rotating temporary credentials to your functions. This enables your application code to access AWS services.

Use IAM to authorize access to internal or private Amazon API Gateway API consumers. See this list of AWS services that work with IAM.

Within the serverless airline example used in this series, the loyalty service uses a Lambda function to fetch loyalty points and next tier progress. AWS AppSync acts as the client using an HTTP resolver, via an API Gateway REST API /loyalty/{customerId}/get resource, to invoke the function.

To ensure only AWS AppSync is authorized to invoke the API, IAM authorization is set within the API Gateway method request.

Viewing API Gateway IAM authorization

The IAM role specifies that appsync.amazonaws.com can perform execute-api:Invoke on the specific API Gateway resource arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${LoyaltyApi}/*/*/*.
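
As a sketch, such a role combines a trust policy allowing AWS AppSync to assume it with a permissions policy scoped to the API. The Region, account ID, and API ID shown below are placeholders:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "appsync.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}

The permissions policy attached to the same role then grants only the invoke action:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5/*/*/*"
        }
    ]
}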

For more information, see “Using an IAM role to grant permissions to applications”.

Use a framework such as the AWS Serverless Application Model (AWS SAM) to deploy your applications. This ensures that AWS resources are provisioned with unique per resource IAM roles. For example, AWS SAM automatically creates unique IAM roles for every Lambda function you create.

Best practice: Design smaller, single purpose functions

Creating smaller, single purpose functions enables you to keep your permissions aligned to least privileged access. This reduces the risk of compromise since the function does not require access to more than it needs.

Create single purpose functions with their own IAM role

Single purpose Lambda functions allow you to create IAM roles that are specific to your access requirements. For example, a large multipurpose function might need access to multiple AWS resources such as Amazon DynamoDB, Amazon S3, and Amazon Simple Queue Service (SQS). Single purpose functions would not need access to all of them at the same time.

With smaller, single purpose functions, it’s often easier to identify the specific resources and access requirements, and grant only those permissions. Additionally, new features are usually implemented by new functions in this architectural design. You can specifically grant permissions in new IAM roles for these functions.

Avoid sharing IAM roles with multiple cloud resources. As permissions are added to the role, these are shared across all resources using this role. For example, use one dedicated IAM role per Lambda function. This allows you to control permissions more intentionally. Even if some functions have the same policy initially, always separate the IAM roles to ensure least privilege policies.

Use least privilege access policies with your users and roles

When you create IAM policies, follow the standard security advice of granting least privilege, or granting only the permissions required to perform a task. Determine what users (and roles) must do and then craft policies that allow them to perform only those tasks.

Start with a minimum set of permissions and grant additional permissions as necessary. Doing so is more secure than starting with permissions that are too lenient and then trying to tighten them later. In the unlikely event of misused credentials, credentials will only be able to perform limited interactions.

To control access to AWS resources, AWS SAM uses the same mechanisms as AWS CloudFormation. For more information, see “Controlling access with AWS Identity and Access Management” in the AWS CloudFormation User Guide.

For a Lambda function, AWS SAM scopes the permissions of your Lambda functions to the resources that are used by your application. You add IAM policies as part of the AWS SAM template. The policies property can be the name of AWS managed policies, inline IAM policy documents, or AWS SAM policy templates.

For example, the serverless airline has a ConfirmBooking Lambda function that has UpdateItem permissions to the specific DynamoDB BookingTable resource.

Parameters:
    BookingTable:
        Type: AWS::SSM::Parameter::Value<String>
        Description: Parameter Name for Booking Table
Resources:
    ConfirmBooking:
        Type: AWS::Serverless::Function
        Properties:
            FunctionName: !Sub ServerlessAirline-ConfirmBooking-${Stage}
            Policies:
                - Version: "2012-10-17"
                  Statement:
                      Action: dynamodb:UpdateItem
                      Effect: Allow
                      Resource: !Sub "arn:${AWS::Partition}:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${BookingTable}"

One of the fastest ways to scope permissions appropriately is to use AWS SAM policy templates. You can reference these templates directly in the AWS SAM template for your application, providing custom parameters as required.

The serverless patterns collection allows you to build integrations quickly using AWS SAM and AWS Cloud Development Kit (AWS CDK) templates.

The booking service uses the SNSPublishMessagePolicy. This policy gives permission to the NotifyBooking Lambda function to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.

    BookingTopic:
        Type: AWS::SNS::Topic

    NotifyBooking:
        Type: AWS::Serverless::Function
        Properties:
            Policies:
                - SNSPublishMessagePolicy:
                      TopicName: !Sub ${BookingTopic.TopicName}
        …

Audit permissions and remove unnecessary permissions

Audit permissions regularly to help you identify unused permissions so that you can remove them. You can use last accessed information to refine your policies and allow access to only the services and actions that your entities use. Use the IAM console to view when an IAM role was last used.

IAM last used

Use IAM access advisor to review when an AWS service was last used by a specific IAM user or role. You can view last accessed information for IAM on the Access Advisor tab in the IAM console. Using this information, you can remove IAM policies and access from your IAM roles.

IAM access advisor

When creating and editing policies, you can validate them using IAM Access Analyzer, which provides over 100 policy checks. It generates security warnings when a statement in your policy allows access that AWS considers overly permissive. Use the security warnings’ actionable recommendations to help grant least privilege. To learn more about policy checks provided by IAM Access Analyzer, see “IAM Access Analyzer policy validation”.

With AWS CloudTrail, you can use CloudTrail event history to review individual actions your IAM role has performed in the past. Using this information, you can detect which permissions were actively used, and decide to remove permissions.

AWS CloudTrail

To work out which permissions you may need, you can generate IAM policies based on access activity. You configure an IAM role with broad permissions while the application is in development. Access Analyzer reviews your CloudTrail logs. It generates a policy template that contains the permissions that the role used in your specified date range. Use the template to create a policy that grants only the permissions needed to support your specific use case. For more information, see “Generate policies based on access activity”.

IAM Access Analyzer

Conclusion

Managing your serverless application’s security boundaries ensures isolation for, within, and between components. In this post, I continue from part 1, looking at using temporary credentials between resources and components. I cover why smaller, single purpose functions are better from a security perspective, and how to audit permissions. I show how to use AWS SAM to create per-function IAM roles.

For more serverless learning resources, visit https://serverlessland.com.

Building well-architected serverless applications: Managing application security boundaries – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-managing-application-security-boundaries-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

Security question SEC2: How do you manage your serverless application’s security boundaries?

Defining and securing your serverless application’s boundaries ensures isolation for, within, and between components.

Required practice: Evaluate and define resource policies

Resource policies are AWS Identity and Access Management (IAM) statements. They are attached to resources such as an Amazon S3 bucket, or an Amazon API Gateway REST API resource or method. The policies define what identities have fine-grained access to the resource. To see which services support resource-based policies, see “AWS Services That Work with IAM”. For more information on how resource policies and identity policies are evaluated, see “Identity-Based Policies and Resource-Based Policies”.

Understand and determine which resource policies are necessary

Resource policies can protect a component by restricting inbound access to managed services. Use resource policies to restrict access to your component based on a number of identities, such as the source IP address/range, function event source, version, alias, or queues. Resource policies are evaluated and enforced at the IAM level before each AWS service applies its own authorization mechanisms, when available. For example, IAM resource policies for API Gateway REST APIs can deny access to an API before an AWS Lambda authorizer is called.

If you use multiple AWS accounts, you can use AWS Organizations to manage and govern individual member accounts centrally. Certain resource policies can be applied at the organization level, providing guardrails for what actions AWS accounts within the organization root or OU can take. For more information, see “Understanding how AWS Organizations service control policies work”.

Review your existing policies and how they’re configured, paying close attention to how permissive individual policies are. Your resource policies should only permit necessary callers.

Implement resource policies to prevent unauthorized access

For Lambda, use resource-based policies to provide fine-grained control over which AWS IAM identities and event sources can invoke a specific version or alias of your function. Resource-based policies can also be used to control access to Lambda layers. You can combine resource policies with Lambda event sources. For example, if API Gateway invokes Lambda, you can restrict the policy to the API Gateway ID, HTTP method, and path of the request.

In the serverless airline example used in this series, the IngestLoyalty service uses a Lambda function that subscribes to an Amazon Simple Notification Service (Amazon SNS) topic. The Lambda function resource policy allows SNS to invoke the Lambda function.

Lambda resource policy document
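
As a sketch, a resource-based policy of this kind grants the SNS service permission to invoke the function, scoped to the topic’s ARN. The function name, topic name, Region, and account ID below are placeholders:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSNSInvoke",
            "Effect": "Allow",
            "Principal": { "Service": "sns.amazonaws.com" },
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:IngestLoyalty",
            "Condition": {
                "ArnLike": { "AWS:SourceArn": "arn:aws:sns:us-east-1:111122223333:LoyaltyTopic" }
            }
        }
    ]
}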

API Gateway resource-based policies can restrict API access to specific Amazon Virtual Private Cloud (VPC), VPC endpoint, source IP address/range, AWS account, or AWS IAM users.

Amazon Simple Queue Service (SQS) resource-based policies provide fine-grained access to certain AWS services and AWS IAM identities (users, roles, accounts). Amazon SNS resource-based policies restrict authenticated and non-authenticated actions to topics.

Amazon DynamoDB resource-based policies provide fine-grained access to tables and indexes. Amazon EventBridge resource-based policies restrict which AWS identities can send and receive events, including to specific event buses.

For Amazon S3, use bucket policies to grant permission to your Amazon S3 resources.

The AWS re:Invent session Best practices for growing a serverless application includes further suggestions on enforcing security best practices.

Best practices for growing a serverless application

Good practice: Control network traffic at all layers

Apply controls to manage both inbound and outbound traffic, including data loss prevention. Define requirements that help you protect your networks and protect against exfiltration.

Use networking controls to enforce access patterns

API Gateway and AWS AppSync have support for AWS Web Application Firewall (AWS WAF), which helps protect web applications and APIs from attacks. AWS WAF enables you to configure a set of rules called a web access control list (web ACL). These allow you to allow, block, or count web requests based on customizable web security rules and conditions that you define. These can include specified IP address ranges, CIDR blocks, specific countries, or Regions. You can also block requests that contain malicious SQL code, or requests that contain malicious script. For more information, see How AWS WAF Works.

A private API endpoint is an API Gateway interface VPC endpoint that can only be accessed from your Amazon Virtual Private Cloud (Amazon VPC). This is an elastic network interface that you create in a VPC. Traffic to your private API uses secure connections and does not leave the Amazon network; it is isolated from the public internet. For more information, see “Creating a private API in Amazon API Gateway”.

To restrict access to your private API to specific VPCs and VPC endpoints, you must add conditions to your API’s resource policy. For example policies, see the documentation.
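
As a sketch of the documented pattern, the following resource policy allows invocation only through a specific VPC endpoint. The VPC endpoint ID is a placeholder:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*"
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {
                "StringNotEquals": { "aws:SourceVpce": "vpce-0123456789abcdef0" }
            }
        }
    ]
}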

By default, Lambda runs your functions in a secure Lambda-owned VPC that is not connected to your account’s default VPC. Functions can access anything available on the public internet. This includes other AWS services, HTTPS endpoints for APIs, or services and endpoints outside AWS. The function cannot directly connect to your private resources inside of your VPC.

You can configure a Lambda function to connect to private subnets in a VPC in your account. When a Lambda function is configured to use a VPC, the Lambda function still runs inside the Lambda service VPC. The function then sends all network traffic through your VPC and abides by your VPC’s network controls. When you deploy functions to a VPC, consider the network access they need, and restrict resource access accordingly.

AWS Lambda service VPC with VPC-to-VPC NAT to customer VPC

When you connect a function to a VPC in your account, the function cannot access the internet, unless the VPC provides access. To give your function access to the internet, route outbound traffic to a NAT gateway in a public subnet. The NAT gateway has a public IP address and can connect to the internet through the VPC’s internet gateway. For more information, see “How do I give internet access to my Lambda function in a VPC?”. Connecting a function to a public subnet doesn’t give it internet access or a public IP address.

You can control the VPC settings for your Lambda functions using AWS IAM condition keys. For example, you can require that all functions in your organization are connected to a VPC. You can also specify the subnets and security groups that the function’s users can and can’t use.
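
As a sketch, the following identity-based policy uses the lambda:VpcIds condition key to deny creating or reconfiguring functions that are not connected to a VPC. The statement ID is illustrative:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireVpcConnection",
            "Effect": "Deny",
            "Action": [
                "lambda:CreateFunction",
                "lambda:UpdateFunctionConfiguration"
            ],
            "Resource": "*",
            "Condition": {
                "Null": { "lambda:VpcIds": "true" }
            }
        }
    ]
}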

Unsolicited inbound traffic to a Lambda function isn’t permitted by default. There is no direct network access to the execution environment where your functions run. When connected to a VPC, function outbound traffic comes from your own network address space.

You can use security groups, which act as a virtual firewall to control outbound traffic for functions connected to a VPC. Use security groups to permit your Lambda function to communicate with other AWS resources. For example, a security group can allow the function to connect to an Amazon ElastiCache cluster.

To filter or block access to certain locations, use VPC routing tables to configure routing to different networking appliances. Use network ACLs to block access to CIDR IP ranges or ports, if necessary. For more information about the differences between security groups and network ACLs, see “Compare security groups and network ACLs.”

In addition to API Gateway private endpoints, several AWS services offer VPC endpoints, including Lambda. You can use VPC endpoints to connect to AWS services from within a VPC without an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.

Use tools to audit your traffic

When you configure a Lambda function to use a VPC, or use private API endpoints, you can use VPC Flow Logs to audit your traffic. VPC Flow Logs allow you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or S3 to see where traffic is being sent at a granular level. Here are some flow log record examples. For more information, see “Learn from your VPC Flow Logs”.

Block network access when required

In addition to security groups and network ACLs, third-party tools allow you to disable outgoing VPC internet traffic. These can also be configured to allow traffic to AWS services or allow-listed services.

Conclusion

Managing your serverless application’s security boundaries ensures isolation for, within, and between components. In this post, I cover how to evaluate and define resource policies, showing what policies are available for various serverless services. I show some of the features of AWS WAF to protect APIs. Then I review how to control network traffic at all layers. I explain how Lambda functions connect to VPCs, and how to use private APIs and VPC endpoints. I walk through how to audit your traffic.

This well-architected question will be continued where I look at using temporary credentials between resources and components. I cover why smaller, single purpose functions are better from a security perspective, and how to audit permissions. I show how to use AWS Serverless Application Model (AWS SAM) to create per-function IAM roles.

For more serverless learning resources, visit https://serverlessland.com.

Techniques for writing least privilege IAM policies

Post Syndicated from Ben Potter original https://aws.amazon.com/blogs/security/techniques-for-writing-least-privilege-iam-policies/

In this post, I’m going to share two techniques I’ve used to write least privilege AWS Identity and Access Management (IAM) policies. If you’re not familiar with IAM policy structure, I highly recommend you read “Understanding how IAM works” and “Policies and permissions” first.

Least privilege is a principle of granting only the permissions required to complete a task. Least privilege is also one of many Amazon Web Services (AWS) Well-Architected best practices that can help you build securely in the cloud. For example, if you have an Amazon Elastic Compute Cloud (Amazon EC2) instance that needs to access an Amazon Simple Storage Service (Amazon S3) bucket to get configuration data, you should only allow read access to the specific S3 bucket that contains the relevant data.

There are a number of ways to grant access to different types of resources, as some resources support both resource-based policies and IAM policies. This blog post will focus on demonstrating how you can use IAM policies to grant restrictive permissions to IAM principals to meet least privilege standards.

In AWS, an IAM principal can be a user, role, or group. These identities start with no permissions, and you add permissions using a policy. In AWS, there are different types of policies that are used for different reasons. In this blog, I only give examples for identity-based policies that attach to IAM principals to grant permissions to an identity. You can create and attach multiple identity-based policies to your IAM principals, and you can reuse them across your AWS accounts. There are two types of managed policies: customer managed policies, which are created and managed by you, the customer; and AWS managed policies, which are provided as examples and cannot be modified, but can be copied, enhanced, and saved as customer managed policies. The main elements of a policy statement are as follows (a minimal example policy follows the list):

  • Effect: Specifies whether the statement will Allow or Deny an action.
  • Action: Describes a specific action or actions that will either be allowed or denied to run based on the Effect entered. API actions are unique to each service. For example, s3:ListAllMyBuckets is an Amazon S3 action that enables an IAM principal to list all S3 buckets in the same account.
  • NotAction: Can be used as an alternative to using Action. This element will allow an IAM principal to invoke all API actions to a specific AWS service except those actions specified in this list.
  • Resource: Specifies the resources—for example, an S3 bucket or objects—that the policy applies to in Amazon Resource Name (ARN) format.
  • NotResource: Can be used instead of the Resource element to explicitly match every AWS resource except those specified.
  • Condition: Allows you to build expressions to match the condition keys and values in the policy against keys and values in the request context sent by the IAM principal. Condition keys can be service-specific or global. A global condition key can be used with any service. For example, a key of aws:CurrentTime can be used to allow access based on date and time.
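
To make these elements concrete, here is a minimal sketch of an identity-based policy that uses Effect, Action, Resource, and Condition. The date value is hypothetical:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*",
            "Condition": {
                "DateGreaterThan": { "aws:CurrentTime": "2021-01-01T00:00:00Z" }
            }
        }
    ]
}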

Starting with the visual editor

The visual editor is my default starting place for building policies as I like the wizard and seeing all available services, actions, and conditions without looking at the documentation. If there is a complex policy with many services, I often look at the AWS managed policies as a starting place for the actions that are required, then use the visual editor to fine tune and check the resources and conditions.

The policy I’m going to walk you through creating is to grant an AWS Lambda function permission to get specific objects from Amazon S3, and put items in a specific table in Amazon DynamoDB. You can access the visual editor when you choose Create policy under policies in the IAM console, or add policies when viewing a role, group, or user as shown in Figure 1. If you’re not familiar with creating policies, you can follow the full instructions in the IAM documentation.

Figure 1: Use the visual editor to create a policy

Begin by choosing the first service—S3—to grant access to as shown in Figure 2. You can only choose one service at a time, so you’ll need to add DynamoDB after.

Figure 2: Select S3 service

Now you will see a list of access levels with the option to manually add actions. Expand the Read access level to show all read actions that are supported by the Amazon S3 service. For getting an object, check the box for GetObject. Selecting the ? next to an action expands information including a description, supported resource types, and supported condition keys, as shown in Figure 3.

Figure 3: Expand Read in Access level, select GetObject, and select the ? next to GetObject

Expand Resources, and you will see that the visual editor has listed object, as that is the only resource type supported by the GetObject action, as shown in Figure 4.

Figure 4: Expand Resources

Select Add ARN, which opens a dialogue to help you specify the ARN for the objects. Enter a bucket name—such as doc-example-bucket—and then the object name. For the object name you can use a wildcard (*) as a suffix. For example, to allow objects beginning with alpha you would enter alpha*. This is an important step: for this least privilege policy, you are restricting access to a specific bucket and an object prefix. You could even specify an individual object, depending on your use case.

Figure 5: Enter bucket name and object name

If you have multiple ARNs (bucket and objects) to allow, you can repeat the step.

Figure 6: ARN added for S3 object

The final step is to expand the request conditions, and choose Add condition. The Add request condition dialogue will open. Select the drop down next to Condition key to list the global condition keys, followed by the service-level condition keys. You’ll see that there’s an s3:ExistingObjectTag condition that—as the name suggests—matches an existing object tag. You can use this condition key to allow the GetObject request only when the object tag meets your condition. That means you can tag your objects with a specific tag key and value pair, and your policy condition must match this key-value pair to allow the action to execute. When you’re using condition keys with multiple keys or values, you can use condition operators and evaluation logic. As shown in Figure 7, tag-key is entered directly below the condition key. This is the key of the tag to match. For the Operator, select StringEquals to match the tag exactly. Checking If exists means that if the condition key is present in the request context, it is processed as specified in the policy; if the key is not present, the condition evaluates to true. The Value to enter is the actual tag value, tag-value, as shown in figure 7.

Figure 7: ARN added for S3 object

That’s it for adding the S3 action, as shown in figure 8.

Figure 8: S3 GetObject action with resource and conditions configured

Now you need to add the DynamoDB permissions by selecting Add additional permissions. Select Choose a service and then select DynamoDB. For actions, expand the Write access level, then choose PutItem.

Figure 9: Choose write access level

Expand Resources and then select Add ARN. The dialogue that appears will help you build the ARN just like it did for the Amazon S3 service. Enter the Region, for example the ap-southeast-2 (Sydney) Region, the account ID, and the table name. Choosing Add will add the resource ARN to your policy.

Figure 10: Enter Region, account, and table name

Now it’s time to add conditions. Expand Request conditions and then choose Add condition.

There are many DynamoDB conditions that you could use; however, you can choose dynamodb:LeadingKeys to represent the first key, or partition key, in a table. You can see from the documentation that a qualifier of For all values in request is recommended. For the Operator, you can use StringEquals when your string is going to match exactly, or StringLike when the Value uses a wildcard prefix such as alpha*, as shown in figure 11.

Figure 11: Add request conditions

Choosing Add will take you back to the main visual editor where you can choose Review policy to continue. Enter a name and description for the policy, and then choose Create policy.

You can now attach it to a role to test.
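
For reference, the policy document produced by this walkthrough looks broadly like the following sketch. The bucket, tag key and value, Region, account ID, and table name are the placeholder values used above; note that the StringLike operator, rather than StringEquals, is what honors the alpha* wildcard:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::doc-example-bucket/alpha*",
            "Condition": {
                "StringEqualsIfExists": {
                    "s3:ExistingObjectTag/tag-key": "tag-value"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "dynamodb:PutItem",
            "Resource": "arn:aws:dynamodb:ap-southeast-2:111122223333:table/doc-example-table",
            "Condition": {
                "ForAllValues:StringLike": {
                    "dynamodb:LeadingKeys": [ "alpha*" ]
                }
            }
        }
    ]
}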

You can see in this example that a policy can use least privilege by using specific resources and conditions. Note that sometimes when you use the AWS Management Console, it requires additional permissions to provide information for the console experience.

Starting with AWS managed policies

AWS managed policies can be a good starting place to see the actions typically associated with a particular service or job function. For example, you can attach the AmazonS3ReadOnlyAccess policy to a role used by an Amazon EC2 instance that allows read-only access to all Amazon S3 buckets. It has an effect of Allow to allow access, and there are two actions that use wildcards (*) to allow all Get and List actions for S3—for example, s3:GetObject and s3:ListBucket. The resource is a wildcard to allow all S3 buckets the account has access to. A useful feature of this policy is that it only allows read and list access to S3, but not to any other services or types of actions.

Let’s make our own custom IAM policy to make it least privilege. Starting with the action element, you can use the reference for Amazon S3 to see all actions, a description of what each action does, the resource type for each action, and condition keys for each action. Now let’s imagine this policy is used by an Amazon EC2 instance to fetch an application configuration object from within an S3 bucket. Looking at the descriptions for actions starting with Get you can see that the only action that we really need is GetObject. You can then use the resource element to restrict an action to a set of objects prefixed with config within a specific bucket.

         "Effect": "Allow",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3::: <doc-example-bucket>/<config*>"

Now that you’ve reduced the scope of what this policy can do for service actions and resources, you can add a condition element that uses attribute based access control (ABAC) to define conditions based on attributes—in this case, a resource tag. In this example, when you’re reading objects from a single bucket, you can set specific conditions to further reduce the scope of permissions given to an IAM principal. There’s an s3:ExistingObjectTag condition that you can use to allow the GetObject request only when the object tag meets your condition. That means you can tag your objects with a specific tag key and value pair, and your IAM policy condition must match this key-value pair to allow the API action to successfully run. When you’re using condition keys with multiple keys or values, you can use condition operators and evaluation logic. ForAnyValue tests whether at least one member of the set of request values matches at least one member of the set of condition key values. Alternatively, you can use global condition keys that apply to all services:

         "Effect": "Allow",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::<doc-example-bucket>/<config*>",
         "Condition": {
                "ForAnyValue:StringEquals": {
                    "s3:ExistingObjectTag/<tag-key>": "<tag-value>"
            }

In the preceding policy example, the condition element only allows s3:GetObject permissions if the object is tagged with a key of tag-key and a value of tag-value. While you’re experimenting, you can identify errors in your custom policies by using the IAM policy simulator or reviewing the error messages recorded in AWS CloudTrail logs.

Conclusion

In this post, I’ve shown two different techniques that you can use to create least privilege policies for IAM. You can adapt these methods to create AWS Single Sign-On permission sets and AWS Organizations service control policies (SCPs). Starting with managed policies is a useful strategy when an AWS supplied managed policy already exists for your use case; you can then reduce the scope of what it can do through permissions. I tend to use the visual editor the most for editing policies because it saves looking up the resources and conditions for each action. I suggest that you start by reviewing the policies you’re already using. Start with policies that grant excessive permissions—like the example Administrator policy—and tie them back to the use case of the users or things that need the access. Use last accessed information, follow IAM best practices, and look at the AWS Well-Architected best practices and the AWS Well-Architected Tool.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ben Potter

Ben is the global security leader for the AWS Well-Architected Framework and is responsible for sharing best practices in security with customers and partners. Ben is also an ambassador for the No More Ransom initiative helping fight cyber crime with Europol, McAfee, and law enforcement across the globe. You can learn more about him in this interview.

Cyber hygiene and MAS Notice 655

Post Syndicated from Darran Boyd original https://aws.amazon.com/blogs/security/cyber-hygiene-and-mas-notice-655/

In this post, I will provide guidance and resources that will help you align to the expectations of the Monetary Authority of Singapore (MAS) Notice 655 – Notice on Cyber Hygiene.

The Monetary Authority of Singapore (MAS) issued Notice 655 – Notice on Cyber Hygiene on 6 Aug 2019. This notice is applicable to all banks in Singapore and takes effect from 6 Aug 2020. The notice sets out the requirements on cyber hygiene for banks across the following six categories: administrative accounts, security patches, security standards, network perimeter defense, malware protection, and multi-factor authentication.

Whilst Notice 655 is specific to all banks in Singapore, the AWS security guidance I provide in this post is based on consistent best practices. As always, it’s important to note that security and compliance is a shared responsibility between AWS and you as our customer. AWS is responsible for the security of the cloud, but you are responsible for your security in the cloud.

To aid in your alignment to Notice 655, AWS has developed a MAS Notice 655 – Cyber Hygiene – Workbook, which is available in AWS Artifact. The workbook covers each of the six categories of cyber hygiene in Notice 655 and maps each to relevant AWS guidance.

The downloadable workbook contains two embedded formats:

  • Microsoft Excel – coverage includes AWS responsibility control statements, and Well-Architected Framework best practices.
  • Dynamic HTML – same as Microsoft Excel, with the added feature that the Well-Architected Framework best practices are mapped to AWS Config managed rules and Amazon GuardDuty findings, where available or applicable.

Administrative accounts

“4.1. A relevant entity must ensure that every administrative account in respect of any operating system, database, application, security appliance or network device, is secured to prevent any unauthorised access to or use of such account.”

For administrative accounts, it is important to follow best practices for the privileged accounts, keeping in mind both human and programmatic access.

The most privileged user account in an AWS account is the root user. When you first create an AWS account (unless you create it with AWS Organizations), this is the initial user account created. The root user is associated with the provided email address and password used to create the account. The root user account has access to every resource in the account—including the ability to close it. To align to the principle of least privilege, the root user account should not be used for everyday tasks. Instead, AWS Identity and Access Management (IAM) roles should be created and scoped to particular roles and functions within your organization. Furthermore, AWS strongly recommends that you integrate with a centralized identity provider, or a directory service, to authenticate all users in a centralized place. This reduces the requirement for multiple credentials and reduces management complexity.

There are some additional key steps that you should take to further protect your root user account:

  • Ensure that you have a very long and complex password; if necessary, change the root user password to meet this recommendation.
  • Put the root user password in a secure location, and consider a physical or virtual password vault with strong multi-party access protocol.
  • Delete any access keys, and remove any programmatic access keys from the root user account.
  • Enable multi-factor authentication (MFA), and consider a hardware-based token that is stored in a physical vault or safe with a strong multi-party access protocol. Consider using a separate secure vault store for the password and the MFA token, with separate permissions for access.
  • A simple but hugely important step is to ensure your account information is correct, which includes the assigned root email address, so that AWS Support can contact you.

Do keep in mind that there are a few AWS tasks that require the root user.

You should use IAM roles for programmatic or system-to-system access to AWS resources that are integrated with IAM. For example, you should use roles for applications that run on Amazon Elastic Compute Cloud (Amazon EC2) instances. Ensure the principle of least privilege is being applied for the IAM policies that are attached to the roles.

For cases where your application needs credentials that are not from AWS IAM, such as database credentials, you should not hard-code these credentials in the application source code or store them in an unencrypted state. It is recommended that you use a secrets management solution. For example, AWS Secrets Manager helps you protect the secrets needed to access your applications, services, and IT resources. Secrets Manager enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications can retrieve secrets with a call to Secrets Manager APIs, which eliminates the need to hardcode sensitive information in plain text.
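
For example, a least privilege identity-based policy for an application role might grant read access to a single secret only. The Region, account ID, and secret name below are placeholders:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "arn:aws:secretsmanager:ap-southeast-1:111122223333:secret:prod/db-credentials-AbCdEf"
        }
    ]
}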

For more information, see 4.1 Administrative Accounts in the MAS Notice 655 workbook on AWS Artifact.

Security Patches

“4.2 (a) A relevant entity must ensure that security patches are applied to address vulnerabilities to every system, and apply such security patches within a timeframe that is commensurate with the risks posed by each vulnerability.”

Consider the various categories of security patches you need to manage, based on the AWS Shared Responsibility Model and the AWS services you are using.

Here are some common examples, although this is not an exhaustive list:

Operating system

When using services from AWS where you have control over the operating system, it is your responsibility to perform patching on these services. For example, if you use Amazon EC2 with Linux, applying security patches for Linux is your responsibility. To help you with this, AWS publishes security patches for Amazon Linux at the Amazon Linux Security Center.

Amazon Inspector allows you to run scheduled vulnerability assessments on your Amazon EC2 instances, and provides a report of findings against rules packages that include operating system configuration benchmarks for common vulnerabilities and exposures (CVEs) and Center for Internet Security (CIS) guidelines. To see if Amazon Inspector is available in your AWS Region, see the list of supported Regions for Amazon Inspector.

For managing patching activity at scale, consider AWS Systems Manager Patch Manager to automate the process of patching managed instances with both security-related patches and other types of updates.

Container orchestration and containers

If you are running and managing your own container orchestration capability, it is your responsibility to perform patching for both the master and worker nodes. If you are using Amazon Elastic Kubernetes Service (Amazon EKS), then AWS manages the patching of the control plane, and publishes EKS-optimized Amazon Machine Images (AMIs) that include the necessary worker node binaries (Docker and Kubelet). This AMI is updated regularly and includes the most up to date version of these components. You can update your EKS managed nodes to the latest versions of the EKS-optimized AMIs with a single command in the EKS console, API, or CLI. If you are building your own custom AMIs to use for EKS worker nodes, AWS also publishes Packer scripts that document the AWS build steps, to allow you to identify the binaries included in each version of the AMI.

AWS Fargate provides the option of serverless compute for containers, so you can avoid the operational overhead of scaling, patching, securing, and managing servers.

For the container images, you do need to ensure these are scanned for vulnerabilities and patched. The Amazon Elastic Container Registry (Amazon ECR) service offers the ability to scan container images for common vulnerabilities and exposures (CVEs).

Database engines

If you are running and managing your own databases on top of an AWS service such as Amazon EC2, it is your responsibility to perform patching of the database engine.

If you are using Amazon Relational Database Service (Amazon RDS), then AWS will automatically perform the patching of the database engine. This is done within the configurable Amazon RDS maintenance window, and is your opportunity to control when DB instance modifications, database engine version upgrades, and software patching occurs.

In cases where you are using fully-managed AWS database services such as Amazon DynamoDB, AWS takes care of the underlying patching requirements.

Application

For application code and dependencies that you run on AWS services, you own and manage the patching. This applies both to applications that your organization has built itself and to applications from third-party software vendors. You should make sure that you have a mechanism for ensuring that vulnerabilities in the application code you run are regularly identified and patched.

For more information, see 4.2 Security Patches in the MAS Notice 655 workbook on AWS Artifact.

Security Standards

“4.3. (b)… a relevant entity must ensure that every system conforms to the set of security standards.”

After you have defined your organizational security standards, the next step is to verify your conformance to these standards. In my consultation with customers, I advise that it is a best practice to enforce these security standards as early in the development lifecycle as possible. For example, you may have a standard requiring that data of a specific data classification must be encrypted at rest with an AWS Key Management Service (AWS KMS) customer-managed customer master key (CMK). The way this is typically achieved is by defining your Infrastructure-as-Code (IaC), for example using AWS CloudFormation. As your projects move through the various stages of development in your pipeline, you can automatically and programmatically check your IaC templates against codified security standards that you have defined. AWS has a number of tools that assist you with defining your rules and evaluating your IaC templates.

In the case of AWS CloudFormation, you may want to consider the tools AWS CloudFormation Guard, cfn-lint, or cfn_nag. Enforcing your security standards as early in the development lifecycle as possible has some key benefits. It instills a culture and practice of creating deployments that are aligned to your standards from the outset, and allows developers to move fast by using the tools and workflow that work best for their team, while providing feedback early enough that they have time to resolve any issues and meet security standards during the development process.
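
For example, a rule in one of these tools could check that every S3 bucket in a template enables encryption at rest with a customer managed CMK. The following CloudFormation (JSON) sketch shows the kind of compliant resource such a rule would look for; the key ARN is a placeholder:

{
    "Resources": {
        "SensitiveDataBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "BucketEncryption": {
                    "ServerSideEncryptionConfiguration": [
                        {
                            "ServerSideEncryptionByDefault": {
                                "SSEAlgorithm": "aws:kms",
                                "KMSMasterKeyID": "arn:aws:kms:ap-southeast-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
                            }
                        }
                    ]
                }
            }
        }
    }
}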

It’s important to complement this IaC pipeline approach with additional controls to ensure that security standards remain in place after deployment. You should make sure to look at both preventative controls and detective controls.

For preventative controls, the focus is on IAM permissions. You can use these fine-grained permissions to enforce at the level of the IAM principal (such as user or role) what actions can or cannot be taken on AWS resources. You can make use of AWS Organizations service control policies (SCPs) to enforce permission guardrails globally across the entire organization, across an organizational unit, or across individual AWS accounts. Example SCPs that may align to your security standards include preventing any virtual private cloud (VPC) that doesn’t already have internet access from getting it, and preventing users from disabling Amazon GuardDuty or modifying its configuration (a sketch of the latter follows). Additionally, you can use the SCPs described in the AWS Control Tower Guardrail Reference, which you can implement with or without using AWS Control Tower.
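
As a sketch, an SCP implementing the GuardDuty guardrail could deny the actions that disable or reconfigure detectors. The statement ID is illustrative, and the action list is indicative rather than exhaustive:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectGuardDuty",
            "Effect": "Deny",
            "Action": [
                "guardduty:DeleteDetector",
                "guardduty:DisassociateFromMasterAccount",
                "guardduty:StopMonitoringMembers",
                "guardduty:UpdateDetector"
            ],
            "Resource": "*"
        }
    ]
}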

For detective controls, after your infrastructure is deployed, you should make use of the AWS Security Hub and/or AWS Config rules to help you meet your compliance needs. You should ensure that the findings from these services are integrated with your technology operations processes to take action, or you can use automated remediation.

For more information, see 4.3 Security Standards in the MAS Notice 655 workbook on AWS Artifact.

Network Perimeter Defense

“4.4. A relevant entity must implement controls at its network perimeter to restrict all unauthorised network traffic.”

Having a layered security strategy is a best practice, and this applies equally to your network. AWS provides a number of complementary network configuration options that you can implement to add network protection to your resources. You should consider using all of the options I describe here for your AWS workload. You can implement multiple strategies together where possible, to provide network defense in depth.

For network layer protection, you can use security groups for your VPC. Security groups act as a virtual firewall for members of the group to control inbound and outbound traffic. For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic. You can attach security groups to EC2 instances and other AWS services that use elastic network interfaces, including RDS instances, VPC endpoints, AWS Lambda functions, and Amazon SageMaker notebooks.

You can also use network access control lists (ACLs) as an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets, and supports allow rules and deny rules. Network ACLs are a good option for controlling traffic at the subnet level.

For application-layer protection against common web exploits, you can use AWS WAF. You use AWS WAF on the Application Load Balancer that fronts your web servers or origin servers that are running on Amazon EC2, on Amazon API Gateway for your APIs, or you can use AWS WAF together with Amazon CloudFront. This allows you to strengthen security at the edge, filtering more of the unwanted traffic out before it reaches your critical content, data, code, and infrastructure.

For distributed denial of service (DDoS) protection, you can use AWS Shield, which is a managed service to provide protection against DDoS attacks for applications running on AWS. AWS Shield is available in two tiers: AWS Shield Standard and AWS Shield Advanced. All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target your web site or applications. When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks. AWS Shield Advanced provides advanced attack mitigation, visibility and attack notification, DDoS cost protection, and specialist support.

AWS Firewall Manager allows you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations. Also in AWS Firewall Manager, you can centrally manage and deploy security groups, AWS WAF rules, and AWS Shield Advanced protections.

There are many AWS Partner Network (APN) solutions that provide additional capabilities and protection that work alongside the AWS solutions discussed, in categories such as intrusion detection systems (IDS). For more information, find an APN Partner.

For more information, see 4.4 Network Perimeter Defence in the MAS Notice 655 workbook on AWS Artifact.

Malware protection

“4.5. A relevant entity must ensure that one or more malware protection measures are implemented on every system, to mitigate the risk of malware infection, where such malware protection measures are available and can be implemented.”

Malware protection requires a multi-faceted approach, including all of the following:

  • Training your employees in security awareness
  • Finding and patching vulnerabilities within your AWS workloads
  • Segmenting your networks
  • Limiting access to your critical systems and data
  • Having a comprehensive backup and restore strategy
  • Detecting malware
  • Creating incident response plans

In the previous sections of this post, I covered security patching, network segmentation, and limiting access. Now I’ll review the remaining elements.

Employee security awareness is crucial because the primary vector by which malware is installed within an organization is generally accepted to be phishing (or spear phishing), where an employee is misled into installing malware, or opens an attachment that exploits a software vulnerability to install it.

For backup and restore, a comprehensive and tested strategy is crucial, especially when the motivation of the malware is deletion, modification, or mass encryption (ransomware). You can review the AWS backup and restore solutions and leverage the various high-durability storage classes provided by Amazon Simple Storage Service (Amazon S3).

For malware protection, as with other security domains, it is important to have detective controls that complement the preventative ones: systems that provide early detection of malware, or of activity indicative of its presence. Understanding what typical activity looks like across your AWS account gives you a baseline of network and user activity that you can continuously monitor for anomalies.

Amazon GuardDuty is a threat detection service that continuously monitors activity within your AWS environment for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. Amazon GuardDuty uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats.

Continuing on the topic of malware detection, you should consider other approaches as well, including endpoint detection and response (EDR) solutions. AWS has a number of partners that specialize in this space. For more information, find an APN Partner.

Finally, you should make sure that you have a security incident response plan to help you respond to an incident, communicate during an incident, and recover from it. AWS recommends that you create these response plans in the form of playbooks. A good way to create a playbook is to start simple and iterate to improve your plan. Before you need to respond to an actual event, consider the tasks you can do ahead of time to improve your recovery timeframes, such as pre-provisioning access for your responders and pre-deploying the tools that responders or forensic teams will need. Importantly, do not wait for an actual incident to test your response and recovery plans: run game days to practice, improve, and iterate.

For more information, see 4.5 Malware protection in the MAS Notice 655 workbook on AWS Artifact.

Multi-factor authentication

“4.6. … a relevant entity must ensure that multi-factor authentication is implemented for the following:
(a) all administrative accounts in respect of any operating system, database, application, security appliance or network device that is a critical system; and
(b) all accounts on any system used by the relevant entity to access customer information through the internet.”

When using multi-factor authentication (MFA), it's important to think about the various layers where you need to implement it.

For access to the AWS API, AWS Management Console, and AWS resources that use AWS Identity and Access Management (IAM), you can configure MFA with a number of different form factors, and apply it to users within your AWS accounts. As I mentioned in the Administrative accounts section, AWS recommends that you apply MFA to the root account user. Where possible, you should not use IAM users, but instead use identity federation with IAM roles. By using identity federation with IAM roles, you can apply and enforce MFA at the level of your identity provider, for example Active Directory Federation Services (AD FS) or AWS Single Sign-On. For highly privileged actions, you may want to configure MFA-protected API access to only allow the action if MFA authentication has been performed.
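
For example, a minimal identity policy sketch that denies a sensitive action unless the caller has authenticated with MFA might look like the following; the action shown is a placeholder for your own highly privileged operations:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            }
        }
    ]
}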

With regard to third-party applications, including software as a service (SaaS), you should consider integrating with AWS services or third-party services to provide MFA protection. For example, AWS Single Sign-On (SSO) includes built-in integrations to many business applications, including Salesforce, Office 365, and others.

For your own in-house applications, you may want to consider solutions such as Amazon Cognito. Amazon Cognito goes beyond standard MFA (which uses SMS or time-based one-time passwords), and includes the option of adaptive authentication when using the advanced security features. With this feature enabled, when Amazon Cognito detects unusual sign-in activity, such as attempts from new locations and unknown devices, it can challenge the user with additional verification checks.

For more information, see 4.6 Multi-Factor authentication in the MAS Notice 655 workbook on AWS Artifact.

Conclusion

AWS products and services have security features designed to help you improve the security of your workloads, and meet your compliance requirements. Many AWS services are reviewed by independent third-party auditors, and these audit reports are available on AWS Artifact. You can use AWS services, tools, and guidance to address your side of the shared responsibility model to align with the requirements stated in Notice 655 – Notice on Cyber Hygiene.

Review the MAS Notice 655 – Cyber Hygiene – Workbook on AWS Artifact to understand both the AWS control environment (the AWS side of the shared responsibility model) and the guidance AWS provides to help you with your side of the shared responsibility model. You will find AWS guidance in the AWS Well-Architected Framework best practices, and where available or applicable to detective controls, in AWS Config rules and Amazon GuardDuty findings.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Darran Boyd

Darran is a Principal Security Solutions Architect at AWS, responsible for helping remove security blockers for our customers and accelerating their journey to the AWS Cloud. Darran’s focus and passion is to deliver strategic security initiatives that unlock and enable our customers at scale across the financial services industry and beyond.

Building well-architected serverless applications: Controlling serverless API access – part 3

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-controlling-serverless-api-access-part-3/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and explanation of the example application.

Security question SEC1: How do you control access to your serverless API?

This post continues part 2 of this security question. Previously, I cover Amazon Cognito user and identity pools, JSON web tokens (JWT), API keys, and usage plans.

Best practice: Scope access based on identity’s metadata

Authenticated users should be separated into logical groups, roles, or tiers. Separation can also be based on custom authentication token attributes included within Security Assertion Markup Language (SAML) or JSON Web Tokens (JWT). Consider using the user’s identity metadata to enable fine-grained access control to resources and actions.

Scoping access based on authentication metadata allows you to provide limited and fine-grained capabilities and access to consumers based on their roles and intent.

Review levels of access, identity metadata, and separate consumers into logical groups/tiers

With JWT or SAML, ensure you have the right level of information available within the token claims to help you develop authorization logic. Use custom private claims along with a unique namespace for non-public information. Private claims share custom information specifically with your application client, and unique namespaces avoid name collisions with other custom claims. For more information, see the AWS Partner Network blog post “Understanding JWT Public, Private and Reserved Claims”.
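
As a hypothetical example, a decoded JWT payload carrying namespaced private claims alongside reserved claims might look like the following; the namespace and claim names are illustrative only:

{
    "sub": "7d8ca528-4931-4254-9273-ea5ee853f271",
    "iss": "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_example",
    "cognito:groups": ["admin"],
    "https://airline.example.com/tier": "gold",
    "https://airline.example.com/bookingLimit": 10,
    "exp": 1590000000
}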

With Amazon Cognito, you can use custom attributes or the Pre Token Generation Lambda Trigger feature. This AWS Lambda trigger allows you to customize token claims before the token is generated.
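
Here is a minimal sketch of such a trigger, assuming a Node.js runtime; the namespaced claim name and value are illustrative only:

// Amazon Cognito Pre Token Generation trigger (sketch)
exports.handler = async (event) => {
  event.response = {
    claimsOverrideDetails: {
      claimsToAddOrOverride: {
        // Hypothetical namespaced private claim
        'https://airline.example.com/tier': 'gold'
      }
    }
  };
  // Return the event to Amazon Cognito so it issues the customized token
  return event;
};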

To illustrate using Amazon Cognito groups, I use the example from this blog post. The example uses Amplify CLI to create a web application for managing group membership. API Gateway handles authentication using an Amazon Cognito user pool as part of an administrator API. Two Amazon Cognito user pool groups are created using amplify auth update, one for admin, and one for editors.

  1. I navigate to the deployed web application and create two users, an administrator called someadminuser and an editor user called awesomeeditor.
  2. Show Amazon Cognito user creation

  3. I navigate to the Amazon Cognito user pool console, choose Users and groups under General settings, and can see that both users are created.
  4. View Amazon Cognito users created

  5. I choose the Groups tab and see that there are two user pool groups set up as part of amplify auth update.
  6. I add the someadminuser to the admin group.
  7. View Amazon Cognito user added to group and IAM role

  8. There is an AWS Identity and Access Management (IAM) role associated with the administrator group. This IAM role has an associated identity policy that grants permission to access an S3 bucket for some future application functionality.
  9. {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:ListBucket",
                    "s3:DeleteObject"
                ],
                "Resource": [
                    "arn:aws:s3:::mystoragebucket194021-dev/*"
                ],
                "Effect": "Allow"
            }
        ]
    }
    
  10. I log on to the web application using both the someadminuser and awesomeeditor accounts and compare the two JWT accessTokens Amazon Cognito has generated.

The someadminuser has a cognito:groups claim within the token showing membership of the user pool group admin.

View JWT with group membership

This token with its group claim can be used in a number of ways to authorize access.

Within this example frontend application, the token is used against an API Gateway resource using an Amazon Cognito authorizer.

An Amazon Cognito authorizer is an alternative to using IAM or Lambda authorizers to control access to your API Gateway method. The client first signs in to the user pool and receives a token. The client then calls the API method with the token, which is typically in the request’s Authorization header. The API call only succeeds if a valid token is supplied. Without the correct token, the client isn’t authorized to make the call.
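
As a sketch, a client request to a method protected by an Amazon Cognito authorizer could look like the following; the URL, path, and token are placeholders:

curl -H "Authorization: <user-pool-token>" \
    https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/Prod/admin/listGroups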

In this example, the Amazon Cognito authorizer authorizes access at the API method. Next, the event payload passed to the Lambda function contains the token. The function reads the token information. If the group membership claim includes admin, it adds the awesomeeditor user to the Amazon Cognito user pool group editors.

  1. To see how this is configured, I navigate to the API Gateway console and select the AdminQueries API.
  2. I view the /{proxy+}/ANY resource.
  3. I see that the Integration Request is set to LAMBDA_PROXY, which calls the AdminQueries Lambda function.
  4. View API Gateway Lambda proxy path

  5. I view the Method Request.
  6. View API Gateway Method Request using Amazon Cognito authorization

  7. Authorization is set to an Amazon Cognito user pool authorizer with an OAuth scope of aws.cognito.signin.user.admin. This scope grants access to Amazon Cognito user pool API operations that require access tokens, such as AdminAddUserToGroup.
  8. I navigate to the Authorizers menu item, and can see the configured Amazon Cognito authorizer.
  9. In the Amazon Cognito user pool details, the Token Source is set to Authorization. This is the name of the header sent to the Amazon Cognito user pool for authorization.
  10. View Amazon Cognito authorizer settings

  11. I navigate to the AWS Lambda console, select the AdminQueries function which amplify add auth added, and choose the Permissions tab. I select the Execution role and view its Permissions policies.
  12. I see that the function execution role allows write permission to the Amazon Cognito user pool resource. This allows the function to amend the user pool group membership.
  13. View Lambda execution role permissions including Amazon Cognito write

  14. I navigate back to the AWS Lambda console, and view the configuration for the AdminQueries function. There is an environment variable set for GROUP=admin.
Lambda function environment variables

The Lambda function code checks if the authorizer.claims token includes the GROUP environment variable value of admin. If not, the function returns err.statusCode = 403 and an error message. Here is the relevant section of code within the function.

// Only perform tasks if the user is in a specific group
const allowedGroup = process.env.GROUP;
// …
  // Fail if group enforcement is being used and the user is not in the allowed group
  if (req.apiGateway.event.requestContext.authorizer.claims['cognito:groups']) {
    // The cognito:groups claim is a comma-separated list of the user's groups
    const groups = req.apiGateway.event.requestContext.authorizer.claims['cognito:groups'].split(',');
    if (!(allowedGroup && groups.indexOf(allowedGroup) > -1)) {
      const err = new Error(`User does not have permissions to perform administrative tasks`);
      err.statusCode = 403;
      next(err);
    }
  } else {
    // No group claim at all: reject the request
    const err = new Error(`User does not have permissions to perform administrative tasks`);
    err.statusCode = 403;
    next(err);
  }

This example shows using a JWT to perform authorization within a Lambda function.

If the authorization is successful, the function continues and adds the awesomeeditor user to the editors group.

To show this flow in action:

  1. I log on to the web application using the awesomeeditor account, which is not a member of the admin group. I choose the Add to Group button.
  2. Sign in as editor

  3. Using the browser developer tools I see that the API request has failed, returning the 403 error code from the Lambda function.
  4. Shows 403 access denied

  5. I log on to the web application using the someadminuser account and choose the Add to Group button.
  6. Sign in as admin

  7. Using the browser developer tools I see that the API request is now successful as the user is a member of the admin group.
  8. API successful call as admin

  9. I navigate back to the Amazon Cognito user pool console, and view Users and groups. The awesomeeditor user is now a member of the editors group.
User now member of editors group

The Lambda function has added the awesomeeditor account to the editors group.

Implement authorization logic based on authentication metadata

Another way to separate users for authorization is using Amazon Cognito to define a resource server with custom scopes.

A resource server is a server for access-protected resources. It handles authenticated requests from an app that has an access token. This API can be hosted in Amazon API Gateway or outside of AWS. A scope is a level of access that an app can request to a resource. For example, if you have a resource server for airline flight details, it might define two scopes. One scope for all customers with read access to view the flight details, and one for airline employees with write access to add new flights. When the app makes an API call to request access and passes an access token, the token has one or more embedded scopes.
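
Using the flight details example, a resource server with two custom scopes could be defined with the AWS CLI as in this sketch; the user pool ID and identifier are placeholders:

aws cognito-idp create-resource-server \
    --user-pool-id us-east-1_EXAMPLE \
    --identifier https://flights.example.com \
    --name "Flight details API" \
    --scopes ScopeName=flights.read,ScopeDescription="View flight details" \
             ScopeName=flights.write,ScopeDescription="Add new flights"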

JWT with scope

This allows you to provide different access levels to API resources for different application clients based on the custom scopes. It is another mechanism for separating users during authentication.

For authorizing based on token claims, use an API Gateway Lambda authorizer.

For more information, see “Using Amazon Cognito User Pool Scopes with Amazon API Gateway”.

With AWS AppSync, use GraphQL resolvers. AWS Amplify can also generate fine-grained authorization logic via GraphQL transformers (directives). You can annotate your GraphQL schema to control access to specific data types, data fields, and GraphQL operations. These can include JWT groups or custom claims. For more information, see “GraphQL API Security with AWS AppSync and Amplify”, and the AWS AppSync documentation for Authorization Use Cases, and fine-grained access control.
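
For example, a schema annotated with the Amplify @auth directive might restrict an entire type to members of a user pool group, as in this sketch; the type and group names are illustrative:

type Flight
  @model
  @auth(rules: [{ allow: groups, groups: ["admin"] }]) {
  id: ID!
  departureCity: String!
  arrivalCity: String!
}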

Improvement plan summary:

  1. Review levels of access, identity metadata and separate consumers into logical groups/tiers.
  2. Implement authorization logic based on authentication metadata

Conclusion

Controlling serverless application API access using authentication and authorization mechanisms can help protect against unauthorized access and prevent unnecessary use of resources. In part 1, I cover the different mechanisms for authorization available for API Gateway and AWS AppSync. I explain the different approaches for public or private endpoints and show how to use IAM to control access to internal or private API consumers.

In part 2, I cover using Amplify CLI to add a GraphQL API with an Amazon Cognito user pool handling authentication. I explain how to view JSON Web Token (JWT) claims, and how to use Amazon Cognito identity pools to grant temporary access to AWS services. I also show how to use API keys and API Gateway usage plans for rate limiting and throttling requests.

In this post, I cover separating authenticated users into logical groups. I first show how to use Amazon Cognito user pool groups to separate users with an Amazon Cognito authorizer to control access to an API Gateway method. I also show how JWTs can be passed to a Lambda function to perform authorization within a function. I then explain how to also separate users using custom scopes by defining an Amazon Cognito resource server.

In an upcoming post, I will cover the second security question from the Well-Architected Serverless Lens about managing serverless security boundaries.

Building well-architected serverless applications: Controlling serverless API access – part 2

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-controlling-serverless-api-access-part-2/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and explanation of the example application.

Security question SEC1: How do you control access to your serverless API?

This post continues part 1 of this security question. Previously, I cover the different mechanisms for authentication and authorization available for Amazon API Gateway and AWS AppSync. I explain the different approaches for public or private endpoints and show how to use AWS Identity and Access Management (IAM) to control access to internal or private API consumers.

Required practice: Use appropriate endpoint type and mechanisms to secure access to your API

I continue to show how to implement security mechanisms appropriate for your API endpoint.

Using AWS Amplify CLI to add a GraphQL API

After adding authentication in part 1, I use the AWS Amplify CLI to add a GraphQL AWS AppSync API with the following command:

amplify add api

When prompted, I specify an Amazon Cognito user pool for authorization.

Amplify add Amazon Cognito user pool for authorization

To deploy the AWS AppSync API configuration to the AWS Cloud, I enter:

amplify push

Once the deployment is complete, I view the GraphQL API from within the AWS AppSync console and navigate to Settings. I see the AWS AppSync API uses the authorization configuration added during the part 1 amplify add auth. This uses the Amazon Cognito user pool to store the user sign-up information.

View AWS AppSync authorization settings with Amazon Cognito

For a more detailed walkthrough using Amplify CLI to add an AWS AppSync API for the serverless airline, see the build video.

Viewing JWT tokens

When I create a new account from the serverless airline web frontend, Amazon Cognito creates a user within the user pool. It handles the 3-stage sign-up process for new users. This includes account creation, confirmation, and user sign-in.

Serverless airline Amazon Cognito based sign-in process

Once the account is created, I browse to the Amazon Cognito console and choose Manage User Pools. I navigate to Users and groups under General settings and view my user account.

View User Account

When I sign in to the serverless airline web app, I authenticate with Amazon Cognito, and the client receives user pool tokens. The client then calls the AWS AppSync API, which authorizes access using the tokens, connects to data sources, and resolves the queries.

Amazon Cognito tokens used by AWS AppSync

During the sign-in process, I can use the browser developer tools to view the three JWT tokens Amazon Cognito generates and returns to the client. These are the accessToken, idToken, and refreshToken.

View tokens with browser developer tools

I copy the .idToken value and use the decoder at https://jwt.io/ to view the contents.

JSON web token decoded

The decoded token contains claims about my identity. Claims are pieces of information asserted about my identity. In this example, these include my Amazon Cognito username, email address, and other sign-up fields specified in the user pool. The client can use this identity information inside the application.

The ID token expires one hour after I authenticate. The client uses the Amazon Cognito issued refreshToken to retrieve new ID and access tokens. By default, the refresh token expires after 30 days, but can be set to any value between 1 and 3650 days. When using the mobile SDKs for iOS and Android, retrieving new ID and access tokens is done automatically with a valid refresh token.

For more information, see “Using Tokens with User Pools”.

Accessing AWS services

An Amazon Cognito user pool is a managed user directory to provide access for a user to an application. Amazon Cognito has a feature called identity pools (federated identities), which allow you to create unique identities for your users. These can be from user pools, or other external identity providers.

These unique identities are used to get temporary AWS credentials to directly access other AWS services, or external services via API Gateway. The Amplify client libraries automatically expire, rotate, and refresh the temporary credentials.

Identity pools have identities that are either authenticated or unauthenticated. Unauthenticated identities typically belong to guest users. Authenticated identities belong to authenticated users who have received a token by a login provider, such as a user pool. The Amazon Cognito issued user pool tokens are exchanged for AWS access credentials from an identity pool.

JWT tokens from Amazon Cognito user pool exchanged for AWS credentials from Amazon Cognito identity pool

API keys

For public content and unauthenticated access, both Amazon API Gateway and AWS AppSync provide API keys that can be used to track usage. API keys should not be used as a primary authorization method for production applications. Instead, use these for rate limiting and throttling. Unauthenticated APIs require stricter throttling than authenticated APIs.

API Gateway usage plans specify who can access API stages and methods, and also how much and how fast they can access them. API keys are then associated with the usage plans to identify API clients and meter access for each key. Throttling and quota limits are enforced on individual keys.

Throttling limits determine how many requests per second are allowed for a usage plan. This is useful to prevent a client from overwhelming a downstream resource. There are two API Gateway values to control this, the throttle rate and throttle burst, which use the token bucket algorithm. The algorithm is based on an analogy of filling and emptying a bucket of tokens representing the number of available requests that can be processed. The bucket in the algorithm has a fixed size based on the throttle burst and is filled at the token rate. Each API request removes a token from the bucket. The throttle rate then determines how many requests are allowed per second. The throttle burst determines how many concurrent requests are allowed and is shared across all APIs per Region in an account.

Token bucket algorithm
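
To make the analogy concrete, here is a minimal token bucket sketch in JavaScript. It is illustrative only, and not how API Gateway is implemented internally:

// Minimal token bucket: burst = bucket capacity, rate = refill per second
class TokenBucket {
  constructor(rate, burst) {
    this.rate = rate;      // tokens added per second (throttle rate)
    this.capacity = burst; // maximum bucket size (throttle burst)
    this.tokens = burst;
    this.last = Date.now();
  }

  tryRemoveToken() {
    const now = Date.now();
    // Refill in proportion to elapsed time, capped at the bucket capacity
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.rate
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;  // request allowed
    }
    return false;   // request throttled (429 Too Many Requests)
  }
}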

Quota limits allow you to set a maximum number of requests for an API key within a fixed time period. When billing for usage, this also allows you to enforce a limit when a client pays by monthly volume.

API keys are passed using the x-api-key header. API Gateway rejects requests without them.
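
A request supplying an API key looks like the following sketch; the URL and key value are placeholders:

curl -H "x-api-key: <api-key-value>" \
    https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/Prod/loyalty/customer123/get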

For example, within the serverless airline, the loyalty service uses an AWS Lambda function to fetch loyalty points and next tier progress via an API Gateway REST API /loyalty/{customerId}/get resource.

I can use this API to simulate the effect of usage plans with API keys.

  1. I navigate to the airline-loyalty API /loyalty/{customerId}/get resource in API Gateway console.
  2. I change the API Key Required value to be true.
  3. Setting API Key Required on API Gateway method

  4. I choose Deploy API from the Actions menu.
  5. I create a usage plan in the Usage Plans section of the API Gateway Console.
  6. I choose Create and enter a name for the usage plan.
  7. I select Enable throttling and set the rate to one request per second and the burst to two requests. These are artificially low numbers to simulate the effect.
  8. I select Enable quota and set the limit to 10 requests per day.
  9. Create API Gateway usage plan

  10. I click Next.
  11. I associate an API Stage by choosing Add API Stage, and selecting the airline Loyalty API and Prod Stage.
  12. Associate usage plan to API Gateway stage

  13. I click Next, and choose Create API Key and add to Usage Plan
  14. Create API key and add to usage plan.

  15. I name the API Key and ensure it is set to Auto Generate.
  16. Name API Key

  17. I choose Save then Done to associate the API key with the usage plan.
API key associated with usage plan
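
The same configuration can also be scripted. Here is a sketch of the equivalent AWS CLI commands, with placeholder IDs:

# Create a usage plan with the throttle and quota values used above
aws apigateway create-usage-plan \
    --name "loyalty-basic" \
    --throttle burstLimit=2,rateLimit=1 \
    --quota limit=10,period=DAY \
    --api-stages apiId=a1b2c3d4e5,stage=Prod

# Create an API key and associate it with the usage plan
aws apigateway create-api-key --name "loyalty-key" --enabled
aws apigateway create-usage-plan-key \
    --usage-plan-id <usage-plan-id> \
    --key-id <api-key-id> \
    --key-type API_KEY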

I test the API authentication, in addition to the throttles and limits using Postman.

I issue a GET request against the API Gateway URL using a customerId from the airline’s Airline-LoyaltyData Amazon DynamoDB table. I don’t specify any authorization or API key.

Postman unauthenticated GET request

I receive a Missing Authentication Token reply, which I expect as the API uses IAM authentication and I haven’t authenticated.

I then configure authentication details within the Authorization tab, using an AWS Signature. I enter the AccessKey and SecretKey for my AWS user account, which has an associated IAM identity policy that allows access to the API.

Postman authenticated GET request without access key

I receive a Forbidden reply. I have successfully authenticated, but the API Gateway method rejects the request as it requires an API key, which I have not provided.

I retrieve and copy my previously created API key from the API Gateway console API Keys section, and display it by choosing Show.

Retrieve API key.

I then configure an x-api-key header in the Postman Headers section and paste the API key value.

Having authenticated and specifying the required API key, I receive a response from the API with the loyalty points value.

Postman successful authenticated GET request with access key

I then call the API with a number of quick successive requests.

When I exceed the throttle rate limit of one request per second, and the throttle burst limit of two requests, I receive:

{"message": "Too Many Requests"}

When I then exceed the quota of 10 requests per day, I receive:

{"message": "Limit Exceeded"}

I view the API key usage within the API Gateway console Usage Plan section.

I select the usage plan, choose the API Keys section, then choose Usage. I see how many requests I have made.

View API key usage

If necessary, I can also grant a temporary rate extension for this key.

For more information on using API Keys for unauthenticated access for AWS AppSync, see the documentation.

API Gateway also has support for AWS Web Application Firewall (AWS WAF) which helps protect web applications and APIs from attacks. It is another mechanism to apply rate-based rules to prevent public API consumers exceeding a configurable request threshold. AWS WAF rules are evaluated before other access control features, such as resource policies, IAM policies, Lambda authorizers, and Amazon Cognito authorizers. For more information, see “Using AWS WAF with Amazon API Gateway”.

AWS AppSync APIs have built-in DDoS protection to protect all GraphQL API endpoints from attacks.

Improvement plan summary:

  1. Determine your API consumer and choose an API endpoint type.
  2. Implement security mechanisms appropriate to your API endpoint

Conclusion

Controlling serverless application API access using authentication and authorization mechanisms can help protect against unauthorized access and prevent unnecessary use of resources.

In this post, I cover using Amplify CLI to add a GraphQL API with an Amazon Cognito user pool handling authentication. I explain how to view JSON Web Token (JWT) claims, and how to use identity pools to grant temporary access to AWS services. I also show how to use API keys and API Gateway usage plans for rate limiting and throttling requests.

This well-architected question will be continued where I look at segregating authenticated users into logical groups. I will first show how to use Amazon Cognito user pool groups to separate users with an Amazon Cognito authorizer to control access to an API Gateway method. I will also show how to pass JWTs to a Lambda function to perform authorization within a function. I will then explain how to also segregate users using custom scopes by defining an Amazon Cognito resource server.

Building well-architected serverless applications: Controlling serverless API access – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-controlling-serverless-api-access-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and explanation of the example application.

Security question SEC1: How do you control access to your serverless API?

Use authentication and authorization mechanisms to prevent unauthorized access, and enforce quota for public resources. By controlling access to your API, you can help protect against unauthorized access and prevent unnecessary use of resources.

AWS has a number of services to provide API endpoints including Amazon API Gateway and AWS AppSync.

Use Amazon API Gateway for RESTful and WebSocket APIs. Here is an example serverless web application architecture using API Gateway.

Example serverless application architecture using API Gateway

Use AWS AppSync for managed GraphQL APIs.

AWS AppSync overview diagram

The serverless airline example in this series uses AWS AppSync to provide the frontend, user-facing public API. The application also uses API Gateway to provide backend, internal, private REST APIs for the loyalty and payment services.

Good practice: Use an authentication and an authorization mechanism

Authentication and authorization are mechanisms for controlling and managing access to a resource. In this well-architected question, that is a serverless API. Authentication is verifying who a client or user is. Authorization is deciding whether they have the permission to access a resource. By enforcing authorization, you can prevent unauthorized access to your workload from non-authenticated users.

Integrate with an identity provider that can validate your API consumer’s identity. An identity provider is a system that provides user authentication as a service. The identity provider may use the XML-based Security Assertion Markup Language (SAML) or JSON Web Tokens (JWT) for authentication. It may also federate with other identity management systems. JWT is an open standard that defines a way to securely transmit information between parties as a JSON object. JWT is used with frameworks such as OAuth 2.0 for authorization and OpenID Connect (OIDC), which builds on OAuth 2.0 and adds authentication.

Only authorize access to consumers that have successfully authenticated. Use an identity provider rather than API keys as a primary authorization method. API keys are more suited to rate limiting and throttling.

Evaluate authorization mechanisms

Use AWS Identity and Access Management (IAM) for authorizing access to internal or private API consumers, or other AWS managed services such as AWS Lambda.

For public, user-facing web applications, API Gateway accepts JWT authorizers for authenticating consumers. You can use either Amazon Cognito or OpenID Connect (OIDC).

App client authenticates and gets tokens

For custom authorization needs, you can use Lambda authorizers.

A Lambda authorizer (previously called a custom authorizer) is an AWS Lambda function which API Gateway calls for an authorization check when a client makes a request to an API method. This means you do not have to write custom authorization logic in a function behind an API. The Lambda authorizer function can validate a bearer token such as JWT, OAuth, or SAML, or request parameters and grant access. Lambda authorizers can be used when using an identity provider other than Amazon Cognito or AWS IAM, or when you require additional authorization customization.

Lambda authorizers

For more information, see the AWS Hero blog post, “The Complete Guide to Custom Authorizers with AWS Lambda and API Gateway”.

The AWS documentation also has a useful section on “Understanding Lambda Authorizers Auth Workflow with Amazon API Gateway”.
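
A minimal TOKEN-type Lambda authorizer in Node.js could look like the following sketch; the token comparison is a placeholder for real validation logic, such as verifying a JWT signature:

// Lambda authorizer (TOKEN type) returning an IAM policy (sketch)
exports.handler = async (event) => {
  const token = event.authorizationToken; // bearer token sent by the client

  // Placeholder check: replace with real token validation (JWT, OAuth, SAML)
  const effect = token === 'allow' ? 'Allow' : 'Deny';

  return {
    principalId: 'user',
    policyDocument: {
      Version: '2012-10-17',
      Statement: [{
        Action: 'execute-api:Invoke',
        Effect: effect,
        Resource: event.methodArn
      }]
    }
  };
};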

Enforce authorization for non-public resources within your API

Within API Gateway, you can enable native authorization for users authenticated using Amazon Cognito or AWS IAM. For authorizing users authenticated by other identity providers, use Lambda authorizers.

For example, within the serverless airline, the loyalty service uses a Lambda function to fetch loyalty points and next tier progress. AWS AppSync acts as the client using an HTTP resolver, via an API Gateway REST API /loyalty/{customerId}/get resource, to invoke the function.

To ensure only AWS AppSync is authorized to invoke the API, IAM authorization is set within the API Gateway method request.

Viewing API Gateway IAM authorization

The serverless airline uses the AWS Serverless Application Model (AWS SAM) to deploy the backend infrastructure as code. This makes it easier to know which IAM role has access to the API. One of the benefits of using infrastructure as code is visibility into all deployed application resources, including IAM roles.

The loyalty service AWS SAM template contains the AppsyncLoyaltyRestApiIamRole.

  AppsyncLoyaltyRestApiIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: appsync.amazonaws.com
            Action: sts:AssumeRole
      Path: /
      Policies:
        - PolicyName: LoyaltyApiInvoke
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - execute-api:Invoke
                # arn:aws:execute-api:region:account-id:api-id/stage/METHOD_HTTP_VERB/Resource-path
                Resource: !Sub arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${LoyaltyApi}/*/*/*

The IAM role specifies that appsync.amazonaws.com can perform an execute-api:Invoke on the specific API Gateway resource arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${LoyaltyApi}/*/*/*

Within AWS AppSync, you can enable native authorization for users authenticating using Amazon Cognito or AWS IAM. You can also use any external identity provider compliant with OpenID Connect (OIDC).

Improvement plan summary:

  1. Evaluate authorization mechanisms.
  2. Enforce authorization for non-public resources within your API

Required practice: Use appropriate endpoint type and mechanisms to secure access to your API

APIs may have public or private endpoints. Consider public endpoints to serve consumers where they may not be part of your network perimeter. Consider private endpoints to serve consumers within your network perimeter where you may not want to expose the API publicly. Public and private endpoints may have different levels of security.

Determine your API consumer and choose an API endpoint type

For providing public content, use Amazon API Gateway or AWS AppSync public endpoints.

For providing content with restricted access, use Amazon API Gateway with authorization to specific resources, methods, and actions you want to restrict. For example, the serverless airline application uses AWS IAM to restrict access to the private loyalty API so only AWS AppSync can call it.

With AWS AppSync providing a GraphQL API, restrict access to specific data types, data fields, queries, mutations, or subscriptions.

You can create API Gateway private REST APIs that you can only access from your Amazon Virtual Private Cloud (VPC) by using an interface VPC endpoint.

API Gateway private endpoints

For more information, see “Choose an endpoint type to set up for an API Gateway API”.

Implement security mechanisms appropriate to your API endpoint

With Amazon API Gateway and AWS AppSync, for both public and private endpoints, there are a number of mechanisms for access control.

For providing content with restricted access, API Gateway REST APIs support native authorization using AWS IAM, Amazon Cognito user pools, and Lambda authorizers. Amazon Cognito user pools provide a managed user directory for authentication. For more detailed information, see the AWS Hero blog post, “Picking the correct authorization mechanism in Amazon API Gateway“.

You can also use resource policies to restrict content to a specific VPC, VPC endpoint, a data center, or a specific AWS Account.
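
For example, a resource policy sketch that only allows invocation through a specific interface VPC endpoint might look like the following; the VPC endpoint ID is a placeholder:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*"
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {
                "StringNotEquals": {
                    "aws:SourceVpce": "vpce-0123456789abcdef0"
                }
            }
        }
    ]
}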

API Gateway resource policies are different from IAM identity policies. IAM identity policies are attached to IAM users, groups, or roles. These policies define what that identity can do on which resources. For example, in the serverless airline, the IAM role AppsyncLoyaltyRestApiIamRole specifies that appsync.amazonaws.com can perform an execute-api:Invoke on the specific API Gateway resource arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${LoyaltyApi}/*/*/*

Resource policies are attached to resources such as an Amazon S3 bucket, or an API Gateway resource or method. The policies define what identities can access the resource.

IAM access is determined by a combination of identity policies and resource policies.

For more information on the differences, see “Identity-Based Policies and Resource-Based Policies”. To see which services support resource-based policies, see “AWS Services That Work with IAM”.

API Gateway HTTP APIs support JWT authorizers as a part of OpenID Connect (OIDC) and OAuth 2.0 frameworks.

API Gateway WebSocket APIs support AWS IAM and Lambda authorizers.

With AWS AppSync public endpoints, you can enable authorization with the following:

  • AWS IAM
  • Amazon Cognito User pools for email and password functionality
  • Social providers (Facebook, Google+, and Login with Amazon)
  • Enterprise federation with SAML

Within the serverless airline, AWS Amplify Console hosts the public, user-facing site. Amplify Console provides a git-based workflow for building, deploying, and hosting serverless web applications. Amplify Console manages the hosting of the frontend assets for single page app (SPA) frameworks in addition to static websites, along with an optional serverless backend. Frontend assets are stored in Amazon S3 and distributed globally by the Amazon CloudFront edge network.

The AWS Amplify CLI toolchain allows you to add backend resources using AWS CloudFormation.

Using Amplify CLI to add authentication

For the serverless airline, I use the Amplify CLI to add authentication using Amazon Cognito with the following command:

amplify add auth

When prompted, I specify the authentication parameters I require.

Amplify add auth

Amplify CLI creates a local CloudFormation template. Use the following command to deploy the updated authentication configuration to the cloud:

amplify push

Once the deployment is complete, I view the deployed authentication nested stack resources from within the CloudFormation Console. I see the Amazon Cognito user pool.

View Amplify authentication CloudFormation nested stack resources

For a more detailed walkthrough using Amplify CLI to add authentication for the serverless airline, see the build video.

For more information on Amplify CLI and authentication, see “Authentication with Amplify”.

Conclusion

To help protect against unauthorized access and prevent unnecessary use of serverless API resources, control access using authentication and authorization mechanisms.

In this post, I cover the different mechanisms for authorization available for API Gateway and AWS AppSync. I explain the different approaches for public or private endpoints and show how to use IAM to control access to internal or private API consumers. I walk through how to use the Amplify CLI to create an Amazon Cognito user pool.

This well-architected question will be continued in a future post where I continue using the Amplify CLI to add a GraphQL API. I will explain how to view JSON Web Tokens (JWT) claims, and how to use Cognito identity pools to grant temporary access to AWS services. I will also show how to use API keys and API Gateway usage plans for rate limiting and throttling requests.

Building well-architected serverless applications: Approaching application lifecycle management – part 3

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-approaching-application-lifecycle-management-part-3/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and explanation of the example application.

Question OPS2: How do you approach application lifecycle management?

This post continues part 2 of this Operational Excellence question where I look at deploying to multiple stages using temporary environments, and rollout deployments. In part 1, I cover using infrastructure as code with version control to deploy applications in a repeatable manner.

Good practice: Use configuration management

Use environment variables and configuration management systems to make and track configuration changes. These systems reduce errors caused by manual processes, reduce the level of effort to deploy changes, and help isolate configuration from business logic.

Environment variables are suited for infrequently changing configuration options such as logging levels and database connection strings. Configuration management systems are for dynamic configuration that might change frequently or contain sensitive data such as secrets.

Environment variables

The serverless airline example used in this series uses AWS Amplify Console environment variables to store application-wide settings.

For example, the Stripe payment keys for all branches, and names for individual branches, are visible within the Amplify Console in the Environment variables section.

AWS Amplify environment variables

AWS Lambda environment variables are set up as part of the function configuration stored using the AWS Serverless Application Model (AWS SAM).

For example, the airline booking ReserveBooking AWS SAM template sets global environment variables including the LOG_LEVEL with the following code.

Globals:
    Function:
        Environment:
            Variables:
                LOG_LEVEL: INFO

This is visible in the AWS Lambda console within the function configuration.

AWS Lambda environment variables in console

See the AWS Documentation for more information on using AWS Lambda environment variables and also how to store sensitive data. Amazon API Gateway can also pass stage-specific metadata to Lambda functions.

Dynamic configuration

Dynamic configuration is also stored in configuration management systems to specify external values and is unique to each environment. This configuration may include values such as an Amazon Simple Notification Service (Amazon SNS) topic, Lambda function name, or external API credentials. AWS Systems Manager Parameter Store, AWS Secrets Manager, and AWS AppConfig have native integrations with AWS CloudFormation to store dynamic configuration. For more information, see the examples for referencing dynamic configuration from within AWS CloudFormation.

For the serverless airline application, dynamic configuration is stored in AWS Systems Manager Parameter Store. During CloudFormation stack deployment, a number of parameters are stored in Systems Manager. For example, in the booking service AWS SAM template, the booking SNS topic ARN is stored.

BookingTopicParameter:
    Type: "AWS::SSM::Parameter"
    Properties:
        Name: !Sub /${Stage}/service/booking/messaging/bookingTopic
        Description: Booking SNS Topic ARN
        Type: String
        Value: !Ref BookingTopic

View the stored SNS topic value by navigating to the Parameter Store console, and search for BookingTopic.

Finding Systems Manager Parameter Store values

Select the Parameter name and see the Amazon SNS ARN.

Viewing SNS topic value

The loyalty service then references this value within another stack.

When the Amplify Console Makefile deploys the loyalty service, it retrieves this value for the booking service from Parameter Store, and references it as a parameter-override. The deployment is also parameterized with the $${AWS_BRANCH} environment variable if there are multiple environments within the same AWS account and Region.

sam deploy \
	--parameter-overrides \
	BookingSNSTopic=/$${AWS_BRANCH}/service/booking/messaging/bookingTopic

Environment variables and configuration management systems help with managing application configuration.
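
A Lambda function can also read such a parameter at runtime. Here is a sketch using the Node.js AWS SDK, assuming a STAGE environment variable holds the stage name:

// Read a dynamic configuration value from Parameter Store (sketch)
const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

exports.handler = async () => {
  const name = `/${process.env.STAGE}/service/booking/messaging/bookingTopic`;
  const result = await ssm.getParameter({ Name: name }).promise();
  const bookingTopicArn = result.Parameter.Value;
  // Use bookingTopicArn, for example to publish a message to the SNS topic
  return bookingTopicArn;
};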

Improvement plan summary

  1. Use environment variables for configuration options that change infrequently, such as logging levels and database connection strings.
  2. Use a configuration management system for dynamic configuration that might change frequently or contain sensitive data such as secrets.

Best practice: Use CI/CD including automated testing across separate accounts

Continuous integration/delivery/deployment is one of the cornerstones of cloud application development and a vital part of a DevOps initiative.

Explanation of CI/CD stages

Building CI/CD pipelines increases software delivery quality and feedback time for detecting and resolving errors. I cover how to deploy multiple stages in isolated environments and accounts, which helps with creating separate testing CI/CD pipelines in part 2. As the serverless airline example is using AWS Amplify Console, this comes with a built-in CI/CD pipeline.

Automate the build, deployment, testing, and rollback of the workload using KPIs and operational alerts. This eases troubleshooting, enables faster remediation and feedback time, and enables automatic and manual rollback/roll-forward should an alert trigger.

I cover metrics, KPIs, and operational alerts in this series in the Application Health part 1, and part 2 posts. I cover rollout deployments with traffic shifting based on metrics in this question’s part 2.

CI/CD pipelines should include integration and end-to-end tests. I cover local unit testing for Lambda and API Gateway in part 2.

Add an optional testing stage to Amplify Console to catch regressions before pushing code to production. Use the test step to run any test commands at build time using any testing framework of your choice. Amplify Console has deeper integration with the Cypress test suite that allows you to generate a UI report for your tests. Here is an example to set up end-to-end tests with Cypress.

Cypress testing example

There are a number of AWS and third-party solutions to host code and create CI/CD pipelines for serverless applications.

AWS Code Suite

For more information on how to use the AWS Code* services together, see the detailed Quick Start deployment guide Serverless CI/CD for the Enterprise on AWS.

All these AWS services have a number of integrations with third-party products so you can integrate your serverless applications with your existing tools. For example, CodeBuild can build from GitHub and Atlassian Bitbucket repositories. CodeDeploy integrates with a number of developer tools and configuration management systems. CodePipeline has a number of pre-built integrations to use existing tools for your serverless applications. For more information specifically on using CircleCI for serverless applications, see Simplifying Serverless CI/CD with CircleCI and the AWS Serverless Application Model.

Improvement plan summary

  1. Use a continuous integration/continuous deployment (CI/CD) pipeline solution that deploys multiple stages in isolated environments/accounts.
  2. Automate testing including but not limited to unit, integration, and end-to-end tests.
  3. Favor rollout deployments over all-at-once deployments for more resilience, and gradually learn what metrics best determine your workload’s health to appropriately alert on.
  4. Use a deployment system that supports traffic shifting as part of your pipeline, and rollback/roll-forward traffic to previous versions if an alert is triggered.

Good practice: Review function runtime deprecation policy

Lambda functions created using AWS provided runtimes follow official long-term support deprecation policies. Third-party runtime deprecation policies may differ from official long-term support. Review your runtime deprecation policy and have a mechanism to report on runtimes that, if deprecated, may affect your workload's ability to operate as intended.

Review the AWS Lambda runtime policy support page to understand the deprecation schedule for your runtime.

AWS Health provides ongoing visibility into the state of your AWS resources, services, and accounts. Use the AWS Personal Health Dashboard for a personalized view and automate custom notifications to communication channels other than your AWS Account email.

Use AWS Config to report on AWS Lambda function runtimes that might be near their deprecation. Run compliance and operational checks with AWS Config for Lambda functions.
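
As a quick complement to AWS Config, you can also inventory function runtimes with the AWS CLI. This sketch lists functions still using an example runtime:

aws lambda list-functions \
    --query "Functions[?Runtime=='nodejs8.10'].[FunctionName,Runtime]" \
    --output table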

If you are unable to migrate to newer runtimes within the deprecation schedule, use AWS Lambda custom runtimes as an interim solution.

Improvement plan summary

  1. Identify and report runtimes that might deprecate and their support policy.

Conclusion

Introducing application lifecycle management improves the development, deployment, and management of serverless applications. In part 1, I cover using infrastructure as code with version control to deploy applications in a repeatable manner. This reduces errors caused by manual processes and gives you more confidence your application works as expected. In part 2, I cover prototyping new features using temporary environments, and rollout deployments to gradually shift traffic to new application code.

In this post I cover configuration management, CI/CD for serverless applications, and managing function runtime deprecation.

In an upcoming post, I will cover the first Security question from the Well-Architected Serverless Lens – Controlling access to serverless APIs.

Building well-architected serverless applications: Approaching application lifecycle management – part 2

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-approaching-application-lifecycle-management-part-2/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and explanation of the example application.

Question OPS2: How do you approach application lifecycle management?

This post continues part 1 of this Operational Excellence question. Previously, I covered using infrastructure as code with version control to deploy applications in a repeatable manner.

Good practice: Prototype new features using temporary environments

Storing application configuration as infrastructure as code allows deployment of multiple, repeatable, isolated versions of an application.

Create multiple temporary environments for new features you may need to prototype, and tear them down as you complete them. Temporary environments enable fine-grained feature isolation and higher fidelity development when interacting with managed services. This allows you to gain confidence your workload integrates and operates as intended.

These environments can also be in separate accounts, which helps isolate limits, access to data, and resiliency. For best practices on multi-account deployments, see the AWS Partner Network blog post: Best Practices Guide for Multi-Account AWS Deployments.

There are a number of ways to deploy separate environments for an application. To make the deployment simpler, it is good practice to separate dynamic configuration from your infrastructure logic.

For an application managed via the AWS Serverless Application Model (AWS SAM), use an AWS SAM CLI parameter to specify a new stack-name which deploys a new copy of the application as a separate stack.

For example, there is an existing AWS SAM application with a stack-name of app-test. To deploy a new copy, specify a new stack-name of app-newtest with the following command line:

sam deploy --stack-name app-newtest

This deploys a whole new copy of the application in the same account as a separate stack.

For the serverless airline example used in this series, deploy a whole new copy of the application following the deployment instructions, either into the same AWS account or a completely different account. This is useful when each developer in a team has a sandbox environment. In this example, you only need to configure payment provider credentials as environment variables and seed the database with possible flights, as these are currently manual post-installation tasks.

However, maintaining an entirely separate codebase copy of an application becomes difficult to manage and reconcile over time.

As the airline application code is stored in a fork in a GitHub account, use git branches for separate environments. In typical development teams, developers may deploy a main branch to production, use a dev branch as staging, and create feature branches when working on new functionality. This allows safe prototyping in sandbox environments without affecting the main codebase, using git as a mechanism to merge code and resolve conflicts. Changes are automatically pushed to production once they are merged into the main (or production) branch.

Git branching flow

As the airline example is using AWS Amplify Console, there are a few different options to create a new environment linked to a feature branch.

You can create a whole new Amplify Console app deployment, either in a separate Region, or in a separate AWS account, which then connects to a feature branch by following the deployment instructions. Create a new branch called new-feature in GitHub and in the Amplify Console, select Connect App, and navigate to the repository and the new-feature branch. Configure the payment provider credentials as environment variables.

Deploy new application pointing to feature branch

You can also connect the existing Amplify Console deployment to a git branch, deploying the new-feature branch into the same AWS account and Region.

Amplify Environments

In the Amplify Console, navigate to the existing app, select Connect Branch, and choose the new-feature branch. Create a new Backend environment to deploy the full stack. If the feature branch is only frontend code changes, you can choose to use the same backend components.

Connect Amplify Console to feature branch

Amplify Console then deploys a new stack, in addition to the develop branch stack, based on the code in the new-feature branch.

New feature branch deploying within existing deployment.

You do not need to add the payment provider environment variables as these are stored per application, per Region, for all branches.

Amplify environment variables for All Branches.

Using git and branching with Amplify Console, you have automatic deployments when any changes are pushed to the GitHub repository. If there are any issues with a particular deployment, you can revert the changes in git, which kicks off a redeploy to a known good version. Once you are happy with the feature, merge the changes into the production branch, which kicks off another deployment.

As it is simple to set up multiple test environments, practice good application hygiene, as well as cost management, by identifying and deleting any temporary environments that are no longer required. It may be helpful to include the stack owner's contact details via CloudFormation tags. Use Amazon CloudWatch scheduled events to notify and tag temporary environments for deletion, and provide a mechanism to delay their deletion if needed.
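Here is a minimal sketch of such a check using the AWS SDK for Python. It assumes illustrative environment=temporary and owner tags, which are not part of the airline application.

import boto3
from datetime import datetime, timedelta, timezone

# Flag stacks tagged environment=temporary that are older than 14 days.
cutoff = datetime.now(timezone.utc) - timedelta(days=14)
cloudformation = boto3.client("cloudformation")

for page in cloudformation.get_paginator("describe_stacks").paginate():
    for stack in page["Stacks"]:
        tags = {tag["Key"]: tag["Value"] for tag in stack.get("Tags", [])}
        if tags.get("environment") == "temporary" and stack["CreationTime"] < cutoff:
            owner = tags.get("owner", "unknown")
            print(f"Stack {stack['StackName']} (owner: {owner}) is due for deletion")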

Prototyping locally

With AWS SAM or a third-party framework, you can run API Gateway and invoke Lambda function code locally for faster development iteration. Local debugging and testing can help quickly confirm that function code is working, and is also useful for some unit tests. Local testing cannot duplicate the full functionality of the cloud. It is suited to testing services with custom logic, such as Lambda, rather than trying to duplicate all cloud managed services, such as Amazon SNS or Amazon S3, locally. Don't try to bring the cloud to the test; rather, bring the testing to the cloud.

Here is an example of executing a function locally.

I use AWS SAM CLI to invoke the Airline-GetLoyalty Lambda function locally to test some functionality. AWS SAM CLI uses Docker to simulate the Lambda runtime. As the function only reads from DynamoDB, I use stubbed data, or can set up DynamoDB Local.
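For example, here is a minimal sketch of pointing the AWS SDK for Python at DynamoDB Local; the endpoint is the DynamoDB Local default, and the table and key names are illustrative.

import boto3

# DynamoDB Local listens on port 8000 by default; the region, table
# name, and key here are illustrative.
dynamodb = boto3.resource(
    "dynamodb", region_name="us-east-1", endpoint_url="http://localhost:8000"
)
table = dynamodb.Table("Airline-LoyaltyData")

response = table.get_item(Key={"customerId": "test-customer"})
print(response.get("Item"))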

1. I pass a JSON event to the function to simulate the event from API Gateway, as well as passing in environment variables as JSON. Create sample events using sam local generate-event.

2. I run sam build GetFunc to build the function dependencies, in this case Node.js.

$ sam build GetFunc
Building resource 'GetFunc'
Running NodejsNpmBuilder:NpmPack
Running NodejsNpmBuilder:CopyNpmrc
Running NodejsNpmBuilder:CopySource
Running NodejsNpmBuilder:NpmInstall
Running NodejsNpmBuilder:CleanUpNpmrc

Build Succeeded

3. I run sam local invoke passing in the event payload and environment variables. This spins up a Docker container, executes the function, and returns the result.

$ sam local invoke --event src/get/event.json --env-vars local-env-vars.json GetFunc
Invoking index.handler (nodejs10.x)

Fetching lambci/lambda:nodejs10.x Docker container image......
Mounting /home/ec2-user/environment/samples/aws-serverless-airline-booking/src/backend/loyalty/.aws-sam/build/GetFunc as /var/task:ro,delegated inside runtime container
START RequestId: 7be7e9a5-9f2f-1520-fbd1-a013485105d3 Version: $LATEST
END RequestId: 7be7e9a5-9f2f-1520-fbd1-a013485105d3
REPORT RequestId: 7be7e9a5-9f2f-1520-fbd1-a013485105d3 Init Duration: 249.89 ms Duration: 76.40 ms Billed Duration: 100 ms Memory Size: 512 MB Max Memory Used: 54 MB

{"statusCode": 200,"body": "{\"points\":0,\"level\":\"bronze\",\"remainingPoints\":50000}"}

For more information on using AWS SAM to run API Gateway and invoke Lambda functions locally, see the AWS Documentation. For third-party framework solutions, see Invoking AWS Lambda functions locally with Serverless framework and Develop locally against cloud services with Stackery.

Improvement plan summary:

  1. Use a serverless framework to deploy temporary environments named after a feature.
  2. Implement a process to identify temporary environments that may not have been deleted over an extended period of time.
  3. Prototype application code locally and test integrations directly with managed services.

Good practice: Use a rollout deployment mechanism

Use a rollout deployment for production workloads as opposed to all-at-once mechanisms. Rollout deployments reduce the risk of a failed deployment by gradually deploying application changes to a limited set of customers. All-at-once deployments deploy the entire application at once, and are best suited to non-production systems.

AWS Lambda versions and aliases

For production Lambda functions, it is best to deploy a new function version for every deployment. Versions can represent the stable version or reflect particular features. Create Lambda aliases, which are pointers to particular function versions. Invoke Lambda functions using the aliases, with a specific alias for the stable production version. If an alias is not specified, the $LATEST application code is invoked, which may not reflect a stable version or a desired feature. Use a new feature alias for testing without affecting users of the stable production version.
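As a sketch, publishing a version and pointing an alias at it can be done with the AWS SDK for Python; the function and alias names are illustrative.

import boto3

lambda_client = boto3.client("lambda")

# Publish the current code and configuration as an immutable version.
version = lambda_client.publish_version(FunctionName="my-function")["Version"]

# Point a stable alias at that version, and invoke via the alias.
lambda_client.create_alias(
    FunctionName="my-function",
    Name="stable",
    FunctionVersion=version,
)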

AWS Lambda function versions and aliases

See AWS Documentation to manage Lambda function versions and aliases using the AWS Management Console, or Lambda API.

Alias routing

Use a Lambda alias's routing configuration to introduce traffic shifting, sending a small percentage of traffic to a second function alias or version for a rolling deployment. This is commonly called a canary release.

For example, configure a Lambda alias named stable to point to function version 2. A new function version 3 is deployed with the alias new-feature. Use the new-feature alias to test the new deployment without impacting production traffic to the stable version.

During production rollout, use alias routing. For example, 90% of invocations route to the stable version while 10% route to the new-feature alias pointing to version 3. If the 10% of invocations succeed, deployment can continue until all traffic is migrated to version 3, and the stable alias is then pointed to version 3.
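Here is a sketch of that weighted routing using the AWS SDK for Python; the function name, alias, and version numbers are illustrative.

import boto3

# The stable alias keeps version 2 as primary with 90% of traffic,
# while 10% of invocations shift to version 3.
boto3.client("lambda").update_alias(
    FunctionName="my-function",
    Name="stable",
    FunctionVersion="2",
    RoutingConfig={"AdditionalVersionWeights": {"3": 0.10}},
)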

AWS Lambda alias routing

AWS SAM supports gradual Lambda deployments with a feature called safe Lambda deployments using AWS CodeDeploy. This creates new versions of a Lambda function, and automatically creates aliases pointing to the new version. Customer traffic gradually shifts to the new version, or rolls back automatically if any specified CloudWatch alarms trigger. AWS SAM supports canary, linear, and all-at-once deployment preference types.

Pre-traffic and post-traffic Lambda functions can also verify if the newly deployed code is working as expected.

In the airline example, create a safe deployment for the ReserveBooking Lambda function by adding the example AWS SAM template code specified in the instructions. This migrates 10 percent of traffic every 10 minutes with CloudWatch alarms to check for any function errors. You could also alarm on latency, or any custom metric.

During the Amplify Console build phase, the safe deployment is initiated. Navigate to the CodeDeploy console and see the deployment in progress.

AWS CodeDeploy deployment in progress

Selecting the deployment, you can see the Traffic shifting progress and the Deployment details.

AWS CodeDeploy traffic shifting in progress.

Within Deployment details, select the DeploymentGroup, and view the CloudWatch Alarms CodeDeploy is using to test the rollout.

Amazon CloudWatch Alarms AWS CodeDeploy is using to test the rollout

Within Deployment details, select the Application, select the Revisions tab, and select the latest Revision location and view the CurrentVersion and TargetVersion for this deployment.

View deployment versions

View Deployment status and see the traffic has now shifted to the new version. The Amplify Console build also continues.

Traffic shifting complete

View the Lambda function versions and aliases in the Lambda console, selecting Qualifiers.

Viewing Lambda function version and aliases

Amazon API Gateway also supports canary release deployments at the API layer.

A rollout deployment provides traffic shifting, A/B testing, and the ability to roll back to any version at any point in time. AWS SAM makes it simple to add safe deployments to serverless applications.

Improvement plan summary

  1. For production systems, use a linear deployment strategy to gradually roll out changes to customers.
  2. For high volume production systems, use a canary deployment strategy when you want to limit changes to a fixed percentage of customers for an extended period of time.

Conclusion

Introducing application lifecycle management improves the development, deployment, and management of serverless applications. In this post I cover a number of methods to prototype new features using temporary environments. I show how to use rollout deployments to gradually shift traffic to new application code.

This well-architected question will continue in an upcoming post where I look at configuration management, CI/CD for serverless applications, and managing function runtime deprecation.

Building well-architected serverless applications: Approaching application lifecycle management – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-approaching-application-lifecycle-management-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and explanation of the example application.

Question OPS2: How do you approach application lifecycle management?

Adopt lifecycle management approaches that improve the flow of changes to production with higher fidelity, fast feedback on quality, and quick bug fixing. These practices help you rapidly identify, remediate, and limit changes that impact customer experience. By having an approach to application lifecycle management, you can reduce errors caused by manual process and increase the levels of control to gain confidence your workload operates as intended.

Required practice: Use infrastructure as code and stages isolated in separate environments

Infrastructure as code is a process of provisioning and managing cloud resources by storing application configuration in a template file. Using infrastructure as code helps to deploy applications in a repeatable manner, reducing errors caused by manual processes such as creating resources in the AWS Management Console.

Storing code in a version control system enables tracking and auditing of changes and releases over time. This is used to roll back changes safely to a known working state if there is an issue with an application deployment.

Infrastructure as code

For AWS Cloud development the built-in choice for infrastructure as code is AWS CloudFormation. The template file, written in JSON or YAML, contains a description of the resources an application needs. CloudFormation automates the deployment and ongoing updates of the resources by creating CloudFormation stacks.

CloudFormation code example creating infrastructure

There are a number of higher-level tools and frameworks that abstract and then generate CloudFormation. A serverless specific framework helps model the infrastructure necessary for serverless workloads, providing either declarative or imperative mechanisms to define event sources for functions. It wires permissions between resources automatically, adds resource configuration, code packaging, and any infrastructure necessary for a serverless application to run.

The AWS Serverless Application Model (AWS SAM) is an AWS open-source framework optimized for serverless applications. The AWS Cloud Development Kit (AWS CDK) allows you to provision cloud resources using familiar programming languages such as TypeScript, JavaScript, Python, Java, and C#/.NET. There are also third-party solutions for creating serverless cloud resources, such as the Serverless Framework.

The AWS Amplify Console provides a git-based workflow for building, deploying, and hosting serverless applications including both the frontend and backend. The AWS Amplify CLI toolchain enables you to add backend resources using CloudFormation.

For a large number of resources, consider breaking common functionality such as monitoring, alarms, or dashboards into separate infrastructure as code templates. With CloudFormation, use nested stacks to help deploy them as part of your serverless application stack. When using AWS SAM, import these nested stacks as nested applications from the AWS Serverless Application Repository.

AWS CloudFormation nested stacks

Here is an example AWS SAM template using nested stacks. There are two AWS::Serverless::Application nested resources, api.template.yaml and database.template.yaml. For more information on nested stacks, see the AWS Partner Network blog post: CloudFormation Nested Stacks Primer.

Version control

The serverless airline example application used in this series uses Amplify Console to provide part of the backend resources, including authentication using Amazon Cognito, and a GraphQL API using AWS AppSync.

The airline application code is stored in GitHub as a version control system. Fork, or copy, the application to your GitHub account. Configure Amplify Console to connect to the GitHub fork.

When pushing code changes to a fork, Amplify Console automatically deploys these backend resources along with the rest of the application. It hosts the application at the Production branch URL, and you can also configure a custom domain name if needed.

AWS Amplify Console App details

The Amplify Console configuration to create the API and Authentication backend resources is found in the backend-config.json file. The resources are provisioned during the Amplify Console build phase.

To view the deployed resources, within the Amplify Console, navigate to the awsserverlessairline application. Select Backend environments and then select an environment, in this example sampledev.

Select the API and Authentication tabs to view the created backend resources.

AWS Amplify Console deployed backend resources

Using multiple tools

Applications can use multiple tools and frameworks even within a single project to manage the infrastructure as code. Within the airline application, AWS SAM is also used to provision the rest of the serverless infrastructure using nested stacks. During the Amplify Console build process, the Makefile contains the AWS SAM build instructions for each application service.

For example, the AWS SAM build instructions to deploy the booking service are as follows:

deploy.booking: ##=> Deploy booking service using SAM
	$(info [*] Packaging and deploying Booking service...)
	cd src/backend/booking && \
		sam build && \
		sam package \
			--s3-bucket $${DEPLOYMENT_BUCKET_NAME} \
			--output-template-file packaged.yaml && \
		sam deploy \
			--template-file packaged.yaml \
			--stack-name $${STACK_NAME}-booking-$${AWS_BRANCH} \
			--capabilities CAPABILITY_IAM \
			--parameter-overrides \
	BookingTable=/$${AWS_BRANCH}/service/amplify/storage/table/booking \
	FlightTable=/$${AWS_BRANCH}/service/amplify/storage/table/flight \
	CollectPaymentFunction=/$${AWS_BRANCH}/service/payment/function/collect \
	RefundPaymentFunction=/$${AWS_BRANCH}/service/payment/function/refund \
	AppsyncApiId=/$${AWS_BRANCH}/service/amplify/api/id \
	Stage=$${AWS_BRANCH}

Each service has its own AWS SAM template.yml file. The files contain the resources for each of the booking, catalog, log-processing, loyalty, and payment services. This means that the services can be managed independently within the application as separate stacks. In larger applications, these services may be managed by separate teams, or be in separate repositories, environments, or AWS accounts. It may make sense to split out some common functionality, such as alarms or dashboards, into separate infrastructure as code templates.

AWS SAM can also use IAM roles to assume temporary credentials and deploy a serverless application to separate AWS accounts.

For more information on managing serverless code, see Best practices for organizing larger serverless applications.

View the deployed resources in the AWS CloudFormation Console. Select Stacks from the left-side navigation bar, and select the View nested toggle.

Viewing CloudFormation nested stacks

The serverless airline application is a more complex example application comprising multiple services, each composed of multiple CloudFormation stacks. Some stacks are managed via Amplify Console and others via AWS SAM. Using infrastructure as code is not only for large and complex applications. As a best practice, we suggest using SAM or another framework for even simple, small serverless applications with a single stack. For a getting started tutorial, see the example Deploying a Hello World Application.

Improvement plan summary:

  1. Use a serverless framework to help you execute functions locally, build and package application code. Separate packaging from deployment, deploy to isolated stages in separate environments, and support secrets via configuration management systems.
  2. For a large number of resources, consider breaking common functionalities such as alarms into separate infrastructure as code templates.

Conclusion

Introducing application lifecycle management improves the development, deployment, and management of serverless applications. In this post I cover using infrastructure as code with version control to deploy applications in a repeatable manner. This reduces errors caused by manual processes and gives you more confidence your application works as expected.

This well-architected question will continue in an upcoming post where I look further at deploying to multiple stages using temporary environments, and rollout deployments.

Building well-architected serverless applications: Understanding application health – part 2

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-understanding-application-health-part-2/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and an explanation of the example application.

Question OPS1: How do you evaluate your serverless application’s health?

This post continues part 1 of this Operational Excellence question. Previously, I covered using Amazon CloudWatch out-of-the-box standard metrics and alerts, and structured and centralized logging with the Embedded Metrics Format.

Best practice: Use application, business, and operations metrics

Identifying key performance indicators (KPIs), including business, customer, and operations outcomes, in addition to application metrics, helps to show a higher-level view of how the application and business are performing.

Business KPIs measure your application performance against business goals. For example, if fewer flight reservations are flowing through the system, the business would like to know.

Customer experience KPIs highlight the overall effectiveness of how customers use the application over time. Examples are perceived latency, time to find a booking, make a payment, etc.

Operational metrics help to see how operationally stable the application is over time. Examples are continuous integration/delivery/deployment feedback time, mean-time-between-failure/recovery, number of on-call pages and time to resolution, etc.

Custom Metrics

Embedded Metrics Format can also emit custom metrics to help you understand how your workload's health affects the business.

The airline booking service uses AWS Step Functions, AWS Lambda, Amazon SNS, and Amazon DynamoDB.

In the confirm booking module function handler, I add a new namespace and dimension to associate this set of logs with this application and service.

metrics.set_namespace("ServerlessAirlineEMF")
metrics.put_dimensions({"service":"confirm_booking"})

Within the try block, I emit a metric for a successful booking:

metrics.put_metric("BookingSuccessful", 1, "Count")

And within the except block, I emit a metric for a failed booking:

metrics.put_metric("BookingFailed", 1, "Count")

Once I make a booking, within the CloudWatch console, I navigate to Logs | Log groups and select the Airline-ConfirmBooking-develop group. I select a log stream and find the custom metric as part of the structured log entry.

Custom metric structured log entry

I can also create a custom metric graph. Within the CloudWatch console, I navigate to Metrics. I see the ServerlessAirlineEMF Custom Namespace is available.

Custom metric namespace

I select the Namespace and the available metric.

Available metric

I select a Line graph, add a name, and see the successful booking now plotted on the graph.

Plotted CloudWatch metric

I can also visualize and analyze this metric using CloudWatch Insights.

Once a booking is made, within the CloudWatch console, I navigate to Logs | Insights. I select the /aws/lambda/Airline-ConfirmBooking-develop log group. I choose Run query which shows a list of discovered fields on the right of the console.

I can search for discovered booking related fields.

I then enter the following in the query pane to search the logs and plot the sum of BookingReference, and choose Run Query:

fields @timestamp, @message
| stats sum(@BookingReference)

CloudWatch Insights query

There are a number of other component metrics that are important to measure. Track interactions between upstream and downstream components such as message queue length, integration latency, and throttling.

Improvement plan summary:

  1. Identify user journeys and metrics from each customer transaction.
  2. Create custom metrics asynchronously instead of synchronously for improved performance, cost, and reliability outcomes.
  3. Emit business metrics from within your workload to measure application performance against business goals.
  4. Create and analyze component metrics to measure interactions with upstream and downstream components.
  5. Create and analyze operational metrics to assess the health of your continuous delivery pipeline and operational processes.

Good practice: Use distributed tracing and code is instrumented with additional context

Logging provides information on the individual point in time events the application generates. Tracing provides a wider continuous view of an application. Tracing helps to follow a user journey or transaction through the application.

AWS X-Ray is an example of a distributed tracing solution; there are a number of third-party options as well. X-Ray collects data about the requests that your application serves and builds a service graph, which shows all the components of an application. This provides visibility to understand how upstream and downstream services may affect workload health.

For the most comprehensive view, enable X-Ray across as many services as possible and include X-Ray tracing instrumentation in code. See the list of AWS services integrated with X-Ray.

X-Ray receives data from services as segments. Segments for a common request are then grouped into traces. Segments can be further broken down into more granular subsegments. Custom data key-value pairs are added to segments and subsegments with annotations and metadata. Traces can also be grouped which helps with filter expressions.

AWS Lambda instruments incoming requests for all supported languages. Lambda application code can be further instrumented to emit information about its status, correlation identifiers, and business outcomes to determine transaction flows across your workload.

X-Ray tracing for a Lambda function is enabled in the Lambda console. I select the Airline-ReserveBooking-develop function, and in the Configuration pane, enable X-Ray active tracing.

X-Ray tracing enabled

X-Ray can also be enabled via CloudFormation with the following code:

TracingConfig:
  Mode: Active

Lambda IAM permissions to write to X-Ray are added automatically when active tracing is enabled via the console. When using CloudFormation, allow the actions xray:PutTraceSegments and xray:PutTelemetryRecords in the function's execution role.

It is important to understand what invocations X-Ray does trace. X-Ray applies a sampling algorithm. If an upstream service, such as API Gateway with X-Ray tracing enabled, has already sampled a request, the Lambda function request is also sampled. Without an upstream request, X-Ray traces data for the first Lambda invocation each second, and then 5% of additional invocations.

For the airline application, X-Ray tracing is initiated within the shared library with the code:

from aws_xray_sdk.core import models, patch_all, xray_recorder

Segments, subsegments, annotations, and metadata are added to functions with the following example code:

segment = xray_recorder.begin_segment('segment_name')
# Start a subsegment
subsegment = xray_recorder.begin_subsegment('subsegment_name')
# Add metadata and annotations
segment.put_metadata('key', dict, 'namespace')
subsegment.put_annotation('key', 'value')
# Close the subsegment and segment
xray_recorder.end_subsegment()
xray_recorder.end_segment()

For example, within the collect payment module, an annotation is added for a successful payment with:

tracer.put_annotation("PaymentStatus", "SUCCESS")

CloudWatch ServiceLens

Once a booking is made and payment is successful, the tracing is available in the X-Ray console.

I explore how Amazon CloudWatch ServiceLens connects metrics, logs, and X-Ray traces. Within the CloudWatch console, I navigate to ServiceLens | Service Map.

I can visualize all application resources and dependencies where X-Ray is enabled. I can trace performance or availability issues. If there was an issue connecting to SNS for example, this would be shown.

I select the Airline-CollectPayment-develop node and can view the out-of-the-box standard Lambda metrics.

I can select View Logs to jump to the CloudWatch Logs Insights console.

CloudWatch Insights Service map

I select View dashboard to see the function metrics, node map, and function details.

CloudWatch Insights Service Map dashboard

I select View traces and can filter by the custom annotation PaymentStatus. I select SUCCESS and choose Add to filter. I then select a trace.

CloudWatch Insights Filtered traces

I see the full trace details, which show the full application transaction of a payment collection.

Segments timeline

Selecting the Lambda handler subsegment – ## lambda_handler, I can view the trace Annotations and Metadata, which include the business transaction details such as Customer and PaymentStatus.

X-Ray annotations

Trace groups are another feature of X-Ray and ServiceLens. Trace groups use filter expressions, such as annotation.PaymentStatus = "FAILED", to view traces that match the particular group. Service graphs can also be viewed, and CloudWatch alarms created, based on the group.
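As a sketch, a trace group can also be created with the AWS SDK for Python; the group name is illustrative.

import boto3

# Group all traces where the PaymentStatus annotation equals FAILED.
boto3.client("xray").create_group(
    GroupName="FailedPayments",
    FilterExpression='annotation.PaymentStatus = "FAILED"',
)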

CloudWatch ServiceLens provides powerful capabilities to understand application performance bottlenecks and issues, helping determine how users are impacted.

Improvement plan summary:

  1. Identify common business context and system data that are commonly present across multiple transactions.
  2. Instrument SDKs and requests to upstream/downstream services to understand the flow of a transaction across system components.

Recent announcements

There have been a number of recent announcements for X-Ray and CloudWatch to improve how to evaluate serverless application health.

Conclusion

Evaluating application health helps you identify which services should be optimized to improve your customer’s experience. In part 1, I cover out-of-the-box standard metrics and alerts, as well as structured and centralized logging. In this post, I explore custom metrics and distributed tracing and show how to use ServiceLens to view logs, metrics, and traces together.

In an upcoming post, I will cover the next Operational Excellence question from the Well-Architected Serverless Lens – Approaching application lifecycle management.

Building well-architected serverless applications: Understanding application health – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-understanding-application-health-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and an explanation of the example application.

Question OPS1: How do you evaluate your serverless application’s health?

Evaluating your metrics, distributed tracing, and logging gives you insight into business and operational events, and helps you understand which services should be optimized to improve your customer’s experience. By understanding the health of your Serverless Application, you will know whether it is functioning as expected, and can proactively react to any signals that indicate it is becoming unhealthy.

Required practice: Understand, analyze, and alert on metrics provided out of the box

It is important to understand metrics for every AWS service used in your application so you can decide how to measure its behavior. AWS services provide a number of out-of-the-box standard metrics to help monitor the operational health of your application.

As these metrics are generated automatically, it is a simple way to start monitoring your application and can also be augmented with custom metrics.

The first stage is to identify which services the application uses. The airline booking component uses AWS Step Functions, AWS Lambda, Amazon SNS, and Amazon DynamoDB.

When I make a booking, as shown in the Introduction post, AWS services emit metrics to Amazon CloudWatch. These are processed asynchronously without impacting the application’s performance.

There are two default CloudWatch dashboards to visualize key metrics quickly: per service and cross service.

Per service

To view the per service metrics dashboard, I open the CloudWatch console.

I select a service where Overview is shown, such as Lambda. Now I can view the metrics for all Lambda functions in the account.

Cross service

To see an overview of key metrics across all AWS services, open the CloudWatch console and choose View cross service dashboard.

I see a list of all services with one or two key metrics displayed. This provides a good overview of all services your application uses.

Alerting

The next stage is to identify the key metrics for comparison and set up alerts for under- and over-performing services. Here are some recommended metrics to alarm on for a number of AWS services.

Alerts can be configured manually or via infrastructure as code tools such as the AWS Serverless Application Model, AWS CloudFormation, or third-party tools.

To configure a manual alert for Lambda function errors using CloudWatch Alarms:

  1. I open the CloudWatch console, select Alarms, and select Create Alarm.
  2. I choose Select metric and, from AWS Namespaces, select Lambda, then Across All Functions, choose the Errors metric, and select Select metric.
  3. I change the Statistic to Sum and the Period to 1 minute.
  4. Under Conditions, I select a Static threshold Greater than 1 and select Next.

Alarms can also be created using anomaly detection rather than static values if there is a discernible pattern or trend. Anomaly detection looks at past metric data and uses machine learning to create a model of expected values. Alerts can then be configured if they fall outside this band of “normal” values. I use a Static threshold for this alarm.

  5. For the notification, I set the alarm to notify an existing SNS topic with my email address, then choose Next.
  6. I enter a descriptive alarm name such as serverlessairline-lambda-prod-errors > 1, select Next, and choose Create alarm.

I have now manually set up an alarm.
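The same alarm can also be created programmatically. Here is a minimal sketch using the AWS SDK for Python; the SNS topic ARN is illustrative.

import boto3

# Alarm when the account-wide Lambda Errors metric sums above 1
# in a one-minute period.
boto3.client("cloudwatch").put_metric_alarm(
    AlarmName="serverlessairline-lambda-prod-errors > 1",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)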

Use CloudWatch composite alarms to combine multiple alarms to reduce noise and focus on critical issues. For example, a single alarm could trigger if there are both Lambda function errors as well as high Lambda concurrent executions.
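Here is a sketch of that composite alarm using the AWS SDK for Python; both underlying alarms must already exist, and the concurrency alarm name is illustrative.

import boto3

# Fire only when both the errors alarm and the concurrency alarm
# are in the ALARM state at the same time.
boto3.client("cloudwatch").put_composite_alarm(
    AlarmName="serverlessairline-lambda-critical",
    AlarmRule=(
        'ALARM("serverlessairline-lambda-prod-errors > 1") AND '
        'ALARM("serverlessairline-lambda-prod-concurrency")'
    ),
)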

It is simpler and more scalable to include alerting within infrastructure as code. Here is an example of alerting programmatically using CloudFormation.

I view the out-of-the-box standard metrics, and in this example, manually create an alarm for Lambda function errors.

Improvement plan summary:

  1. Understand what metrics and dimensions each managed service used provides.
  2. Configure alerts on relevant metrics for when services are unhealthy.

Good practice: Use structured and centralized logging

Central logging provides a single place to search and analyze logs. Structured logging means selecting a consistent log format and content structure to simplify querying across multiple components.

To identify a business transaction across components, such as a particular flight booking, log operational information from upstream and downstream services. Add information such as customer_id along with business outcomes such as order=accepted or order=confirmed. Make sure you are not logging any sensitive or personally identifiable data.

Use JSON as your logging output format. Log multiple fields in a single object or dictionary rather than many one-line messages for simpler searching.

Here is an example of a structured logging format.
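As an illustrative sketch from Python (the field names and values are assumptions, not the airline application's actual schema):

import json

# In Lambda, anything printed to stdout is captured by CloudWatch Logs.
print(json.dumps({
    "level": "INFO",
    "service": "booking",
    "customer_id": "cust-1234",
    "booking_id": "bk-5678",
    "order": "confirmed",
    "message": "Booking confirmed",
}))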

The airline booking component, which is written in Python, currently uses a shared library with a separate log processing stack.

Embedded Metrics Format is a simpler mechanism to replace the shared library and use structured logging. CloudWatch Embedded Metrics adds environmental metadata such as Lambda Function version and also automatically extracts custom metrics so you can visualize and alarm on them. There are open-source client libraries available for Node.js and Python.

I then add embedded metrics to the individual confirm booking module with the following steps:

  1. I install the aws-embedded-metrics library using the instructions.
  2. In the function init code, I import the module and create a metric_scope with the following code

from aws_embedded_metrics import metric_scope
@metric_scope

  3. In the function handler, I log the generated bookingReference with the following code.

metrics.set_property("BookingReference", ret["bookingReference"])

In this example I also log the entire incoming event details.

metrics.set_property("event", event)

It is best practice to only log what is required, to avoid unnecessary costs. Ensure the event does not contain any sensitive or personally identifiable data, as logs are available to anyone who has access to them.

To avoid duplicate logging in this example airline application, which adds cost, I remove the existing shared library logger.*() lines.

When I make a booking, the CloudWatch log message is in structured JSON format. It contains the properties I set, event and BookingReference, as well as function metadata.

I can then search for all log activity related to a specific booking across multiple functions with booking_id. I can track customer activity across multiple bookings using customer_id.

Logging is often created as a shared library resource which all functions reference. Another option is using Lambda Layers, which lets functions import additional code such as external libraries. Multiple functions can share this code.

Improvement plan summary:

  1. Log request identifiers from downstream services, component name, component runtime information, unique correlation identifiers, and information that helps identify a business transaction.
  2. Use JSON as the logging output. Prefer logging entire objects/dictionaries rather than many one-line messages. Mask or remove sensitive data when logging.
  3. Keep logged debugging information to a minimum, as it can incur costs and increase the noise-to-signal ratio.

Conclusion

Evaluating serverless application health helps you understand which services should be optimized to improve your customer's experience. I cover out-of-the-box metrics and alerts, as well as structured and centralized logging.

This well-architected question will be continued in an upcoming post where I look at custom metrics and distributed tracing.

Building well-architected serverless applications: Introduction

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-introduction/

Customers building production applications in the cloud are looking to build and operate their applications following best practices. Best practices are useful throughout the software development lifecycle and help to answer the question “am I well-architected?”

Serverless technologies provide a solid foundation for building well-architected applications with a goal to reduce and minimize the impact of issues that can happen. What does it mean to be “well architected”?

In 2015, AWS released the AWS Well-Architected Framework.  It’s a structured way to compare applications against AWS architectural best practices with advice on how to improve. This formalized framework publicly documents the approach our solutions architects use when conducting architectural reviews. The Well-Architected Framework can be used by customers and partners to evaluate applications. It’s currently based on five pillars:

  • Operational Excellence
  • Security
  • Reliability
  • Performance Efficiency
  • Cost Optimization

In 2017, AWS extended the framework's general advice to be more application domain specific with the concept of a "lens". A lens adds additional questions for specific technology areas, which focus on what is different from the generic advice. Today, there are three technology domain lenses:

  • Serverless Applications
  • High Performance Computing (HPC)
  • IoT (Internet of Things)

In 2018, AWS added the AWS Well-Architected Tool within the AWS Management Console. The tool allows you to define a Workload and answer a number of questions based on each of the five pillars. Each question has context to explain what the question means and why it is important, and then provides a number of best practices.

The tool is used to assess risks, and find opportunities for improvement for workloads. It can also provide a broader view across multiple applications for a central team. Each question has a number of best practices to follow, categorized as high or medium risk. This can help you decide where to focus.

Serverless Lens

In February, we added functionality to apply lenses to the Well-Architected Tool. The first one is the Serverless Lens. This includes nine questions, each with additional best practice recommendations to follow.

Applying best practices to production applications may take some more time and effort. The Well-Architected Tool and Serverless Lens are available to help and are not intended to be only a one-time check.

We suggest at a minimum reading the whitepapers and being familiar with the questions before the design phase. It’s a good idea to check throughout the application lifecycle; halfway (or sooner), close to launch, and post-launch for each iteration. Although it is never too late to add best practices to your applications, starting early helps reduce the remediation process as well as making it part of the full application lifecycle journey.

The following posts are a multi-part series addressing each of the questions within the Serverless Lens of the Well-Architected Tool.

Series Episodes: Building well-architected serverless applications

  1. Introduction
  2. Operational Excellence: Understanding serverless application health
    1. Out of the box metrics and alerts; structured and centralized logging
    2. Custom metrics and distributed tracing (upcoming)
  3. Operational Excellence – Understanding serverless application health (upcoming)
  4. Operational Excellence – Approaching application lifecycle management (upcoming)
  5. Security – Controlling access to serverless APIs (upcoming)
  6. Security – Managing serverless security boundaries (upcoming)
  7. Security – Implementing application security (upcoming)
  8. Reliability – Regulating inbound request rates (upcoming)
  9. Reliability – Building resiliency into serverless applications (upcoming)
  10. Performance Efficiency – Optimizing your Serverless Application Performance (upcoming)
  11. Cost Optimization – Optimizing your Serverless Application Costs (upcoming)

Example application

For this series, I am looking at an example application, AWS Serverless Airline Booking. This is a complete web application for a fictional airline booking service built using serverless technologies that provide Flight Search, Payment, Booking, and Loyalty Point features.

I show how to use the Well-Architected Tool with the Serverless Lens to help apply serverless best practices to the application.

This web application is the theme of Season 2 of Build on Serverless, a video series on Twitch.tv, and was presented at AWS re:Invent 2019.

You can view this application directly on GitHub or deploy into an AWS account using the Getting Started instructions. This is not required to follow this Building Well-Architected Serverless Applications series but allows you to explore the code.

The application architecture consists of a frontend web application, which interacts with a number of backend microservices to provide the airline booking functionality.

There are four backend services that make up the application:

  • Catalog: Provides flight search.
  • Booking: Provides new and list bookings. Business workflow implemented in Python.
  • Payment: Provides payment authorization, collection, and refund workflows using Stripe. Business workflow implemented in Python.
  • Loyalty: Provides loyalty points for customers including tiers. Implemented in TypeScript.

Once the application is deployed, I can create a booking by searching for flights.

Once I select an available flight, I enter payment details using a test Stripe credit card.

The application authorizes the payment, confirms, and stores the booking and then updates the loyalty points.

Next steps

Building serverless applications using best practices can give you more confidence in the architecture and operations of your workloads.

Using the Well-Architected Tool and Serverless Lens can give you more visibility into your applications. They can help pinpoint and rank areas to improve. Well-Architected works best when integrated into your application lifecycle processes.

Continue learning in the next post, which dives into the first Well-Architected Serverless Lens question: Operational Excellence – Understanding Serverless Application Health – part 1.

Five Talent Collaborates with Customers Using the AWS Well-Architected Tool

Post Syndicated from Scott Sprinkel original https://aws.amazon.com/blogs/architecture/five-talent-collaborates-with-customers-using-the-aws-well-architected-tool/

Since its launch at re:Invent 2018, the AWS Well-Architected Tool (AWS W-A Tool) has provided a consistent process for documenting and measuring architecture workloads using the best practices from the AWS Well-Architected Framework. However, sharing workload reports for a collaborative review experience was time-consuming.

Well-Architected Tool

The new workload sharing feature solves these issues by offering a simple way to share workloads with other AWS accounts and AWS Identity and Access Management (IAM) users. Companies can leverage workload sharing to securely and efficiently collaborate and provide feedback about architecture implementation and design without sharing confidential account details through emails and PDFs. Multiple people across multiple organizations can now review a workload simultaneously and provide feedback and input.

Five Talent, a partner on the AWS Partner Network (APN), uses the workload sharing feature for a more collaborative Well-Architected review experience. With workload sharing, the company provides its clients with improved efficiency, transparency, and security.

“The new sharing feature has increased the efficiency across client and partner teams, which decreases the average time to remediate the high risks. By sharing the reviews in the AWS Console, we can protect sensitive customer data while staying informed in real time.” – Ryan Comingdeer, Chief Talent Officer, Five Talent.

Previously, Five Talent defined workloads in its AWS account and asked its clients to submit their workload information in a custom-built webform. Five Talent then generated and sent PDF reports via email. This was problematic for several reasons: Five Talent and its clients couldn’t control who had access to the report PDFs, it had no way to expedite high risk issue (HRI) remediation, and its recommendations could easily get lost in email correspondence. The workload sharing feature solves these problems and builds customer confidence through the ability for multiple people to work on reviews collaboratively.

Five Talent added extra security by customizing its workload sharing access controls based on its customer needs. The company is using the notes sections in the workload for secure and accurate communication. It shares links to documentation that can help clients take initiative and remediate HRIs — enabling quicker remediation and more transparent review cycles. Five Talent also highlights milestones in the AWS W-A Tool, enabling customers to prioritize HRIs without sorting through lengthy PDFs and email threads, which ultimately expedites the review and revision process.

The workload sharing feature has helped Five Talent drive down HRIs without requiring direct access to the AWS account where the workload is defined. This transparency and ability to work simultaneously helps keep all teams accountable while reinforcing the principles of the Well-Architected Framework.

Sign in to the Console and check out the new Shares tab in the AWS Well-Architected Tool, or visit the workload shares documentation to learn more.