All posts by Ben Peven

Best practices for handling EC2 Spot Instance interruptions

Post Syndicated from Ben Peven original https://aws.amazon.com/blogs/compute/best-practices-for-handling-ec2-spot-instance-interruptions/

This post is contributed by Scott Horsfield – Sr. Specialist Solutions Architect, EC2 Spot

Amazon EC2 Spot Instances are spare compute capacity in the AWS Cloud available to you at steep discounts compared to On-Demand Instance prices. The only difference between an On-Demand Instance and a Spot Instance is that a Spot Instance can be interrupted by Amazon EC2 with two minutes of notification when EC2 needs the capacity back. Fortunately, there is a lot of spare capacity available, and handling interruptions in order to build resilient workloads with EC2 Spot is simple and straightforward. In this post, I introduce and walk through several best practices that help you design your applications to be fault tolerant and resilient to interruptions.

By using EC2 Spot Instances, customers can access additional compute capacity at 70%-90% off On-Demand Instance pricing. This allows customers to run highly optimized and massively scalable workloads that would not otherwise be possible. These benefits make interruptions an acceptable trade-off for many workloads.

When you follow the best practices, the impact of interruptions is insignificant because interruptions are infrequent and don’t affect the availability of your application. At the time of writing this blog post, less than 5% of Spot Instances are interrupted by EC2 before being terminated intentionally by a customer, because they are automatically handled through integrations with AWS services.

Many applications can run on EC2 Spot Instances with no modifications, especially applications that already adhere to modern software development best practices such as microservice architecture. With microservice architectures, individual components are stateless, scalable, and fault tolerant by design. Other applications may require some additional automation to properly handle being interrupted. Examples of these applications include those that must gracefully remove themselves from a cluster before termination to avoid impact to availability or performance of the workload.

In this post, I cover best practices around handling interruptions so that you too can access the massive scale and steep discounts that Spot Instances can provide.

Architect for fault tolerance and reliability

Ensuring your applications are architected to follow modern software development best practices is the first step in successfully adopting Spot Instances. Practices such as graceful degradation, externalizing state, and microservice architecture already embody the principles that make your workloads resilient to Spot interruptions.

A great place to start when designing new workloads, or evaluating existing workloads for fault tolerance and reliability, is the Well-Architected Framework. The Well-Architected Framework is designed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications. Based on five pillars — Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization — the Framework provides a consistent approach for customers to evaluate architectures, and implement designs that will scale over time.

The Well-Architected Tool is a free tool available in the AWS Management Console. Using this tool, you can create self-assessments to identify and correct gaps in your current architecture that might affect your application's tolerance of Spot interruptions.

Use Spot integrated services

Some AWS services that you might already use, such as Amazon ECS, Amazon EKS, AWS Batch, and AWS Elastic Beanstalk, have built-in support for managing the lifecycle of EC2 Spot Instances. These services do so through integration with EC2 Auto Scaling. This service allows you to build fleets of compute using On-Demand Instances and Spot Instances with a mixture of instance types and launch strategies. Through these services, tasks such as replacing interrupted instances are handled automatically for you. For many fault-tolerant workloads, simply replacing the instance upon interruption is enough to ensure the reliability of your service. For advanced use cases, EC2 Auto Scaling groups provide notifications, auto-replacement of interrupted instances, lifecycle hooks, and weights, giving you more control over the composition and lifecycle of your compute infrastructure.
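
As an illustration of the lifecycle hooks mentioned above, the following is a minimal boto3 sketch (the group and hook names are hypothetical) that registers a termination lifecycle hook giving your automation up to two minutes to drain an instance before the group proceeds with termination:

import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical group and hook names; replace with your own.
autoscaling.put_lifecycle_hook(
    AutoScalingGroupName="my-spot-asg",
    LifecycleHookName="drain-before-terminate",
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=120,       # seconds available to your drain automation
    DefaultResult="CONTINUE",   # proceed with termination if no response arrives
)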

Spot-integrated services automate processes for handling interruptions. This allows you to stay focused on building new features and capabilities, and avoid the additional cost that custom automation may accrue over time.

Most examples in this post use EC2 Auto Scaling groups to demonstrate best practices because of the built-in integration with other AWS services. Also, Auto Scaling brings flexibility when building scalable fault-tolerant applications with EC2 Spot Instances. However, these same best practices can also be applied to other services such as Amazon EMR, EC2 fleet, and your own custom automation and frameworks.

Choose the right Spot Allocation Strategy

It is a best practice to use the capacity-optimized Spot Allocation Strategy when configuring your EC2 Auto Scaling group. This allows Auto Scaling groups to launch instances from Spot Instance pools with the most available capacity. Since Spot Instances can be interrupted when EC2 needs the capacity back, launching instances optimized for available capacity is a key best practice for reducing the possibility of interruptions.

You can simply select the Spot allocation strategy when creating new Auto Scaling groups, or modifying an existing Auto Scaling group from the console.

capacity-optimized allocation

Choose instance types across multiple instance sizes, families, and Availability Zones

It is important that the Auto Scaling group has a diverse set of options to choose from so that services launch instances optimally based on capacity. You do this by configuring the Auto Scaling group to launch instances of multiple sizes, and families, across multiple Availability Zones. Each instance type of a particular size, family, and Availability Zone in each Region is a separate Spot capacity pool. When you provide the Auto Scaling group and the capacity-optimized Spot Allocation Strategy a diverse set of Spot capacity pools, your instances are launched from the deepest pools available.

When you create a new Auto Scaling group, or modify an existing Auto Scaling group, you can specify a primary instance type and secondary instance types. The Auto Scaling group also provides recommended options. The following image shows what this configuration looks like in the console.

instance type selection

When using the recommended options, the Auto Scaling group is automatically configured with a diverse list of instance types across multiple instance families, generations, or sizes. We recommend leaving as many of these instance types in place as possible.
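
If you manage your groups programmatically instead of through the console, the same configuration can be expressed through the CreateAutoScalingGroup API. The following is a minimal boto3 sketch, with a hypothetical launch template name, subnet IDs, and instance type list, that combines a diversified set of instance types with the capacity-optimized allocation strategy:

import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical launch template, subnets, and instance types for illustration.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-app-spot-asg",
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=4,
    VPCZoneIdentifier="subnet-aaaa,subnet-bbbb,subnet-cccc",  # span multiple AZs
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "my-launch-template",
                "Version": "$Latest",
            },
            # Diversify across sizes, families, and generations.
            "Overrides": [
                {"InstanceType": "m5.xlarge"},
                {"InstanceType": "m5a.xlarge"},
                {"InstanceType": "m4.xlarge"},
                {"InstanceType": "m5.2xlarge"},
                {"InstanceType": "m5a.2xlarge"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 0,
            "OnDemandPercentageAboveBaseCapacity": 0,
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)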

Handle interruption notices

For many workloads, the replacement of interrupted instances from a diverse set of instance choices is enough to maintain the reliability of your application. In other cases, you can gracefully decommission an application on an instance that is being interrupted. You can do this by knowing that an instance is going to be interrupted and responding through automation to react to the interruption. The good news is, there are several ways you can capture an interruption warning, which is published two minutes before EC2 reclaims the instance.

Inside an instance

The Instance Metadata Service is a secure endpoint that you can query for information about an instance directly from the instance. When a Spot Instance interruption occurs, you can retrieve data about the interruption through this service. This can be useful if you must perform an action on the instance before the instance is terminated, such as gracefully stopping a process, and blocking further processing from a queue. Keep in mind that any actions you automate must be completed within two minutes.

To query this service, first retrieve an access token, and then pass this token to the Instance Metadata Service to authenticate your request. Information about interruptions can be accessed through http://169.254.169.254/latest/meta-data/spot/instance-action. This URI returns a 404 response code when the instance is not marked for interruption. The following code demonstrates how you could query the Instance Metadata Service to detect an interruption.

TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/spot/instance-action

If the instance is marked for interruption, you receive a 200 response code. You also receive a JSON formatted response that includes the action that is taken upon interruption (terminate, stop, or hibernate) and the time when that action will be taken (i.e., when your two-minute warning period expires).

{"action": "terminate", "time": "2020-05-18T08:22:00Z"}

The following sample script demonstrates this pattern.

#!/bin/bash

TOKEN=`curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"`

while sleep 5; do

    HTTP_CODE=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" -s -w %{http_code} -o /dev/null http://169.254.169.254/latest/meta-data/spot/instance-action)

    if [[ "$HTTP_CODE" -eq 401 ]] ; then
        echo 'Refreshing Authentication Token'
        TOKEN=`curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 30"`
    elif [[ "$HTTP_CODE" -eq 200 ]] ; then
        # Insert Your Code to Handle Interruption Here
    else
        echo 'Not Interrupted'
    fi

done

Developing automation with the Amazon EC2 Metadata Mock

When developing automation for handling Spot Interruptions within an EC2 Instance, it’s useful to mock Spot Interruptions to test your automation. The Amazon EC2 Metadata Mock (AEMM) project allows for running a mock endpoint of the EC2 Instance Metadata Service, and simulating Spot interruptions. You can use AEMM to serve a mock endpoint locally as you develop your automation, or within your test environment to support automated testing.

1. Download the latest Amazon EC2 Metadata Mock binary from the project repository.

2. Run the Amazon EC2 Metadata Mock configured to mock Spot Interruptions.

amazon-ec2-metadata-mock spotitn --instance-action terminate --imdsv2 

3. Test the mock endpoint with curl.

TOKEN=`curl -X PUT "http://localhost:1338/latest/api/token" -H "X-aws-ec2-  metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" –v http://localhost:1338/latest/meta-data/spot/instance-action

With default configuration, you receive a response for a simulated Spot Interruption with a time set to two minutes after the request was received.

{"action": "terminate", "time": "2020-05-13T08:22:00Z"}

Outside an instance

When a Spot Interruption occurs, a Spot instance interruption notice event is generated. You can create rules using Amazon CloudWatch Events or Amazon EventBridge to capture these events, and trigger a response such as invoking a Lambda Function. This can be useful if you need to take action outside of the instance to respond to an interruption, such as graceful removal of an interrupted instance from a load balancer to allow in-flight requests to complete, or draining containers running on the instance.

The generated event contains useful information such as the instance that is interrupted, in addition to the action (terminate, stop, hibernate) that is taken when that instance is interrupted. The following example event demonstrates a typical EC2 Spot Instance Interruption Warning.

{
    "version": "0",
    "id": "12345678-1234-1234-1234-123456789012",
    "detail-type": "EC2 Spot Instance Interruption Warning",
    "source": "aws.ec2",
    "account": "123456789012",
    "time": "yyyy-mm-ddThh:mm:ssZ",
    "region": "us-east-2",
    "resources": ["arn:aws:ec2:us-east-2:123456789012:instance/i-1234567890abcdef0"],
    "detail": {
        "instance-id": "i-1234567890abcdef0",
        "instance-action": "action"
    }
}

Capturing this event is as simple as creating a new event rule that matches the pattern of the event, where the source is aws.ec2 and the detail-type is EC2 Spot Instance Interruption Warning.

aws events put-rule --name "EC2SpotInterruptionWarning" \
    --event-pattern "{\"source\":[\"aws.ec2\"],\"detail-type\":[\"EC2 Spot Instance Interruption Warning\"]}" \
    --role-arn "arn:aws:iam::123456789012:role/MyRoleForThisRule"
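
The rule's target is typically an AWS Lambda function. The following is a minimal sketch of such a handler, assuming (hypothetically) that the interrupted instance should be deregistered from an Application Load Balancer target group so that in-flight requests can drain:

import os
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical target group ARN, supplied through the function's environment.
TARGET_GROUP_ARN = os.environ["TARGET_GROUP_ARN"]

def handler(event, context):
    # The interrupted instance ID is carried in the event detail.
    instance_id = event["detail"]["instance-id"]

    # Deregistering starts connection draining so in-flight requests can complete.
    elbv2.deregister_targets(
        TargetGroupArn=TARGET_GROUP_ARN,
        Targets=[{"Id": instance_id}],
    )
    return {"deregistered": instance_id}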

When responding to these events, it’s common to retrieve additional details about the instance that is being interrupted, such as Tags. When triggering additional API calls in response to Spot Interruption Warnings, keep in mind that AWS APIs implement Request Throttling. So, ensure that you are implementing exponential backoffs and retries as needed and using paginated API calls.

For example, if multiple EC2 Spot Instances were interrupted, you could aggregate the instance-ids by temporarily storing them in DynamoDB and then combine the instance-ids into a single DescribeInstances API call. This allows you to retrieve details about multiple instances rather than implementing individual DescribeInstances API calls that may exceed API limits and result in throttling.

The following script demonstrates describing multiple EC2 Instances with a reusable paginator that can accept API-specific arguments. Additional examples can be found here.

import boto3

ec2 = boto3.client('ec2')

def paginate(method, **kwargs):
    client = method.__self__
    paginator = client.get_paginator(method.__name__)
    for page in paginator.paginate(**kwargs).result_key_iters():
        for item in page:
            yield item

def describe_instances(instance_ids):
    described_instances = []

    response = paginate(ec2.describe_instances, InstanceIds=instance_ids)

    for item in response:
        described_instances.append(item)

    return described_instances

if __name__ == "__main__":

    instance_ids = ['instance1','instance2', '...']

    print(describe_instances(instance_ids))

Container services

When running containerized workloads, it’s common to want to let the container orchestrator know when a node is going to be interrupted. This allows for the node to be marked so that the orchestrator knows to stop placing new containers on an interrupted node, and drain running containers off of the interrupted node. Amazon EKS and Amazon ECS are two popular services that manage containers on AWS, and each have simple integrations that allow for handling interruptions.

Amazon EKS/Kubernetes

With Kubernetes workloads, including self-managed clusters and those running on Amazon EKS, use the AWS maintained AWS Node Termination Handler to monitor for Spot Interruptions and make requests to the Kubernetes API to mark the node as non-schedulable. This project runs as a Daemonset on your Kubernetes nodes. In addition to handling Spot Interruptions, it can also be configured to handle Scheduled Maintenance Events.

You can use kubectl to apply the termination handler with default options to your cluster with the following command.

kubectl apply -f https://github.com/aws/aws-node-termination-handler/releases/download/v1.5.0/all-resources.yaml

Amazon ECS

With Amazon ECS workloads you can enable Spot Instance draining by passing a configuration parameter to the ECS container agent. Once enabled, when a container instance is marked for interruption, ECS receives the Spot Instance interruption notice and places the instance in DRAINING status. This prevents new tasks from being scheduled for placement on the container instance. If there are container instances in the cluster that are available, replacement service tasks are started on those container instances to maintain your desired number of running tasks.

You can enable this setting by configuring the user data on your container instances to apply this setting at boot time.

#!/bin/bash
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_CLUSTER=MyCluster
ECS_ENABLE_SPOT_INSTANCE_DRAINING=true
ECS_CONTAINER_STOP_TIMEOUT=120s
EOF

Inside a Container Running on Amazon ECS

When an ECS Container Instance is interrupted, and the instance is marked as DRAINING, running tasks are stopped on the instance. When these tasks are stopped, a SIGTERM signal is sent to the running task, and ECS waits up to 2 minutes before forcefully stopping the task, resulting in a SIGKILL signal sent to the running container. This 2-minute window is configurable through a stopTimeout container timeout option, or through the ECS Agent Configuration shown in the prior code, giving you flexibility within your container to handle the interruption. Note that setting this value greater than 120 seconds does not prevent your instance from being interrupted after the 2-minute warning, so I recommend setting it to 120 seconds or less.

You can capture the SIGTERM signal within your containerized applications. This allows you to perform actions such as preventing the processing of new work, checkpointing the progress of a batch job, or gracefully exiting the application to complete tasks such as ensuring database connections are properly closed.

The following example Golang application shows how you can capture a SIGTERM signal, perform cleanup actions, and gracefully exit an application running within a container.

package main

import (
    "os"
    "os/signal"
    "syscall"
    "time"
    "log"
)

func cleanup() {
    log.Println("Cleaning Up and Exiting")
    time.Sleep(10 * time.Second)
    os.Exit(0)
}

func do_work() {
    log.Println("Working ...")
    time.Sleep(10 * time.Second)
}

func main() {
    log.Println("Application Started")

    // Use a buffered channel so a signal delivered while the app is busy is not dropped.
    signalChannel := make(chan os.Signal, 1)
    signal.Notify(signalChannel, os.Interrupt, syscall.SIGTERM)

    // Watch for signals in the background while the main loop keeps working.
    go func() {
        for sig := range signalChannel {
            if sig == syscall.SIGTERM {
                log.Println("SIGTERM Captured - Calling Cleanup")
                cleanup()
            }
        }
    }()

    for {
        do_work()
    }
}

EC2 Spot Instances are a great fit for containerized workloads because containers can flexibly run on instances from multiple instance families, generations, and sizes. Whether you’re using Amazon ECS, Amazon EKS, or your own self-hosted orchestrator, use the patterns and tools discussed in this section to handle interruptions. Existing projects such as the AWS Node Termination Handler are thoroughly tested and vetted by multiple customers, and remove the need for investment in your own customized automation.

Implement checkpointing

For some workloads, interruptions can be very costly. Examples include applications that must restart work on a replacement instance, especially when the work must be restarted from the beginning. This is common in batch workloads, sustained load testing scenarios, and machine learning training.

To lower the cost of interruption, investigate patterns for implementing checkpointing within your application. Checkpointing means that as work is completed within your application, progress is persisted externally. So, if work is interrupted, it can be restarted from where it left off rather than from the beginning. This is similar to saving your progress in a video game and restarting from the last save point vs. the start of the level. When working with an interruptible service such as EC2 Spot, resuming progress from the latest checkpoint can significantly reduce the cost of being interrupted, and reduce any delays that interruptions could otherwise incur.

For some workloads, enabling checkpointing may be as simple as setting a configuration option to save progress externally (Amazon S3 is often a great place to store checkpointing data). For example, when running an object detection model training job on Amazon SageMaker with Managed Spot Training, enabling checkpointing is as simple as passing the right configuration to the training job.

The following example demonstrates how to configure a SageMaker Estimator to train an object detection model with checkpointing enabled. You can access the complete example notebook here.

od_model = sagemaker.estimator.Estimator(
    image_name=training_image,
    base_job_name='SPOT-object-detection-checkpoint',
    role=role, 
    train_instance_count=1, 
    train_instance_type='ml.p3.2xlarge',
    train_volume_size = 50,
    train_use_spot_instances=True,
    train_max_wait=3600,
    train_max_run=3600,
    input_mode= 'File',
    checkpoint_s3_uri="s3://{}/{}".format(s3_checkpoint_path, uuid.uuid4()),
    output_path=s3_output_location,
    sagemaker_session=sess)

For other workloads, enabling checkpointing may require extending your custom framework, or a public framework, to persist data externally and then reload that data when an instance is replaced and work resumes.

For example, MXNet, a popular open source library for deep learning, has a mechanism for executing custom callbacks during the execution of deep learning jobs. Custom callbacks can be invoked at the completion of each training epoch, and can perform custom actions such as saving checkpoint data to S3. Your deep-learning training container could then be configured to look for saved checkpoint data when it restarts, load that data locally, and continue training from the latest checkpoint.
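
As a rough illustration (not a complete training script), the following sketch shows an MXNet epoch-end callback that writes a local checkpoint and copies it to a hypothetical S3 bucket. A restarted container could download the newest objects under the same prefix and resume training from them.

import boto3
import mxnet as mx

s3 = boto3.client("s3")

BUCKET = "my-checkpoint-bucket"   # hypothetical bucket name
PREFIX = "model"                  # local checkpoint file prefix

def s3_checkpoint(epoch, symbol, arg_params, aux_params):
    # Epoch-end callback: save a local checkpoint, then copy both files to S3.
    mx.model.save_checkpoint(PREFIX, epoch, symbol, arg_params, aux_params)
    params_file = "%s-%04d.params" % (PREFIX, epoch)
    s3.upload_file(params_file, BUCKET, "checkpoints/%s" % params_file)
    s3.upload_file(PREFIX + "-symbol.json", BUCKET, "checkpoints/%s-symbol.json" % PREFIX)

# Passed to Module.fit so it runs after every epoch, for example:
# module.fit(train_iter, num_epoch=10, epoch_end_callback=s3_checkpoint)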

By enabling checkpointing, you ensure that your workload is resilient to any interruptions that occur. Checkpointing can help to minimize data loss and the impact of an interruption on your workload by saving state and recovering from a saved state. When using existing frameworks, or developing your own framework, look for ways to integrate checkpointing so that your workloads have the flexibility to run on EC2 Spot Instances.

Track the right metrics

The best practices discussed in this blog can help you build reliable and scalable architectures that are resilient to Spot Interruptions. Architecting for fault tolerance is the best way you can ensure that your applications are resilient to interruptions, and is where you should focus most of your energy. Monitoring your services is important because it helps ensure that best practices are being effectively implemented. It’s critical to pick metrics that reflect availability and reliability, and not get caught up tracking metrics that do not directly correlate to service availability and reliability.

Some customers choose to track Spot Interruptions, which can be useful when done for the right reasons, such as evaluating tolerance to interruptions or conducting testing. However, frequency of interruption, or number of interruptions of a particular instance type, are examples of metrics that do not directly reflect the availability or reliability of your applications. Since Spot interruptions can fluctuate dynamically based on overall Spot Instance availability and demand for On-Demand Instances, tracking interruptions often results in misleading conclusions.

Rather than tracking interruptions, track metrics that reflect the true reliability and availability of your service, such as request error rates, response latency, and the number of healthy hosts behind your load balancers.
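
For example, the following is a minimal boto3 sketch (the threshold and the load balancer and target group dimension values are placeholders) of a CloudWatch alarm on target 5XX errors behind an Application Load Balancer:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholder dimension values; use the suffixes of your ALB and target group ARNs.
cloudwatch.put_metric_alarm(
    AlarmName="web-app-target-5xx",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[
        {"Name": "LoadBalancer", "Value": "app/my-alb/1234567890abcdef"},
        {"Name": "TargetGroup", "Value": "targetgroup/my-tg/abcdef1234567890"},
    ],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=5,
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)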

Conclusion

The best practices we’ve discussed, when followed, can help you run your stateless, scalable, and fault tolerant workloads at significant savings with EC2 Spot Instances. Architect your applications to be fault tolerant and resilient to failure by using the Well-Architected Framework. Lean on Spot Integrated Services such as EC2 Auto Scaling, AWS Batch, Amazon ECS, and Amazon EKS to handle the provisioning and replacement of interrupted instances. Be flexible with your instance selections by choosing instance types across multiple families, sizes, and Availability Zones. Use the capacity-optimized allocation strategy to launch instances from the Spot Instance pools with the most available capacity. Lastly, track the right metrics that represent the true availability of your application. These best practices help you safely optimize and scale your workloads with EC2 Spot Instances.

Introducing Instance Refresh for EC2 Auto Scaling

Post Syndicated from Ben Peven original https://aws.amazon.com/blogs/compute/introducing-instance-refresh-for-ec2-auto-scaling/

This post is contributed by Ran Sheinberg – Principal EC2 Spot SA, and Isaac Vallhonrat – Sr. EC2 Spot Specialist SA

Today, we are launching Instance Refresh. This is a new feature in EC2 Auto Scaling that enables automatic deployments of instances in Auto Scaling Groups (ASGs), in order to release new application versions or make infrastructure updates.

Amazon EC2 Auto Scaling is used for a wide variety of workload types and applications. EC2 Auto Scaling helps you maintain application availability through a rich feature set. This feature set includes integration into Elastic Load Balancing, automatically replacing unhealthy instances, balancing instances across Availability Zones, provisioning instances across multiple pricing options and instance types, dynamically adding and removing instances, and more.

Many customers use an immutable infrastructure approach. This approach encourages replacing EC2 instances to update the application or configuration, instead of deploying into EC2 instances that are already running. This can be done by baking code and software in golden Amazon Machine Images (AMIs), and rolling out new EC2 Instances that use the new AMI version. Another common pattern for rolling out application updates is changing the package version that the instance pulls when it boots (via updates to instance user data). Or, keeping that pointer static, and pushing a new version to the code repository or another type of artifact (container, package on Amazon S3) to be fetched by an instance when it boots and gets provisioned.

Until today, EC2 Auto Scaling customers used different methods for replacing EC2 instances inside EC2 Auto Scaling groups when a deployment or operating system update was needed. For example, UpdatePolicy within AWS CloudFormation, create_before_destroy lifecycle in Hashicorp Terraform, using AWS CodeDeploy, or even custom scripts that call the EC2 Auto Scaling API.

Customers told us that they want native deployment functionality built into EC2 Auto Scaling to take away the heavy lifting of custom solutions, or deployments that are initiated from outside of Auto Scaling groups.

Introducing Instance Refresh in EC2 Auto Scaling

You can trigger an Instance Refresh using the EC2 Auto Scaling groups Management Console, or use the new StartInstanceRefresh API via the AWS CLI or any AWS SDK. All you need to do is specify the percentage of healthy instances to keep in the group while the ASG terminates and launches instances, and the warm-up time, which is the time period the ASG waits between the groups of instances it refreshes. If your ASG is using Health Checks, the ASG waits for the instances in the group to be healthy before it continues to the next group of instances.

Instance Refresh in action

To get started with Instance Refresh in the AWS Management Console, click on an existing ASG in the EC2 Auto Scaling Management Console. Then click the Instance refresh tab.

When clicking the Start instance refresh button, I am presented with the following options:

start instance refresh

With the default configuration, ASG works to keep 90% of the instances running and does not proceed to the next group of instances if that percentage is not kept. After each group, ASG waits for the newly launched instances to transition into the healthy state, in addition to the 300-second warm-up time to pass before proceeding to the next group of instances.

I can also initiate the same action from the AWS CLI by using the following code:

aws autoscaling start-instance-refresh --auto-scaling-group-name ASG-Instance-Refresh --preferences MinHealthyPercentage=90,InstanceWarmup=300

After initializing the instance refresh process, I can see ongoing instance refreshes in the console:

initialize instance refresh
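
You can also track progress programmatically with the DescribeInstanceRefreshes API. The following is a minimal boto3 sketch, using the same group name as the CLI example above:

import boto3

autoscaling = boto3.client("autoscaling")

# Returns recent instance refreshes with their status and percentage complete.
response = autoscaling.describe_instance_refreshes(
    AutoScalingGroupName="ASG-Instance-Refresh"
)

for refresh in response["InstanceRefreshes"]:
    print(refresh["InstanceRefreshId"], refresh["Status"], refresh.get("PercentageComplete"))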

The following image demonstrates how an active Instance refresh looks in the EC2 Instances console. Moreover, ASG strives to keep the capacity balanced between Availability Zones by terminating and launching instances in different Availability Zones in each group.

active instance refresh

Automate your workflow with Instance Refresh

You can now use this new functionality to create automations that work for your use-case.

To get started quickly, we created an example solution based on AWS Lambda. Visit the solution page on GitHub and see the deployment instructions.

Here’s an overview of what the solution contains and how it works:

  • An EC2 Auto Scaling group with two instances running
  • An EC2 Image Builder pipeline, set up to build and test an AMI
  • An SNS topic that would get notified when the image build completes
  • A Lambda function that is subscribed to the SNS topic, which gets triggered when the image build completes
  • The Lambda function gets the new AMI ID from the SNS notification, creates a new Launch Template version, and then triggers an Instance Refresh in the ASG, which starts the rolling update of instances.
  • Because you can configure the ASG with LaunchTemplateVersion = $Latest, every new instance that is launched by the Instance Refresh process uses the new AMI from the latest version of the Launch Template.

See the automation flow in the following diagram.

instance refresh automation flow
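
For illustration, here is a minimal boto3 sketch of what such a Lambda function could look like. The launch template and group names are placeholders, and the parsing assumes the new AMI ID appears at outputResources.amis[0].image in the Image Builder notification; refer to the linked solution repository for the tested implementation.

import json
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

LAUNCH_TEMPLATE = "my-launch-template"   # placeholder names
ASG_NAME = "my-asg"

def handler(event, context):
    # Assumes the new AMI ID is at outputResources.amis[0].image in the
    # Image Builder SNS message; adjust to the actual message shape.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    ami_id = message["outputResources"]["amis"][0]["image"]

    # Create a new launch template version pointing at the new AMI.
    ec2.create_launch_template_version(
        LaunchTemplateName=LAUNCH_TEMPLATE,
        SourceVersion="$Latest",
        LaunchTemplateData={"ImageId": ami_id},
    )

    # Start a rolling replacement of instances in the Auto Scaling group.
    refresh = autoscaling.start_instance_refresh(
        AutoScalingGroupName=ASG_NAME,
        Preferences={"MinHealthyPercentage": 90, "InstanceWarmup": 300},
    )
    return refresh["InstanceRefreshId"]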

Conclusion

We hope that the new Instance Refresh functionality in your ASGs allows for a more streamlined approach to launching and updating your application deployments running on EC2. You can now create automations that fit your use case. This allows you to more easily refresh the EC2 instances running in your Auto Scaling groups, when deploying a new version of your application or when you must replace the AMI being used. Visit the user guide to learn more and get started.

Building for Cost optimization and Resilience for EKS with Spot Instances

Post Syndicated from Ben Peven original https://aws.amazon.com/blogs/compute/cost-optimization-and-resilience-eks-with-spot-instances/

This post is contributed by Chris Foote, Sr. EC2 Spot Specialist Solutions Architect

Running your Kubernetes and containerized workloads on Amazon EC2 Spot Instances is a great way to save costs. Kubernetes is a popular open-source container management system that allows you to deploy and manage containerized applications at scale. AWS makes it easy to run Kubernetes with Amazon Elastic Kubernetes Service (EKS), a managed Kubernetes service to run production-grade workloads on AWS. To cost optimize these workloads, run them on Spot Instances. Spot Instances are available at up to a 90% discount compared to On-Demand prices. These instances are best used for various fault-tolerant and instance type flexible applications. Spot Instances and containers are an excellent combination, because containerized applications are often stateless and instance flexible.

In this blog, I illustrate the best practices of using Spot Instances such as diversification, automated interruption handling, and leveraging Auto Scaling groups to acquire capacity. You then adapt these Spot Instance best practices to EKS with the goal of cost optimizing and increasing the resilience of container-based workloads.

Spot Instances Overview

Spot Instances are spare Amazon EC2 capacity that allows customers to save up to 90% over On-Demand prices. Spot capacity is split into pools determined by instance type, Availability Zone (AZ), and AWS Region. The Spot Instance price changes slowly, based on long-term trends in supply and demand of a particular Spot capacity pool, as shown below:

Spot Instance pricing

Prices listed are an example, and may not represent current prices. Spot Instance pricing is illustrated in orange blocks, and On-Demand is illustrated in dark-blue.

When EC2 needs the capacity back, the Spot Instance service arbitrarily sends Spot interruption notifications to instances within the associated Spot capacity pool. This Spot interruption notification lands in both the EC2 instance metadata and Amazon EventBridge. Two minutes after the Spot interruption notification, the instance is reclaimed. You can set up your infrastructure to automate a response to this two-minute notification. Examples include draining containers, draining ELB connections, or post-processing.

Instance flexibility is important when following Spot Instance best practices, because it allows you to provision from many different pools of Spot capacity. Leveraging multiple Spot capacity pools helps reduce interruptions, depending on your defined Spot allocation strategy, and decreases the time to provision capacity. Tapping into multiple Spot capacity pools across instance types and AZs allows you to achieve your desired scale, even for applications that require 500K concurrent cores:

Spot capacity pools = (Availability Zones) * (Instance Types)

If your application is deployed across two AZs and uses only a c5.4xlarge, then you are only using (2 * 1 = 2) two Spot capacity pools. To follow Spot Instance best practices, consider using six AZs and allowing your application to use c5.4xlarge, c5d.4xlarge, c5n.4xlarge, and c4.4xlarge. This gives us (6 * 4 = 24) 24 Spot capacity pools, greatly increasing the stability and resilience of your application.

Auto Scaling groups support deploying applications across multiple instance types, and automatically replace instances if they become unhealthy, or terminated due to Spot interruption. To decrease the chance of interruption, use the capacity-optimized Spot allocation strategy. This automatically launches Spot Instances into the most available pools by looking at real-time capacity data, and identifying which are the most available.

Now that I’ve covered Spot best practices, you can apply them to Kubernetes and build an architecture for EKS with Spot Instances.

Solution architecture

The goals of this architecture are as follows:

  • Automatically scaling the worker nodes of Kubernetes clusters to meet the needs of the application
  • Leveraging Spot Instances to cost-optimize workloads on Kubernetes
  • Adapting Spot Instance best practices (like diversification) to EKS and Cluster Autoscaler

You achieve these goals via the following components:

Component | Role | Details | Deployment Method
Cluster Autoscaler | Scales EC2 instances automatically according to pods running in the cluster | Open Source | A DaemonSet via Helm on On-Demand Instances
EC2 Auto Scaling group | Provisions and maintains EC2 instance capacity | AWS | CloudFormation via eksctl
AWS Node Termination Handler | Detects EC2 Spot interruptions and automatically drains nodes | Open Source | A DaemonSet via Helm on Spot Instances

The architecture deploys the EKS worker nodes over three AZs, and leverages three Auto Scaling groups – two for Spot Instances, and one for On-Demand. The Kubernetes Cluster Autoscaler is deployed on On-Demand worker nodes, and the AWS Node Termination Handler is deployed on all worker nodes.

EKS worker node architecture

Additional detail on Kubernetes interaction with Auto Scaling group:

  • Cluster Autoscaler can be used to control scaling activities by changing the DesiredCapacity of the Auto Scaling group, and directly terminating instances. Auto Scaling groups can be used to find capacity, and automatically replace any instances that become unhealthy, or terminated through Spot Instance interruptions.
  • The Cluster Autoscaler can be provisioned as a Deployment of one pod to an instance of the On-Demand Auto Scaling group. The preceding diagram shows the pod in AZ1; however, this may not necessarily be the case.
  • Each node group maps to a single Auto Scaling group. However, Cluster Autoscaler requires all instances within a node group to share the same number of vCPU and amount of RAM. To adhere to Spot Instance best practices and maximize diversification, you use multiple node groups. Each of these node groups is a mixed-instance Auto Scaling group with capacity-optimized Spot allocation strategy.

Kubernetes Node Groups

Autoscaling in Kubernetes Clusters

There are two common ways to scale Kubernetes clusters:

  1. Horizontal Pod Autoscaler (HPA) scales the pods in deployment or a replica set to meet the demand of the application. Scaling policies are based on observed CPU utilization or custom metrics.
  2. Cluster Autoscaler (CA) is a standalone program that adjusts the size of a Kubernetes cluster to meet the current needs. It increases the size of the cluster when there are pods that failed to schedule on any of the current nodes due to insufficient resources. It attempts to remove underutilized nodes, when its pods can run elsewhere.

When a pod cannot be scheduled due to lack of available resources, Cluster Autoscaler determines that the cluster must scale out and increases the size of the node group. When multiple node groups are used, Cluster Autoscaler chooses one based on the Expander configuration. Currently, the following strategies are supported: random, most-pods, least-waste, and priority.

You use the random placement strategy in this example for the Expander in Cluster Autoscaler. This is the default expander, and arbitrarily chooses a node group when the cluster must scale out. The random expander maximizes your ability to leverage multiple Spot capacity pools. However, you can evaluate the others and may find another more appropriate for your workload.

Atlassian Escalator:

An alternative to Cluster Autoscaler for batch workloads is Escalator. It is designed for large batch or job-based workloads that cannot be force-drained and moved when the cluster needs to scale down.

Auto Scaling group

Following best practices for Spot Instances means deploying a fleet across a diversified set of instance families, sizes, and AZs. An Auto Scaling group is one of the best mechanisms to accomplish this. Auto Scaling groups automatically replace Spot Instances that have been terminated due to a Spot interruption with an instance from another capacity pool.

Auto Scaling groups support launching capacity from multiple instances types, and using multiple purchase options. For the example in this blog post, I’m maintaining a 1:4 vCPU to memory ratio for all instances chosen. For your application, there may be a different set of requirements. The instances chosen for the two Auto Scaling groups are below:

  • 4vCPU / 16GB ASG: m5.xlarge, m5d.xlarge, m5n.xlarge, m5dn.xlarge, m5a.xlarge, m4.xlarge
  • 8vCPU / 32GB ASG: m5.2xlarge, m5d.2xlarge, m5n.2xlarge, m5dn.2xlarge, m5a.2xlarge, m4.2xlarge

In this example I use a total of 12 different instance types and three different AZs, for a total of (12 * 3 = 36) 36 different Spot capacity pools. The Auto Scaling group chooses which instance types to deploy based on the Spot allocation strategy. To minimize the chance of Spot Instance interruptions, you use the capacity-optimized allocation strategy. The capacity-optimized strategy automatically launches Spot Instances into the most available pools by looking at real-time capacity data and predicting which are the most available.

capacity-optimized Spot allocation strategy

Example of capacity-optimized Spot allocation strategy.

For Cluster Autoscaler, other cluster administration/management pods, and stateful workloads that run on EKS worker nodes, you create a third Auto Scaling group using On-Demand Instances. This ensures that Cluster Autoscaler is not impacted by Spot Instance interruptions. In Kubernetes, labels and nodeSelectors can be used to control where pods are placed. You use the nodeSelector to place Cluster Autoscaler on an instance in the On-Demand Auto Scaling group.

Note: The Auto Scaling groups continually work to balance the number of instances in each AZ they are deployed over. This may cause worker nodes to be terminated while the ASG is scaling in the number of instances within an AZ. You can disable this functionality by suspending the AZRebalance process, but this can result in capacity becoming unbalanced across AZs. Another option is running a tool to drain instances upon ASG scale-in, such as the EKS Node Drainer. The code provides an AWS Lambda function that integrates as an Amazon EC2 Auto Scaling Lifecycle Hook. When called, the Lambda function calls the Kubernetes API to cordon the node and evict all evictable pods from the node being terminated. It then waits until all pods are evicted before the Auto Scaling group continues to terminate the EC2 instance.

Spot Instance Interruption Handling

To mitigate the impact of potential Spot Instance interruptions, leverage the AWS Node Termination Handler. The DaemonSet deploys a pod on each Spot Instance to detect the Spot Instance interruption notification, so that it can gracefully terminate any pods running on that node, drain connections from load balancers, and allow the Kubernetes scheduler to reschedule the evicted pods elsewhere on the cluster.

The workflow can be summarized as:

  • Identify that a Spot Instance is about to be interrupted in two minutes.
  • Use the two-minute notification window to gracefully prepare the node for termination.
  • Taint the node and cordon it off to prevent new pods from being placed on it.
  • Drain connections on the running pods.

Consequently:

  • Controllers that manage K8s objects like Deployments and ReplicaSets will understand that one or more pods are not available and create new replicas.
  • Cluster Autoscaler and AWS Auto Scaling group will re-provision capacity as needed.

Walkthrough

Getting started (launch EKS)

First, you use eksctl to create an EKS cluster with the name spotcluster-eksctl in combination with a managed node group. The managed node group will have three On-Demand t3.medium nodes and it will bootstrap with the labels lifecycle=OnDemand and intent=control-apps. Be sure to replace <YOUR REGION> with the region you’ll be launching your cluster into.

eksctl create cluster --version=1.15 --name=spotcluster-eksctl --node-private-networking --managed --nodes=3 --alb-ingress-access --region=<YOUR REGION> --node-type t3.medium --node-labels="lifecycle=OnDemand" --asg-access

This takes approximately 15 minutes. Once cluster creation is complete, test the node connectivity:

kubectl get nodes

Provision the worker nodes

You use eksctl create nodegroup and eksctl configuration files to add the new nodes to the cluster. First, create the configuration file spot_nodegroups.yml. Then, paste the code and replace <YOUR REGION> with the region you launched your EKS cluster in.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
    name: spotcluster-eksctl
    region: <YOUR REGION>
nodeGroups:
    - name: ng-4vcpu-16gb-spot
      minSize: 0
      maxSize: 5
      desiredCapacity: 1
      instancesDistribution:
        instanceTypes: ["m5.xlarge", "m5n.xlarge", "m5d.xlarge", "m5dn.xlarge","m5a.xlarge", "m4.xlarge"] 
        onDemandBaseCapacity: 0
        onDemandPercentageAboveBaseCapacity: 0
        spotAllocationStrategy: capacity-optimized
      labels:
        lifecycle: Ec2Spot
        intent: apps
        aws.amazon.com/spot: "true"
      tags:
        k8s.io/cluster-autoscaler/node-template/label/lifecycle: Ec2Spot
        k8s.io/cluster-autoscaler/node-template/label/intent: apps
      iam:
        withAddonPolicies:
          autoScaler: true
          albIngress: true
    - name: ng-8vcpu-32gb-spot
      minSize: 0
      maxSize: 5
      desiredCapacity: 1
      instancesDistribution:
        instanceTypes: ["m5.2xlarge", "m5n.2xlarge", "m5d.2xlarge", "m5dn.2xlarge","m5a.2xlarge", "m4.2xlarge"] 
        onDemandBaseCapacity: 0
        onDemandPercentageAboveBaseCapacity: 0
        spotAllocationStrategy: capacity-optimized
      labels:
        lifecycle: Ec2Spot
        intent: apps
        aws.amazon.com/spot: "true"
      tags:
        k8s.io/cluster-autoscaler/node-template/label/lifecycle: Ec2Spot
        k8s.io/cluster-autoscaler/node-template/label/intent: apps
      iam:
        withAddonPolicies:
          autoScaler: true
          albIngress: true

This configuration file adds two diversified Spot Instance node groups with 4vCPU/16GB and 8vCPU/32GB instance types. These node groups use the capacity-optimized Spot allocation strategy as described above. Last, you label all nodes created with the instance lifecycle “Ec2Spot” and later use nodeSelectors to guide your application front end to your Spot Instance nodes. To create both node groups, run:

eksctl create nodegroup -f spot_nodegroups.yml

This takes approximately three minutes. Once done, confirm these nodes were added to the cluster:

kubectl get nodes --show-labels --selector=lifecycle=Ec2Spot

Install the Node Termination Handler

You can install the .yaml file from the official GitHub site.

kubectl apply -f https://github.com/aws/aws-node-termination-handler/releases/download/v1.3.1/all-resources.yaml

This installs the Node Termination Handler to both Spot Instance and On-Demand nodes, which is helpful because the handler responds to both EC2 maintenance events and Spot Instance interruptions. However, if you are interested in limiting deployment to just Spot Instance nodes, the site has additional instructions to accomplish this.

Verify the Node Termination Handler is running:

kubectl get daemonsets --all-namespaces

Deploy the Cluster Autoscaler

For additional detail, see the EKS page here. Export the Cluster Autoscaler into a configuration file:

curl -LO https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

Open the file created and edit the cluster-autoscaler container command to replace <YOUR CLUSTER NAME> with your cluster’s name, and add the following options.

--balance-similar-node-groups
--skip-nodes-with-system-pods=false

You also need to change the expander configuration. Search for - --expander= and replace least-waste with random.

Example:

    spec:
        containers:
        - command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --expander=random
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<YOUR CLUSTER NAME>
            - --balance-similar-node-groups
            - --skip-nodes-with-system-pods=false

Save the file and then deploy the Cluster Autoscaler:

kubectl apply -f cluster-autoscaler-autodiscover.yaml

Next, add the cluster-autoscaler.kubernetes.io/safe-to-evict annotation to the deployment with the following command:

kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"

Open the Cluster Autoscaler releases page in a web browser and find the latest Cluster Autoscaler version that matches your cluster’s Kubernetes major and minor version. For example, if your cluster’s Kubernetes version is 1.16 find the latest Cluster Autoscaler release that begins with 1.16.

Set the Cluster Autoscaler image tag to this version using the following command, replacing 1.15.n with your own value. You can replace us with asia or eu:

kubectl -n kube-system set image deployment.apps/cluster-autoscaler cluster-autoscaler=us.gcr.io/k8s-artifacts-prod/autoscaling/cluster-autoscaler:v1.15.n

To view the Cluster Autoscaler logs, use the following command:

kubectl -n kube-system logs -f deployment.apps/cluster-autoscaler

Deploy the sample application

Create a new file web-app.yaml, paste the following specification into it and save the file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-stateless
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        service: nginx
        app: nginx
    spec:
      containers:
      - image: nginx
        name: web-stateless
        resources:
          limits:
            cpu: 1000m
            memory: 1024Mi
          requests:
            cpu: 1000m
            memory: 1024Mi
      nodeSelector:    
        lifecycle: Ec2Spot
--- 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-stateful
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        service: redis
        app: redis
    spec:
      containers:
      - image: redis:3.2-alpine
        name: web-stateful
        resources:
          limits:
            cpu: 1000m
            memory: 1024Mi
          requests:
            cpu: 1000m
            memory: 1024Mi
      nodeSelector:
        lifecycle: OnDemand

This deploys three replicas, which land on one of the Spot Instance node groups due to the nodeSelector choosing lifecycle: Ec2Spot. The “web-stateful” nodes are not fault-tolerant and not appropriate to be deployed on Spot Instances. So, you use nodeSelector again, and instead choose lifecycle: OnDemand. By guiding fault-tolerant pods to Spot Instance nodes, and stateful pods to On-Demand nodes, you can even use this to support multi-tenant clusters.

To deploy the application:

kubectl apply -f web-app.yaml

Confirm that both deployments are running:

kubectl get deployment/web-stateless
kubectl get deployment/web-stateful

Now, scale out the stateless application:

kubectl scale --replicas=30 deployment/web-stateless

Check to see that there are pending pods. Wait approximately 5 minutes, then check again to confirm the pending pods have been scheduled:

kubectl get pods

Clean-Up

Remove the AWS Node Termination Handler:

kubectl delete daemonset aws-node-termination-handler -n kube-system

Remove the two Spot node groups (EC2 Auto Scaling group) that you deployed in the tutorial.

eksctl delete nodegroup ng-4vcpu-16gb-spot --cluster spotcluster-eksctl
eksctl delete nodegroup ng-8vcpu-32gb-spot --cluster spotcluster-eksctl

If you used a new cluster and not your existing cluster, delete the EKS cluster.

eksctl confirms the deletion of the cluster’s CloudFormation stack immediately but the deletion could take up to 15 minutes. You can optionally track it in the CloudFormation Console.

eksctl delete cluster --name spotcluster-eksctl

Conclusion

By following best practices, Kubernetes workloads can be deployed onto Spot Instances, achieving both resilience and cost optimization. Instance and Availability Zone flexibility are the cornerstones of pulling from multiple capacity pools and obtaining the scale your application requires. In addition, there are pre-built tools to handle Spot Instance interruptions, if they do occur. EKS makes this even easier by reducing operational overhead through offering a highly-available managed control-plane and managed node groups. You’re now ready to begin integrating Spot Instances into your Kubernetes clusters to reduce workload cost, and if needed, achieve massive scale.

Running Web Applications on Amazon EC2 Spot Instances

Post Syndicated from Ben Peven original https://aws.amazon.com/blogs/compute/running-web-applications-on-amazon-ec2-spot-instances/

Author: Isaac Vallhonrat, Sr. EC2 Spot Specialist SA

Amazon EC2 Spot Instances allow customers to save up to 90% compared to On-Demand pricing by leveraging spare EC2 capacity. Spot Instances are a perfect fit for fault tolerant workloads that are flexible to run on multiple instance types such as batch jobs, code builds, load tests, containerized workloads, web applications, big data clusters, and High Performance Computing clusters. In this blog post, I walk you through the how-to and best practices for running web applications on Spot Instances, so you can benefit from the scale and savings that they provide.

As Spot Instances are interruptible, your web application should be stateless, fault tolerant, loosely coupled, and rely on external data stores for persistent data, like Amazon ElastiCache, Amazon RDS, Amazon DynamoDB, etc.

Spot Instances recap

While Spot Instances have been around since 2009, recent updates and service integrations have made them easier than ever to integrate with your workloads. Before I get into the details of how to use them to host a web application, here is a brief recap of how Spot Instances work:

First, Spot Instances are a purchasing option within EC2. There is no difference in hardware when compared to On-Demand or Reserved Instances. The only difference is that these instances can be reclaimed with a two-minute notice if EC2 needs the capacity back.

A set of unused EC2 instances with the same instance type, operating system, and Availability Zone is a Spot capacity pool. Spot Instance pricing is set by Amazon EC2 and adjusts gradually based on long-term trends in supply and demand of EC2 instances in each pool. You can expect Spot pricing to be stable over time, meaning no sudden spikes or swings. You can view historical pricing data for the last three months in both the EC2 console and via the API. The following image is an example of the pricing history for m5.xlarge instances in the N. Virginia (us-east-1).

spot instance pricing history

Figure 1. Spot Instance pricing history for an m5.xlarge Linux/UNIX instance in us-east-1. Price changes independently for each Availability Zone and in small increments over time based on long term supply and demand.
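
If you prefer to pull this data programmatically, the following is a minimal boto3 sketch (the instance type and time window are examples) using the DescribeSpotPriceHistory API:

from datetime import datetime, timedelta

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Last 7 days of Spot price history for m5.xlarge Linux/UNIX, across all AZs.
response = ec2.describe_spot_price_history(
    InstanceTypes=["m5.xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
)

for price in response["SpotPriceHistory"]:
    print(price["AvailabilityZone"], price["SpotPrice"], price["Timestamp"])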

As each capacity pool is independent of the others, I recommend you spread the fleet of instances that power your workload across multiple Availability Zones (AZs) and be flexible to use multiple instance types. This way, you effectively increase the amount of spare capacity you can use to launch Spot Instances and are able to replace interrupted instances with instances from other pools.

The best way to adhere to Spot Instance flexibility best practices and manage your fleet of instances is to use an Amazon EC2 Auto Scaling group. This allows you to combine multiple instance types and purchase options. Once you identify a list of suitable instance types for your workload, you can use Auto Scaling features to launch a combination of On-Demand and Spot Instances from optimal Spot Instance pools in each Availability Zone.

Stateless web applications are a natural fit for Spot Instances: they are fault tolerant, already handle instance terminations through scale-in events in an Auto Scaling group, and commonly run across multiple Availability Zones. It’s also typically easy to implement instance type flexibility for web applications, so they can tap into Spot Instance capacity from multiple pools.

Identifying suitable instance types for your web application

Being instance type flexible is key to being successful with Spot Instances. At the time of this writing, AWS provides more than 270 instance types, so it’s likely that your web application can run on many of them. For example, if your application runs on the m5.xlarge instance type, it can also run on the m5d.xlarge instance type, as it is the same instance type but with local SSDs. Your application is also likely to perform similarly on the r5.xlarge or r5d.xlarge, since they use the same processor family; your application simply runs on an instance with more memory, and you still benefit from the savings Spot Instances provide.

You may be able to qualify additional similarly sized instance types from other families or generations, like the AMD-based variants m5a.xlarge and r5a.xlarge, or the higher bandwidth variants m5n.xlarge and r5n.xlarge. These may present some performance differences compared to other types due to their hardware and/or the virtualization system. However, Application Load Balancer has mechanisms in place to spread the load across your instances according to the number of outstanding requests they are processing, so instances handling longer requests or that have lower processing power are not overloaded. This allows you to acquire capacity from even more Spot capacity pools regardless of hardware differences. Later in the blog post, I dive into the details of the load balancing mechanisms.
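
As a quick reference before that deeper dive: routing on outstanding requests is controlled by a target group attribute. The following minimal boto3 sketch (the target group ARN is a placeholder) switches a target group from round robin to the least outstanding requests algorithm:

import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARN; use your web application's target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-web-tg/abcdef1234567890",
    Attributes=[
        {"Key": "load_balancing.algorithm.type", "Value": "least_outstanding_requests"},
    ],
)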

Configure your Auto Scaling group

Now that you have qualified multiple instance types to run your web application, you need to create an EC2 Auto Scaling group.

The first step is to create a Launch Template where you specify the configuration for your fleet of instances, including the AMI, security groups, tags, user data scripts, and so on. Configure only a single instance type on the template, the one you commonly use to run your web application, and do not mark the Request Spot Instances checkbox. You configure multiple instance types and the Spot Instance settings later, when configuring the Auto Scaling group. Here's my Launch Template:


Figure 2. Launch templates console

Now, go to the EC2 Auto Scaling console to create an Auto Scaling group. In the wizard, select the launch template you created and then click Next. On the second step of the wizard, select Combine purchase options and instance types. You can see the configuration options in the following image.

combine EC2 purchase options

Figure 3. Combine purchase options and instance types configuration.

Here is a detailed look at the configuration details:

  • Optional On-Demand Base: This parameter specifies a baseline of On-Demand Instances within your Auto Scaling group. This is also handy if you have Reserved Instances or Savings Plans to cover your compute baseline, since On-Demand Instances count towards your Savings Plans, and instances matching your reservations are billed at the Reserved Instance rate.
  • On-Demand percentage above base: Once the size of the Auto Scaling group grows beyond the (optional) On-Demand base capacity, this parameter defines the split between On-Demand and Spot Instances for the additional capacity. The default settings, 70% On-Demand and 30% Spot, are a good starting point; you can adjust these percentages as suitable for your workload.
  • Spot Allocation Strategy per Availability Zone: This defines the logic that Auto Scaling uses to select which instance type to launch in each Availability Zone when launching Spot Instances. The possible Spot allocation strategies are:
    • Capacity Optimized: This strategy allocates your instances from the Spot Instance pool with optimal capacity for the number of instances that are launching. This is the default selection and the recommended one, as launching from capacity pools with optimal capacity can lower the chance of interruption. Every time EC2 Auto Scaling scales out, the strategy is evaluated, so as your Auto Scaling group scales out and scales in, your instances are recycled and you make use of Spot Instances in the optimal pools.
    • Lowest price: This strategy allocates your instances from the number (N) of Spot Instance pools that you specify and from the pools with the lowest price per unit at the time of fulfillment.

Next, you choose your instance types from the Instance Types section of the wizard. Here, there is a primary instance type inherited from your Launch Template. The console then automatically populates a list of same size instance types across other families and generations that could also fit your web application requirements. You can add or remove instance types as you see fit.

Figure 4. Selection of instance types

Note: There is an option to combine instance types of different sizes which is very helpful for queue worker nodes and similar workloads. We won’t cover this in this blog post as it is not relevant for web applications. You can read more about this feature here.

Once you select your instance types, your Auto Scaling group uses the configured Spot Instance allocation strategy to decide which Spot Instance types to launch on each Availability Zone. By specifying multiple instance types, EC2 Auto Scaling replenishes capacity from other Spot Instance pools applying your configured allocation strategy if some of your instances get interrupted. This maintains the size of your fleet and keeps your service available.

For the On-Demand part of your Auto Scaling group, the allocation strategy is prioritized. This means that EC2 Auto Scaling launches the primary instance type and only moves to the next types, in the order you selected, in the unlikely event that it cannot launch that capacity. If you have Reserved Instances to cover your compute baseline, set the instance type you reserved as the primary instance type in your Auto Scaling group, so the On-Demand Instances get the reservation discounts. You can learn more about how Reserved Instances are applied here.

After completing this step, complete the Auto Scaling group creation wizard, and you can move on to set up your Application Load Balancer.
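If you prefer to script this setup instead of using the console wizard, the following is a minimal sketch of an equivalent Auto Scaling group created with Python and boto3. The group name, launch template name, subnets, instance types, and capacity numbers are placeholders to adapt to your workload.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",                       # placeholder name
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=4,
    VPCZoneIdentifier="subnet-aaaa,subnet-bbbb,subnet-cccc",  # subnets in multiple AZs
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "web-app-launch-template",  # placeholder
                "Version": "$Latest",
            },
            # Similarly sized instance types from several families and generations
            "Overrides": [
                {"InstanceType": "m5.xlarge"},
                {"InstanceType": "m5d.xlarge"},
                {"InstanceType": "m5a.xlarge"},
                {"InstanceType": "r5.xlarge"},
                {"InstanceType": "r5d.xlarge"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 1,
            "OnDemandPercentageAboveBaseCapacity": 30,
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)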

Load balancing with Application Load Balancer

To balance the load across the fleet of instances running your web application, you use an Application Load Balancer (ALB). EC2 Auto Scaling groups are fully integrated with ALB and the Target Groups that manage the fleet of instances fronted by the ALB.

Target groups are set up by default with a deregistration delay of 300 seconds. This means that when EC2 Auto Scaling needs to remove an instance from the fleet, it first puts the instance in a draining state, informs the ALB to stop sending new requests to it, and allows the configured time for in-flight requests to complete before the instance is finally deregistered. This way, removing an instance from the fleet is not perceptible to your end users.

As Spot Instances are interrupted with a two-minute warning, you need to adjust this setting to a slightly lower time; recommended values are 90 seconds or less. This allows time for in-flight requests to complete and existing connections to close gracefully before the instance is interrupted.

At the time of this writing, EC2 Auto Scaling is not aware of Spot Instance interruptions and does not trigger draining automatically when a Spot Instance is going to be interrupted. However, you can put the instance in draining state by catching the interruption notice and calling the EC2 Auto Scaling API. I cover this in more detail in the next section.

Target Groups allow you to configure a routing algorithm, which by default is Round Robin. Because with Spot Instances you combine multiple instance types in your fleet, your web application may see some performance differences if the instances use different hardware and, in some cases, a different virtualization platform (Xen for the older generation instances and AWS Nitro for the newer ones). While the impact to response time may be negligible to the end user, you need to ensure your instances handle the load fairly, accounting for the processing power differences of each instance type.

With the Least Outstanding Requests load balancing algorithm, instead of distributing the requests across your fleet in a round-robin fashion, as a new request comes in, the load balancer sends it to the target with the least number of outstanding requests. This way backend instances with lower processing capabilities or processing long-standing requests are not burdened with more requests and the load is spread evenly across targets. This also helps the new targets effectively take load from overloaded targets.

These settings can be configured in the Target Groups section of the EC2 console by editing the attributes of your target group.

Figure 5. ALB Target group attribute configuration.
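You can also set these attributes programmatically. The following minimal sketch uses Python and boto3; the target group ARN is a placeholder.

import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-app/0123456789abcdef",  # placeholder
    Attributes=[
        # Lower the deregistration delay below the two-minute interruption notice
        {"Key": "deregistration_delay.timeout_seconds", "Value": "90"},
        # Switch from round robin to Least Outstanding Requests
        {"Key": "load_balancing.algorithm.type", "Value": "least_outstanding_requests"},
    ],
)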

Your architecture will look like the following image.


Figure 6. Stateless web application hosted on a combination of On-Demand and Spot Instances

Leveraging Instance Termination Notices for graceful termination

When a Spot Instance is going to be interrupted, EC2 triggers a Spot Instance interruption notice that is presented via the EC2 metadata endpoint on the instance. It can also be caught by an Amazon EventBridge rule, which can, for example, trigger an AWS Lambda function. You can find the specific documentation here.

By catching the interruption notice, you can take actions to gracefully take the instance out of the fleet without impacting your end users: for example, stop receiving new requests from the load balancer, allow in-flight requests to finish, and launch replacement capacity.
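On the instance itself, you can poll the instance metadata for the interruption notice. Here is a minimal sketch in Python; the documented endpoint returns a 404 until an interruption is scheduled. If your instances enforce IMDSv2, you also need to request a session token first, which this sketch omits.

import json
import time
import urllib.error
import urllib.request

INSTANCE_ACTION_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def get_interruption_notice():
    """Return the interruption notice as a dict, or None if none is scheduled."""
    try:
        with urllib.request.urlopen(INSTANCE_ACTION_URL, timeout=2) as response:
            return json.loads(response.read())
    except urllib.error.HTTPError:
        return None  # a 404 means no interruption has been scheduled yet

while True:
    notice = get_interruption_notice()
    if notice:
        print("Interruption scheduled:", notice.get("action"), "at", notice.get("time"))
        # Trigger graceful shutdown logic here
        break
    time.sleep(5)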

Below is a sample Spot Instance Interruption Notice caught by Amazon EventBridge:

{
  "version": "0",
  "id": "72d6566c-e6bd-117d-5bbc-2779c679abd5",
  "detail-type": "EC2 Spot Instance Interruption Warning",
  "source": "aws.ec2",
  "account": "12345678910",
  "time": "2020-03-06T08:25:40Z",
  "region": "eu-west-1",
  "resources": [
    "arn:aws:ec2:eu-west-1a:instance/i-0cefe48f36d7c281a"
  ],
  "detail": {
    "instance-id": "i-0cefe48f36d7c281a",
    "instance-action": "terminate"
  }
}

Upon intercepting the interruption event, you can leverage the DetachInstances API call of EC2 Auto Scaling. Invoking this API puts the instance in Draining state, which makes the configured Load Balancer stop sending new requests to the instance that is going to be interrupted. The instance remains in this state during the configured deregistration delay on the Target Group to allow time for in-flight requests to complete.


Figure 7. Target Group console showing instance draining.

Also, if you set the ShouldDecrementDesiredCapacity parameter of the API call to False, EC2 Auto Scaling attempts to launch replacement Spot Instance capacity according to your instance type selection and allocation strategy. Below is an example code snippet calling the Auto Scaling API using Python Boto3.

import logging
import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger()
logger.setLevel(logging.INFO)

asgclient = boto3.client('autoscaling')

def detach_instance_from_asg(instance_id, as_group_name):
    try:
        # Detach the instance from the ASG; with ShouldDecrementDesiredCapacity=False,
        # EC2 Auto Scaling launches a replacement instance
        response = asgclient.detach_instances(
            InstanceIds=[instance_id],
            AutoScalingGroupName=as_group_name,
            ShouldDecrementDesiredCapacity=False)
        logger.info(response['Activities'][0]['Cause'])
    except ClientError as e:
        error_message = "Unable to detach instance {id} from AutoScaling Group {asg_name}. ".format(
            id=instance_id, asg_name=as_group_name)
        logger.error(error_message + e.response['Error']['Message'])
        raise e

In some cases, you may also want to take actions at the operating system level when a Spot Instance is going to be interrupted. For example, you may want to gracefully stop your application so it closes open connections to a database, deregister agents running on the instance, or perform other clean-up activities. You can leverage AWS Systems Manager Run Command to invoke commands on the instance that is going to be interrupted to perform these actions.

You may also want to delay the execution of commands to make the most of the two-minute warning. As a first step, you can check the instance termination time in the instance metadata and wait, for example, until 30 seconds before the interruption. Then, gracefully stop your application and issue other termination commands. Note that there are no charges for using Run Command (limits apply), and the API call is asynchronous, so it doesn't keep your Lambda function running.
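As an illustration, here is a minimal sketch of what a Lambda function could do after catching the interruption event: wait until roughly 30 seconds before the interruption time, then send cleanup commands through Run Command. The AWS-RunShellScript document is the standard shell document; the interruption timestamp, instance ID, and the command itself are placeholders you would derive from your event and application.

import time
from datetime import datetime, timezone

import boto3

ssm = boto3.client("ssm")

def run_termination_commands(instance_id, interruption_time_iso):
    """Wait until ~30 seconds before the interruption, then run cleanup commands."""
    interruption_time = datetime.strptime(
        interruption_time_iso, "%Y-%m-%dT%H:%M:%SZ"
    ).replace(tzinfo=timezone.utc)
    wait_seconds = (interruption_time - datetime.now(timezone.utc)).total_seconds() - 30
    if wait_seconds > 0:
        time.sleep(wait_seconds)

    # Example cleanup command; replace with whatever your application needs
    ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["systemctl stop my-web-app"]},
    )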

Note: To use Run Command, you must have the AWS Systems Manager agent running on your instance and meet a set of prerequisites, like configuring an IAM instance profile. You can find more information here.

We provide a sample solution here that handles all the steps outlined above, which you can easily deploy in your AWS account.

The sample solution also allows you to optionally execute commands upon Spot Instance interruption by creating an AWS Systems Manager parameter in Parameter Store specific to each Auto Scaling group with the commands you want to run, so you can easily customize termination commands to the needs of each application. You can find detailed configuration instructions in the README.md file. The high-level architecture of this solution is detailed below:


Figure 8. Serverless architecture to handle Spot Instance interruptions

Tying it all together

Now that I have covered all the relevant features to run web applications on Spot Instances, let's put them all together and summarize:

  • Using mixed instance types and purchase options in EC2 Auto Scaling groups you can configure a combination of On-Demand and Spot Instances. Leverage this feature to configure multiple instance types to run your web application on Spot Instances, so you can leverage unused capacity from multiple Spot capacity pools and replace interrupted instances.
  • Using the capacity-optimized Spot Instance allocation strategy you can launch Spot Instances from your instance type selection based on the most available Spot Instance capacity pools. This reduces the chances of interruptions. Auto Scaling then replenishes Spot Instance capacity from the optimal pools if some of your instances are interrupted as Spot Instance capacity fluctuates.
  • With Application Load Balancer you can adjust the deregistration delay of your Target Group to a slightly lower time than the Spot Instance termination notice time (e.g. 90 seconds), so when an instance is going to be interrupted, you stop sending new requests to it and leverage the notice time to let in-flight requests complete and avoid impact to your end users. You can also configure the Least Outstanding Requests load balancing algorithm in your ALB Target Group so the load is distributed evenly across your fleet accounting for differences in processing power for different instance families.
  • You can leverage Amazon EventBridge, AWS Lambda, and AWS Systems Manager Run Command to take interruption-handling actions, like putting the instance in a draining state, requesting EC2 Auto Scaling to launch a replacement instance, and running commands to gracefully shut down your application or trigger clean-up activities. You can also subscribe an SNS topic or other Lambda functions to the interruption event for alerting, logging, or other purposes.

I hope you found this blog post useful, and if you are not using Spot Instances to run your stateless web applications today, I encourage you to try them to get the cost savings and scale they provide.


Isaac Vallhonrat is a Sr. Specialist Solutions Architect for EC2 Spot at AWS. He helps customers build resilient, fault tolerant and cost-effective architectures with EC2 Spot Instances across multiple workload types like web applications, containers, big data, CI/CD, etc.

Deep dive into Fargate Spot to run your ECS Tasks for up to 70% less

Post Syndicated from Ben Peven original https://aws.amazon.com/blogs/compute/deep-dive-into-fargate-spot-to-run-your-ecs-tasks-for-up-to-70-less/

Author: Pritam Pal, Sr. EC2 Spot Specialist SA

AWS launched AWS Fargate Spot in late 2019 for customers looking for a cost-effective way to run containers. This blog dives deep into how to use ECS Fargate Spot and Fargate Tasks to lower the cost of your workloads. I explain existing concepts like the container stop timeout and catching SIGTERM for graceful shutdown, and introduce you to new Amazon Elastic Container Service (Amazon ECS) concepts like capacity providers and service events.

Product overview

In 2017, AWS launched AWS Fargate for Amazon ECS. Fargate allows you to spend less time managing Amazon EC2 instances and more time building. Fargate Spot is a new purchase option that allows customers to launch Tasks on spare capacity with a steep discount. A Spot task is almost indistinguishable from an On-Demand Task with the following exceptions:

The price (per CPU-hour and GB-hour) of a Spot Task is variable, ranging between 50% and 70% off the price of an On-Demand Task, and a Fargate Spot Task may be interrupted (i.e. stopped) when AWS needs the capacity back.

Fargate Spot runs on the same principle as Amazon EC2 Spot Instances. Your tasks run on spare capacity in the AWS Cloud. If you request to run your Task on Fargate Spot, your Tasks will run when capacity for Fargate Spot is available. As these Tasks run on spare capacity, you receive a two-minute notification when AWS needs capacity back, just like Spot Instances.

In this blog, I walk through how to launch a Fargate Spot Task using the AWS Management Console and command line interface (CLI), how to handle termination notices, what the Service Task Placement Failure Event looks like and other best practices to make sure you are a Fargate Spot champion.

ECS Fargate concepts

Before we go deep, let’s explore some ECS concepts, which we will use for Fargate Spot in this blog post.

StopTask and stopTimeout

Because a Fargate Spot Task may be interrupted with two minutes' notice, you need to make sure your containers exit gracefully. For this, you can use the stopTimeout parameter. When StopTask is called on a task, the equivalent of docker stop is issued to the containers running in the task. If the container handles the SIGTERM signal gracefully and exits within 30 seconds of receiving it, no SIGKILL signal is sent. You use the stopTimeout parameter of the task definition to control this behavior: it is the time duration (in seconds) to wait before the container is forcefully stopped if it doesn't exit normally on its own. For Fargate platform version 1.3.0 or later, the maximum stop timeout value is 120 seconds. If the parameter is not specified, the default value of 30 seconds is used.
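For illustration, the following is a minimal sketch of registering a task definition that sets stopTimeout, using Python and boto3; the family name, image, CPU, and memory values are placeholders.

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="webapp-fargate-task",           # placeholder
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    containerDefinitions=[
        {
            "name": "fargate-app",
            "image": "httpd:2.4",            # placeholder image
            "essential": True,
            # Allow up to 120 seconds after SIGTERM before the container
            # is forcefully stopped (the maximum for Fargate 1.3.0 or later)
            "stopTimeout": 120,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
)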

Capacity providers

Capacity providers are a new way to manage compute capacity for containers. This tool allows the application to define its requirements for how it uses the capacity. With capacity providers, you can define flexible rules for how containerized workloads run on different types of compute capacity, and manage the scaling of the capacity. Capacity providers improve the availability, scalability, and cost of running tasks and services on ECS. As of now, each cluster can have up to six capacity providers and an optional default capacity provider strategy, which determines how the tasks are spread across the capacity providers. To run your tasks, you can either use the default capacity provider strategy or specify one of your own.

AWS Fargate and AWS Fargate Spot capacity providers do not need to be created. They are available to all accounts, and only need to be associated with a cluster to be available for use. When a new cluster is created using the Amazon ECS console, along with the Networking only cluster template, the FARGATE and FARGATE_SPOT capacity providers are associated with the new cluster automatically.

A cluster may contain a mix of FARGATE, FARGATE_SPOT, and Auto Scaling group (ASG) capacity providers; however, at this moment, a capacity provider strategy may only contain either FARGATE or Auto Scaling group capacity providers, but not both.

Now that I've covered ECS Fargate concepts, let's jump into the technical walkthrough.

Launch ECS Fargate Spot Task using AWS Management Console

1. Open the Amazon ECS console

2. From the navigation bar, select the Region to use

3. In the navigation pane, choose Clusters

4. On the Clusters page, choose Create Cluster

5. Create a Networking only Cluster

With this option, you can launch a cluster with a new VPC to use for Fargate tasks. The FARGATE and FARGATE_SPOT capacity providers are automatically associated with the cluster, as shown in the following image.

6. Click on Update Cluster on top right-hand side to set a capacity provider strategy

In this example, I use a combination of FARGATE_SPOT and FARGATE capacity providers. I selected a Weight of 4 for FARGATE_SPOT and 1 for FARGATE. This means that for every five Tasks, four are started on FARGATE_SPOT and one on FARGATE. You can distribute this however you want. More tasks on Fargate Spot means more savings. But, if your workload requires high availability and you are not comfortable with interruptions, start with a ratio that works for you.

ECS update capacity provider strategy

7. Let's create a Task Definition first. Here are some great Task Definitions to start with. Find the Task Definitions link in the left navigation panel, click Create new Task Definition, choose the Fargate launch type, and scroll down; near the bottom of the page, find the Configure via JSON button. Delete the pre-populated JSON entry, copy the sample Fargate web app task definition below, and paste it in. Click Save, then Create.

{
    "family": "webapp-fargate-task", 
    "networkMode": "awsvpc", 
    "containerDefinitions": [
        {
            "name": "fargate-app", 
            "image": "httpd:2.4", 
            "portMappings": [
                {
                    "containerPort": 80, 
                    "hostPort": 80, 
                    "protocol": "tcp"
                }
            ], 
            "essential": true, 
            "entryPoint": [
                "sh",
		"-c"
            ], 
            "command": [
                "/bin/sh -c \"echo '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p> </div></body></html>' >  /usr/local/apache2/htdocs/index.html && httpd-foreground\""
            ]
        }
    ], 
    "requiresCompatibilities": [
        "FARGATE"
    ], 
    "cpu": "256", 
    "memory": "512"
}

8. Now that our Task definition is ready, let's run it. Select the Task definition you just created, click Actions, then Run Task. Enter the number of Tasks you want to run; in this case I chose 10. After configuring the VPC and security groups, click Run Task.

Task definition

The Run Task command from the last step starts ten Tasks, out of which eight launch on FARGATE_SPOT and two launch on FARGATE (the 4:1 ratio I set up earlier). You can see which provider a Task used by clicking on any Task and finding the "Capacity provider" value under the Details tab. Currently, there is no option to view all Tasks with a particular capacity provider in the Run Task console.

This Task ran on the FARGATE capacity provider, whereas the earlier mentioned Task ran on FARGATE_SPOT.

In the next section, I explore how to launch Fargate Spot Tasks using the AWS CLI.

Launch ECS Fargate Spot Task using AWS CLI

Create a cluster with capacity providers

When creating a new cluster using the CLI, you must specify capacity providers. In the following example, I specify two capacity providers: FARGATE and FARGATE_SPOT.

Enter the following code to specify the capacity providers:

aws ecs create-cluster \
    --cluster-name "Fargate-Spot-Deep-Dive" \
    --capacity-providers FARGATE FARGATE_SPOT

The output of this command should look like the following:

"cluster": {
        "status": "PROVISIONING", 
        "defaultCapacityProviderStrategy": [], 
        "statistics": [], 
        "capacityProviders": [
            "FARGATE", 
            "FARGATE_SPOT"
        ], 
        ... 

Launching a Task

Once the cluster is created, you can launch a Fargate Spot Task by calling RunTask and providing the Spot capacity provider in the --capacity-provider-strategy field. You also need to specify:

  • A task definition
  • Weight options for capacity providers
  • A network configuration like Subnets, Security Groups
  • How many Tasks you want to run

I defined these specifications in the following code:

aws ecs run-task \
 --capacity-provider-strategy capacityProvider=FARGATE,weight=1 capacityProvider=FARGATE_SPOT,weight=1 \
 --cluster "Fargate-Spot-Deep-Dive" \
 --task-definition task-def-family:revision \
 --network-configuration "awsvpcConfiguration={subnets=[string,string],securityGroups=[string,string],assignPublicIp=string}" \
 --count integer

If you specify a count of 10 and a weight of 1 for both providers, this starts five FARGATE_SPOT and five FARGATE Tasks.

Creating a service

In the example below, you create a service with FARGATE_SPOT only.

aws ecs create-service \
 --capacity-provider-strategy capacityProvider=FARGATE_SPOT,weight=1 \
 --cluster "Fargate-Spot-Deep-Dive" \
 --service-name FargateService \
 --task-definition task-def-family:revision \
 --network-configuration "awsvpcConfiguration={subnets=[string,string],securityGroups=[string,string],assignPublicIp=string}" \
 --desired-count integer

You can also create a service with a mix of Spot and On-Demand Tasks by calling CreateService and providing both Spot and On-Demand capacity providers in the capacity-provider-strategy field.

The base attribute is an optional field that says there should be at least four On-Demand Tasks (the default base is 0, and you cannot specify more than one capacity provider with a non-zero base). The weight is another optional field: for the six remaining Tasks that are not covered by the base attribute, there is one On-Demand Task for every two Spot Tasks, so four of them run on Fargate Spot and two On-Demand.

aws ecs create-service \
 --cluster "my-cluster" \
 --service-name "my-service" \
 --desired-count 10 \
 --capacity-provider-strategy \
    capacityProvider=FARGATE_SPOT,weight=2,base=0 \
    capacityProvider=FARGATE,weight=1,base=4 \
 ...

Add the Fargate and Fargate Spot capacity providers to an existing cluster

aws ecs put-cluster-capacity-providers \
 --cluster "Fargate-Spot-Deep-Dive" \
 --capacity-providers FARGATE FARGATE_SPOT existing_capacity_provider1 existing_capacity_provider2 \
 --default-capacity-provider-strategy existing_default_capacity_provider_strategy

How to handle Fargate Spot termination notices

By design, Fargate Spot is an interruptible service. When tasks using FARGATE_SPOT are stopped due to a Spot interruption, a two-minute warning is sent before a task is stopped.

The warning is sent as a task state change event to Amazon EventBridge and a SIGTERM signal to the running task. When using Fargate Spot as part of a service, the service scheduler will receive the interruption signal and attempt to launch additional Tasks on Fargate Spot if capacity is available.

To ensure that our containers exit gracefully before the Task stops, the following can be configured:

A stopTimeout value of 120 seconds (2 minutes) or less can be specified in the container definition that the task is using. Specifying a stopTimeout value gives us time between the moment the Task state change event is received and the point at which the container is forcefully stopped.

The SIGTERM signal must be received from within the container to perform any cleanup actions.
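A minimal sketch of a SIGTERM handler inside a Python-based container could look like the following; the cleanup logic is whatever your application needs.

import signal
import sys
import time

def handle_sigterm(signum, frame):
    # Finish in-flight work, close connections, flush logs, then exit cleanly
    print("SIGTERM received, shutting down gracefully")
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)

# Placeholder main loop for the containerized application
while True:
    time.sleep(1)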

The following is a snippet of a Task state change event displaying the stopped reason and stop code for a Fargate Spot interruption.

{
  "version": "0",
  "id": "9bcdac79-b31f-4d3d-9410-fbd727c29fab",
  "detail-type": "ECS Task State Change",
  "source": "aws.ecs",
  "account": "111122223333",
  "resources": [
    "arn:aws:ecs:us-east-1:111122223333:task/b99d40b3-5176-4f71-9a52-9dbd6f1cebef"
  ],
  "detail": {
    "clusterArn": "arn:aws:ecs:us-east-1:111122223333:cluster/default",
    "createdAt": "2016-12-06T16:41:05.702Z",
    "desiredStatus": "STOPPED",
    "lastStatus": "RUNNING",
    "stoppedReason": "Your Spot Task was interrupted.",
    "stopCode": "TerminationNotice",
    "taskArn": "arn:aws:ecs:us-east-1:111122223333:task/b99d40b3-5176-4f71-9a52-9dbd6fEXAMPLE",
    ...
  }
}

Example service Task placement failure event

If FARGATE_SPOT can't place a Task due to capacity constraints, service Task placement failure events are delivered.

In the following example, the task attempted to use the FARGATE_SPOT capacity provider, but the service scheduler was unable to acquire any Fargate Spot capacity.

{
    "version": "0",
    "id": "ddca6449-b258-46c0-8653-e0e3a6d0468b",
    "detail-type": "ECS Service Action",
    "source": "aws.ecs",
    "account": "111122223333",
    "time": "2019-11-19T19:55:38Z",
    "region": "us-west-2",
    "resources": [
        "arn:aws:ecs:us-west-2:111122223333:service/default/servicetest"
    ],
    "detail": {
        "eventType": "ERROR",
        "eventName": "SERVICE_TASK_PLACEMENT_FAILURE",
        "clusterArn": "arn:aws:ecs:us-west-2:111122223333:cluster/default",
        "capacityProviderArns": [
            "arn:aws:ecs:us-west-2:111122223333:capacity-provider/FARGATE_SPOT"
        ],
        "reason": "RESOURCE:FARGATE",
        "createdAt": "2019-11-06T19:09:33.087Z"
    }
}

Amazon EventBridge enables you to automate your AWS services, and respond automatically to system events such as application availability issues or resource changes. Events from AWS services are delivered to EventBridge in near real-time. You can write simple rules to indicate which events are of interest to you and what automated actions to take when an event matches a rule.

More details on how to use Amazon ECS Events can be found here.

Fargate Spot pricing

With AWS Fargate, there are no upfront payments and you only pay for the resources that you use. You pay for the amount of vCPU and memory resources consumed by your containerized applications.

The price per Spot CPU-hour and GB-hour is the same across all Availability Zones and Task configurations; however, the price varies throughout the day. The latest price is available on the Fargate pricing page. Pricing is per second with a one-minute minimum. Duration is calculated from the time you start to download your container image (docker pull) until the Task terminates, rounded up to the nearest second.
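As a purely illustrative calculation (the rates below are placeholders, not actual Fargate Spot prices; check the pricing page for current rates), the cost of a single Task works out like this:

# Illustrative per-Task cost calculation; all rates are placeholders.
vcpu = 0.25                  # Task size: 0.25 vCPU
memory_gb = 0.5              # Task size: 0.5 GB
duration_seconds = 330       # from image pull start to Task stop

spot_vcpu_per_hour = 0.012   # placeholder USD per vCPU-hour
spot_gb_per_hour = 0.0013    # placeholder USD per GB-hour

billed_hours = max(duration_seconds, 60) / 3600  # 1-minute minimum, per-second billing
cost = billed_hours * (vcpu * spot_vcpu_per_hour + memory_gb * spot_gb_per_hour)
print(f"Estimated Task cost: ${cost:.6f}")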

Fargate Spot best practices

As I wrap up, I want to focus on a few best practices about Fargate Spot.

  • Fargate Spot is great for stateless, fault-tolerant workloads, but don't rely solely on Spot Tasks for critical workloads; configure a mix of regular Fargate Tasks as well
  • Applications running on Fargate Spot should be fault-tolerant
  • Handle interruptions gracefully by catching SIGTERM signals

Conclusion

Fargate Spot is a great fit for parallelizable workloads like image rendering, Monte Carlo simulations, and genomic processing. However, customers can also use Fargate Spot for Tasks that run as a part of ECS services such as websites and APIs which require high availability.

ECS Fargate has already made it super easy to run containerized workloads without worrying about setting up and managing infrastructure. Fargate Spot makes it even more affordable for your price-sensitive workloads. With the right mix of FARGATE and FARGATE_SPOT capacity providers, you can get the optimal capacity for your Tasks within budget.


Pritam is a Sr. Specialist Solutions Architect in the EC2 Spot team. For the last 13 years he has evangelized DevOps and Cloud adoption across industries and verticals. He likes to deep dive and find solutions to everyday problems.

Cost Optimize your Jenkins CI/CD pipelines using EC2 Spot Instances

Post Syndicated from Ben Peven original https://aws.amazon.com/blogs/compute/cost-optimize-your-jenkins-ci-cd-pipelines-using-ec2-spot-instances/

Author: Rajesh Kesaraju, Sr. Specialist Solution Architect, EC2 Spot Instances

In this blog post, I go over using Amazon EC2 Spot Instances on continuous integration and continuous deployment (CI/CD) workloads, via the popular open-source automation server Jenkins. I also break down the steps required to adopt Spot Instances into your CI/CD pipelines for cost optimization purposes. In this blog, I explain how to configure your Jenkins environment to achieve significant cost savings by using Spot Instances with the EC2 Fleet Jenkins plugin.

Overview of EC2 Spot, CI/CD, and Jenkins

AWS offers multiple purchasing models for its EC2 instances. This particular blog post focuses on Amazon EC2 Spot Instances, which lets you take advantage of unused EC2 capacity in the AWS Cloud at a steep discount.

You can use Spot Instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized workloads, CI/CD, web servers, high performance computing (HPC), and other test and development workloads.

CI/CD pipelines are familiar to many readers via a popular piece of open-source software called Jenkins. Jenkins’ automation of development, testing and deployment scenarios, courtesy of more than 2000 plugins, plays a key role in many organizations’ software development and delivery ecosystems. Jenkins accelerates software development through multiple stages, including building and documenting, packaging and analytics, staging and deploying, etc.

Lyft began using EC2 Spot Instances for their Jenkins CI pipelines, and discovered they could save up to 90 percent compared to their previous non-Spot EC2 implementations. They moved their entire CI/CD pipeline to EC2 Spot Instances by modifying just four lines of their deployment code.

Solution Overview

For the following tutorial, you need both an AWS account and Jenkins downloaded and installed on your system.

This blog post uses Spot Instances. If your Jenkins server runs on On-Demand Instances, you can easily switch to Spot Instances with EC2-Fleet Plugin. Now, let’s look at how this plugin can be configured to make your Jenkins elastically scale up/down depending on pending jobs, and save significantly on compute costs.

Solution

Create a new EC2 key pair

To access the SSH interfaces of your Jenkins instances, you must have an EC2 key pair. Follow the steps below to create a new one.

1. Log in to your AWS Account;
2. Switch to your preferred Region;
3. Provision a new EC2 key pair:

    1. Go to the EC2 console and click on the key pairs option from the left frame.
    2. Click on the Create key pair button;
    3. Provide key pair name and click on the Create button;
    4. Your web browser should download a .pem file – keep this file as it will be required to access the EC2 instances that you create in this workshop. If you’re using a Windows system, convert the .pem file to a PuTTY .ppk file. If you’re not sure how to do this, instructions are available here.

Create an AWS IAM User for EC2-Fleet Plugin

To control Spot Instances from the EC2-Fleet plugin, you first create an IAM user (with programmatic access) in your AWS account. Then configure an IAM policy for AWS permissions. Use the following code to achieve this step.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "ec2:*",
                "autoscaling:*",
                "iam:ListInstanceProfiles",
                "iam:ListRoles",
                "iam:PassRole"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}

These permissions allow you to configure the plugin, allow programmatic access to AWS resources to create and terminate Spot Instances, and control Auto Scaling Group (ASG) parameters.

In the IAM dashboard, click User and select the Jenkins User you created. Next, click Create access key, and save the Access key ID and Secret access key for use in next steps.

IAM > Users > Jenkins User > Security Credentials > Create Access Key

Create Access Key for Jenkins user fig. 1

Create Access Key for Jenkins user fig. 2

Create an Auto Scaling Group

Auto Scaling groups let your Jenkins EC2-Fleet plugin control Jenkins build agents and scale up or down depending on the job queue. They also replace instances that were terminated due to a demand spike in specific Spot Instance pools.

Documentation on how to create Auto Scaling Group is here.

Set your ASG to diversify your Spot Fleet across multiple Spot pools to increase your chances of getting a Spot Instance for your Jenkins jobs, and set an allocation strategy. I used “capacity-optimized” as the allocation strategy in the following example.

The “capacity-optimized” option allocates Spot Instances from the deepest pools of available spare capacity, which lowers the chance of interruptions. Alternatively, you may choose the “lowest-price” allocation strategy if you have builds that finish quicker, and the cost of re-processing of failed jobs due to interruption isn’t that significant. Learn more about allocation strategies in this blog.

The Jenkins EC2-Fleet plugin overrides and controls ASG’s capacity configuration. So, start with one instance for now. I also cover a scenario that starts with 0 instances to further minimize costs.

A sample ASG configuration looks like the following image. Notice there are 6 instance types across 3 Availability Zones; this means EC2 Spot capacity is provided from 18 (6x3) Spot Instance pools! This configuration increases the likelihood of getting Spot Instances from the deepest Spot pools at a steep discount.

An example ASG configuration Image

Install and Configure an EC2-Fleet plugin in Jenkins

Install the latest version of EC2-Fleet plugin in Jenkins. This plugin launches Spot Instances using ASG or EC2 Spot Fleet where you can run your build jobs. In this blog, I launch EC2 Spot Instances using ASG.

After installing it, you can see it in the Plugin Manager. This blog uses the current version, 2.0.0.

Go to Manage Jenkins > Plugin Manager then install EC2 Fleet Jenkins Plugin

Jenkins Server EC2-Fleet Plugin

In the first part of this solution, you created an AWS user. Now, configure this user in your Jenkins Amazon EC2 Fleet configuration section.

Navigate to Manage Jenkins -> Configure Clouds -> Add a New Cloud -> Amazon EC2 Fleet.

Configure Amazon EC2 Fleet plugin as Cloud Setting

Create a name for your EC2-Fleet plugin configuration. I use Amazon EC2 Spot Fleet. Then configure your AWS Credentials.

Configure ASG in Jenkins EC2-fleet plugin

1. Change the Kind to AWS Credentials;
2. Change the Scope to System (Jenkins and nodes only) – you don't want your builds to have access to these credentials!
3. In the ID (optional) field, enter an ID if you need to reference these credentials from scripts;
4. Fill in the Access key ID and Secret access key fields with the values you saved when you created the Jenkins user earlier, then click Add.
5. Once you are done adding credentials, select the AWS Region corresponding to your ASG.
6. The EC2 Fleet dropdown automatically populates with the ASG that you created earlier.
7. Once the ASG is selected, check your configuration. The following image shows this test:

Testing Jenkins EC2 Fleet Plugin configuration

Configure Launcher

1. Change the Kind to SSH Username with private key;
2. Change the Scope to System (Jenkins and nodes only) – you also don’t want your builds to have access to these credentials;
3. Enter ec2-user as the Username.
4. Select the Enter directly button for the Private Key. Open the .pem file that you downloaded previously, and copy the contents of the file to the Key field, including the BEGIN RSA PRIVATE KEY and END RSA PRIVATE KEY lines.
5. Verify your launcher looks like the image below, and click on the Add button.

Configuring Launcher and providing Jenkins credentials

Once your credentials are added, you can move on to complete the rest of the Launcher configuration.

1. Select the ec2-user option from the Credentials drop-down.
2. Select the “Non verifying Verification Strategy” option from the Host Key Verification Strategy drop-down. Select this option because Spot Instances have a random SSH host fingerprint.

Configuring Jenkins launch agents by ec2-user via SSH

3. Mark the Connect Private check box to ensure that your Jenkins Master always communicates with the Agents via their internal VPC IP addresses (in real-world scenarios, your build agents would likely not be publicly addressable).
4. Change the Label field to spot-agents.
5. Set the Max Idle Minutes Before Scaledown. AWS launched per-second billing in 2017, so there's no need to keep a build agent running for much longer than it's required.
6. Change the Maximum Cluster Size depending on your need. For example, I set the Maximum Cluster Size to 2.

After saving these configurations, your screen should look similar to the following image.

Configuring Launcher with cluster scaling parameters

Configure Number of Executors

Determine the number of executors based on your build requirements, such as how many builds on average can be executed concurrently on each machine given the machine's vCPU and RAM allocations.

If you cram too many executors into one machine, each build's average execution time may increase, which slows down the pipeline.

Once you determine the optimum number of executors per machine, any additional pending jobs are executed on scaled-out machines by auto scaling.
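As a rough, hypothetical sizing heuristic (your own benchmarks should drive the real number), you could start from something like this:

# Rough heuristic for executors per build agent; all numbers are examples.
vcpus = 4                # vCPUs on the agent instance
ram_gb = 16              # RAM on the agent instance
ram_per_build_gb = 4     # observed peak memory of a typical build

executors = max(1, min(vcpus, ram_gb // ram_per_build_gb))
print(f"Configure {executors} executors per agent")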

Some important aspects of cluster size settings

The Jenkins EC2-Fleet plugin's agent settings override the ASG settings, so the "Minimum Cluster Size" and "Maximum Cluster Size" values set here dynamically override the ASG's values.

If you set the minimum cluster size to 0, there won't be any idle servers once there are no pending jobs and the Max Idle Minutes Before Scaledown threshold is reached. In this scenario, when there is a new build request, it takes roughly two to five minutes for new EC2 Spot Instances to start processing after bootstrapping and installing the necessary Jenkins agents.

If your jobs are time-insensitive, this strategy maximizes savings, as you eliminate spending money on idle instances.

Alternatively, you may set the "Minimum Cluster Size" to 1 or 2 so that you always have running instances, for example if you process several builds or tests a day, or your builds take a long time and run on a daily basis.

In this blog, the idea is to cost optimize CI/CD pipelines, so the goal is to avoid idle instances.

Configure Jenkins build Jobs to utilize EC2 Spot Instances

Finally, configure your build job to check "Restrict where this project can be run" and enter "spot-agents" as the label expression. When builds are initiated, they are executed on Spot Instances.

Configuring Jenkins builds to utilize EC2 Spot Instances with Label Expression

At this point, you are ready to run your builds on Spot Instances and save significantly!

Configuring Jenkins server to run on EC2 Spot Instances

Now that you've started saving on your build agents, is it possible to also run the Jenkins server itself on Spot Instances and save there too? Yes! Let's see how this can be done with a few simple techniques.

By running your Jenkins server on Spot, you optimize the compute costs associated with the whole Jenkins CI/CD environment. With a Spot Instance diversification strategy and ASG features, you can move your Jenkins server to an EC2 Spot Instance as well.

There is one slight wrinkle in the above process. Jenkins requires persistent data on a local file system, whereas a Spot Instance cannot be guaranteed to be persistent due to the chance of interruptions. Therefore, to switch your Jenkins server over to Spot, you must first move your Jenkins data to an Amazon Elastic File System (EFS) volume — which your Spot Instance can then access.

Amazon Elastic File System provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. You can read more here.

Here are the steps to mount an EFS volume on an existing Jenkins server and move its content to the Amazon EFS managed store. You can then point the Jenkins server at the EFS mount point for its operations and maintain server state. This way, when a Spot Instance gets interrupted, server state is not lost, and another Spot Instance can pick up where the previous server left off.

Here are the steps to move data from JENKINS_HOME to Amazon EFS:

1. Mount EFS volume:

sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)\
.%FILE-SYSTEM-ID%.efs.<AWS Region>.amazonaws.com:/ /mnt

2. Shut down the Jenkins server.
3. Copy the existing JENKINS_HOME (/var/lib/jenkins) content to EFS:

sudo chown jenkins:jenkins /mnt
sudo cp -rpv /var/lib/jenkins/* /mnt

Once you've moved the contents of JENKINS_HOME to Amazon EFS, it's safe to run the Jenkins server on an EC2 Spot Instance.

Spot Instances can be interrupted by AWS, so you may lose your Jenkins server access momentarily when an interruption occurs.

Since you already externalized the state to Amazon EFS, you don't lose any previous state. When the instance running your Jenkins server gets interrupted, the ASG replenishes a new EC2 Spot Instance within a few minutes to run your Jenkins server.

To get this working automatically, complete the following steps:

1. Ensure the ASG launches instances into the target group pointed to by your Application Load Balancer (ALB).
2. If a Spot Instance is terminated after the two-minute warning, the ASG launches a replacement instance. The new Spot Instance is bootstrapped just as your original Spot Instance was, and then mounts EFS as configured in the user data of the Launch Template.
3. Configure the ASG from the Launch Template (a sample UserData section is below).

#!/bin/bash
# Install all pending updates to the system
yum -y update
# Configure YUM to be able to access official Jenkins RPM packages
wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
# Import the Jenkins repository public key
rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
# Configure YUM to be able to access contributed Maven RPM packages
wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo
# Update the release version in the Maven repository configuration for this mainline release of Amazon Linux
sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo
# Install the Java 8 SDK, Git, Jenkins and Maven
yum -y install java-1.8.0-openjdk java-1.8.0-openjdk-devel git jenkins apache-maven
# Set the default version of java to run out of the Java 8 SDK path (required by Jenkins)
update-alternatives --set java /usr/lib/jvm/jre-1.8.0-openjdk.x86_64/bin/java
update-alternatives --set javac /usr/lib/jvm/java-1.8.0-openjdk.x86_64/bin/javac
# Mount the Jenkins EFS volume at JENKINS_HOME
mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).${EFSJenkinsHomeVolume}.efs.${AWSRegion}.amazonaws.com:/ /var/lib/jenkins
# Start the Jenkins service
service jenkins start

That’s it, you are now ready to run your Jenkins server on EC2 Spot Instances and save up to 90% compared to On-Demand price.

Conclusion

You are now ready to leverage the power and flexibility of EC2 Spot Instances.

With a few modifications to your deployment you can significantly reduce your compute costs or accelerate throughput by accessing 10x compute for the same cost.

For users getting started on their Amazon EC2 Spot Instances, we are here to help. Please also share any questions in the comments section below.

Here is where to begin in the Amazon EC2 Spot Instance console — and start transforming your Jenkins workloads today.


About the author

Rajesh Kesaraju

Rajesh Kesaraju is a Sr. Specialist SA for EC2 Spot with AWS. He helps customers cost optimize their workloads by utilizing EC2 Spot Instances for various types of workloads such as big data, containers, HPC, CI/CD, and stateless applications.