All posts by Chris Barclay

Amazon ECS sessions at re:Invent

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/amazon-ecs-sessions-at-reinvent/

Come learn about containers—from the basics to production topics such as scaling and security—from customers and Amazon ECS subject matter experts at this year’s re:Invent conference. We’re excited to learn from you and hear what you think about our recently launched features. Containers are highlighted at Thursday’s Containers Mini Con at The Mirage:

  • CON301 – Operations Management with Amazon ECS
  • CON302 – Development Workflow with Docker and Amazon ECS
  • CON303 – Introduction to Container Management on AWS
  • CON308 – Service Integration Delivery and Automation Using Amazon ECS
  • CON309 – Running Microservices on Amazon ECS
  • CON310 – Running Batch Jobs on Amazon ECS
  • CON311 – Operations Automation and Infrastructure Management with Amazon ECS
  • CON312 – Deploying Scalable SAP Hybris Clusters using Docker
  • CON313 – Netflix: Container Scheduling, Execution, and Integration with AWS
  • CON316 – State of the Union: Containers
  • CON401 – Amazon ECR Deep Dive on Image Optimization
  • CON402 – Securing Container-Based Applications

There are also two hands-on workshops:

  • CON314 – Workshop: Build a Recommendation Engine on Amazon ECS
  • CON315 – Workshop: Deploy a Swift Web Application on Amazon ECS

There are other breakout sessions that talk about Amazon ECS; two that I’d like to highlight are:

  • GAM401 – Riot Games: Standardizing Application Deployments Using Amazon ECS and Terraform
  • NET203 – From EC2 to ECS: How Capital One uses Application Load Balancer Features to Serve Traffic at Scale

You can also join us for an open Q&A session at the Dev Lounge, watch ECS demos at the Demo Pavilion, and ask us questions in the AWS Booth at re:Invent Central.

We look forward to meeting you at re:Invent 2016!

Running Swift Web Applications with Amazon ECS

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/running-swift-web-applications-with-amazon-ecs/

This is a guest post from Asif Khan about how to run Swift applications on Amazon ECS.

—–

Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns. A goal for Swift is to be the best language for uses ranging from systems programming, to mobile and desktop applications, scaling up to cloud services. As a developer, I am thrilled with the possibility of a homogeneous application stack and being able to leverage the benefits of Swift both on the client and server side. My code becomes more concise, and is more tightly integrated to the iOS environment.

In this post, I provide a walkthrough on building a web application using Swift and deploying it to Amazon ECS with an Ubuntu Linux image and Amazon ECR.

Overview of container deployment

Swift provides an Ubuntu version of the compiler that you can use. You still need a web server, a container strategy, and automated cluster management with automatic scaling for traffic peaks.

There are some decisions to make in your approach to deploy services to the cloud:

  • HTTP server
    Choose an HTTP server that supports Swift. I found Vapor to be the easiest. Vapor is a type-safe web framework for Swift 3.0 that works on iOS, macOS, and Ubuntu, and it makes deploying a Swift application very simple. Vapor comes with a CLI that helps you create new Vapor applications, generate Xcode projects and build them, and deploy your applications to Heroku or Docker. Another Swift web server is Perfect. In this post, I use Vapor as I found it easier to get started with.

Tip: Join the Vapor Slack group; it is super helpful. I got answers even over a long weekend, which was super cool.

  • Container model
    Docker is an open-source technology that allows you to build, run, test, and deploy distributed applications inside software containers. It allows you to package a piece of software in a standardized unit for software development, containing everything the software needs to run: code, runtime, system tools, system libraries, etc. Docker enables you to quickly, reliably, and consistently deploy applications regardless of environment.
    In this post, you’ll use Docker, but if you prefer Heroku, Vapor is compatible with Heroku too.
  • Image repository
    After you choose Docker as the container deployment unit, you need to store your Docker image in a repository to automate the deployment at scale. Amazon ECR is a fully-managed Docker registry and you can employ AWS IAM policies to secure your repositories.
  • Cluster management solution
    Amazon ECS is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure.

With ECS, it is very easy to adopt containers as a building block for your applications (distributed or otherwise) by skipping the need for you to install, operate, and scale your own cluster infrastructure. Using Docker containers within ECS provides flexibility to schedule long-running applications, services, and batch processes. ECS maintains application availability and allows you to scale containers.

To put it all together, you have your Swift web application running in an HTTP server (Vapor), deployed in containers (Docker) with images stored in a secure repository (ECR), and with automated cluster management (ECS) to scale horizontally.

Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions.
  2. Use the region selector in the navigation bar to choose the AWS Region where you want to deploy Swift web applications on AWS.
  3. Create a key pair in your preferred region.

Walkthrough

The following steps are required to set up your first web application written in Swift and deploy it to ECS:

  1. Download and launch an instance of the AWS CloudFormation template. The CloudFormation template installs Swift, Vapor, Docker, and the AWS CLI.
  2. SSH into the instance.
  3. Download the vapor example code
  4. Test the Vapor web application locally.
  5. Enhance the Vapor example code to include a new API.
  6. Push your code to a code repository
  7. Create a Docker image of your code.
  8. Push your image to Amazon ECR.
  9. Deploy your Swift web application to Amazon ECS.

Detailed steps

  1. Download the CloudFormation template and spin up an EC2 instance. The CloudFormation template has Swift, Vapor, Docker, and git installed and configured. To launch an instance, launch the CloudFormation template from here.
  2. SSH into your instance:
    ssh -i <key-pair.pem> ubuntu@<instance-public-IP>
  3. Download the Vapor example code – this code helps deploy the example you are using for your web application:
    git clone https://github.com/awslabs/ecs-swift-sample-app.git
  4. Test the Vapor application locally:
    1. Build a Vapor project:
      cd ~/ecs-swift-sample-app/example && \
      vapor build
    2. Run the Vapor project:
      vapor run serve --port=8080
    3. Validate that server is running (in a new terminal window):
      ssh -i <key-pair.pem> ubuntu@<instance-public-IP> curl localhost:8080
  5. Enhance the Vapor code:
    1. Follow the guide to add a new route to the sample application: https://Vapor.readme.io/docs/hello-world
    2. Test your web application locally:
      vapor run serve --port=8080
      curl http://localhost:8080/hello
  6. Commit your changes and push this change to your GitHub repository:
    git add --all
    git commit -m "<your commit message>"
    git push
  7. Build a new Docker image with your code:
    docker build -t swift-on-ecs \
    --build-arg SWIFT_VERSION=DEVELOPMENT-SNAPSHOT-2016-06-06-a \
    --build-arg REPO_CLONE_URL=<your-repository-clone-URL> \
    ~/ecs-swift-sample-app/example
  8. Upload to ECR: Create an ECR repository and push the image following the steps in Getting Started with Amazon ECR.
  9. Create an ECS cluster and run tasks following the steps in Getting Started with Amazon ECS (a rough CLI sketch of steps 8 and 9 appears after this list):
    1. Be sure to use the full registry/repository:tag naming for your ECR images when creating your task. For example, aws_account_id.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest.
    2. Ensure that you have port forwarding 8080 set up.
  10. You can now go to the container, get the public IP address, and try to access it to see the result.
    1. Open your running task and get the URL.
    2. Open the public URL in a browser.
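
The console steps in the linked guides are authoritative; if you prefer the CLI, the following is a rough sketch of steps 8 and 9. The repository name, cluster name, region, and account ID are placeholders for illustration, not values from the sample project.

# Step 8: create an ECR repository, authenticate Docker, and push the image built in step 7
aws ecr create-repository --repository-name swift-on-ecs --region us-east-1
$(aws ecr get-login --region us-east-1)   # on newer CLI versions, use get-login-password with docker login
docker tag swift-on-ecs <account-id>.dkr.ecr.us-east-1.amazonaws.com/swift-on-ecs:latest
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/swift-on-ecs:latest

# Step 9: create a cluster, register a task definition that points at the ECR image, and run it
aws ecs create-cluster --cluster-name swift-cluster
aws ecs register-task-definition --family swift-on-ecs \
  --container-definitions '[{"name": "swift-on-ecs",
    "image": "<account-id>.dkr.ecr.us-east-1.amazonaws.com/swift-on-ecs:latest",
    "memory": 512, "essential": true,
    "portMappings": [{"containerPort": 8080, "hostPort": 8080}]}]'
aws ecs run-task --cluster swift-cluster --task-definition swift-on-ecs --count 1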

Your first Swift web application is now running.

At this point, you can use ECS with Auto Scaling to scale your services and also monitor them using CloudWatch metrics and events.

Conclusion

If you want to leverage the benefits of Swift, you can use Vapor as the web framework, store your images in Amazon ECR, and delegate cluster management to Amazon ECS to deploy Swift web applications at scale.

There are many interesting things you could do with Swift beyond this post. To learn more about Swift, see the additional Swift libraries and read the Swift documentation.

If you have questions or suggestions, please comment below.

Fleet Management Made Easy with Auto Scaling

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/fleet-management-made-easy-with-auto-scaling/

If your application runs on Amazon EC2 instances, then you have what’s referred to as a ‘fleet’. This is true even if your fleet is just a single instance. Automating how your fleet is managed can have big pay-offs, both for operational efficiency and for maintaining the availability of the application that it serves. You can automate the management of your fleet with Auto Scaling, and the best part is how easy it is to set up!

There are three main functions that Auto Scaling performs to automate fleet management for EC2 instances:

  • Monitoring the health of running instances
  • Automatically replacing impaired instances
  • Balancing capacity across Availability Zones

In this post, we describe how Auto Scaling performs each of these functions, provide an example of how easy it is to get started, and outline how to learn more about Auto Scaling.

Monitoring the health of running instances

Auto Scaling monitors the health of all instances that are placed within an Auto Scaling group. Auto Scaling performs EC2 health checks at regular intervals, and if the instance is connected to an Elastic Load Balancing load balancer, it can also perform ELB health checks. Auto Scaling ensures that your application is able to receive traffic and that the instances themselves are working properly. When Auto Scaling detects a failed health check, it can replace the instance automatically.
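
If you manage the group from the CLI, enabling ELB health checks is a one-line change; a minimal sketch, with the group name and grace period as placeholder values:

aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --health-check-type ELB \
  --health-check-grace-period 300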

Automatically replacing impaired instances

When an impaired instance fails a health check, Auto Scaling automatically terminates it and replaces it with a new one. If you’re using an Elastic Load Balancing load balancer, Auto Scaling gracefully detaches the impaired instance from the load balancer before provisioning a new one and attaches it back to the load balancer. This is all done automatically, so you don’t need to respond manually when an instance needs replacing.

Balancing capacity across Availability Zones

Balancing resources across Availability Zones is a best practice for well-architected applications, as this greatly increases aggregate system availability. Auto Scaling automatically balances EC2 instances across zones when you configure multiple zones in your Auto Scaling group settings. Auto Scaling always launches new instances such that they are balanced between zones as evenly as possible across the entire fleet. What’s more, Auto Scaling only launches into Availability Zones in which there is available capacity for the requested instance type.

Getting started is easy!

The easiest way to get started with Auto Scaling is to build a fleet from existing instances. The AWS Management Console provides a simple workflow to do this: right-click on a running instance and choose Instance Settings, Attach to Auto Scaling Group.

You can then opt to attach the instance to a new Auto Scaling group. Your instance is now being automatically monitored for health and will be replaced if it becomes impaired. If you configure additional zones and add more instances, they will be spread evenly across Availability Zones to make your fleet more resilient to unexpected failures.
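
The same attachment can be scripted. A rough CLI equivalent, assuming the Auto Scaling group already exists and using placeholder identifiers:

aws autoscaling attach-instances \
  --instance-ids i-0123456789abcdef0 \
  --auto-scaling-group-name my-asg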

Diving deeper

While this example is a good starting point, you may want to dive deeper into how Auto Scaling can automate the management of your EC2 instances.

The first thing to explore is how to automate software deployments. AWS Elastic Beanstalk is a popular and easy-to-use solution that works well for web applications. AWS CodeDeploy is a good solution for fine-grained control over the deployment process. If your application is based on containers, then Amazon EC2 Container Service (Amazon ECS) is something to consider. You may also want to look into AWS Partner solutions such as Ansible and Puppet. One common strategy for deploying software across a production fleet without incurring downtime is blue/green deployments, to which Auto Scaling is particularly well-suited.

These solutions are all enhanced by the core fleet management capabilities in Auto Scaling. You can also use the API or CLI to roll your own automation solution based on Auto Scaling. The following learning path will help you to explore the service in more detail.

  • Launch configurations
  • Lifecycle hooks
  • Fleet size
  • Automatic process control
  • Scheduled scaling

Launch configurations

Launch configurations are the key to how Auto Scaling launches instances. Whenever an Auto Scaling group launches a new instance, it uses the currently associated launch configuration as a template for the launch. In the example above, Auto Scaling automatically created a launch configuration by deriving it from the attached instance. In many cases, however, you create your own launch configuration. For example, if your software environment is baked into an Amazon Machine Image (AMI), then your launch configuration points to the version that you want Auto Scaling to deploy onto new instances.
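
As a sketch of what this looks like from the CLI (the AMI ID, instance type, key pair, and group name are placeholders): create a launch configuration that points at your new AMI, then associate it with the group so that future launches use it.

aws autoscaling create-launch-configuration \
  --launch-configuration-name my-app-launch-config-v2 \
  --image-id ami-xxxxxxxx \
  --instance-type m4.large \
  --key-name my-key-pair

aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-configuration-name my-app-launch-config-v2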

Lifecycle hooks

Lifecycle hooks let you take action before an instance goes into service or before it gets terminated. This can be especially useful if you are not baking your software environment into an AMI. For example, launch hooks can perform software configuration on an instance to ensure that it’s fully prepared to handle traffic before Auto Scaling proceeds to connect it to your load balancer. One way to do this is by connecting the launch hook to an AWS Lambda function that invokes RunCommand on the instance.

Terminate hooks can be useful for collecting important data from an instance before it goes away. For example, you could use a terminate hook to preserve your fleet’s log files by copying them to an Amazon S3 bucket when instances go out of service.
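
A minimal CLI sketch of both hook types follows; the hook names, group name, and timeout are placeholders, and your automation (for example, the Lambda function mentioned above) is responsible for calling complete-lifecycle-action when its work is done.

# Hold new instances in Pending:Wait until configuration completes
aws autoscaling put-lifecycle-hook \
  --lifecycle-hook-name configure-on-launch \
  --auto-scaling-group-name my-asg \
  --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING \
  --heartbeat-timeout 600

# Hold terminating instances in Terminating:Wait so logs can be copied off first
aws autoscaling put-lifecycle-hook \
  --lifecycle-hook-name archive-logs-on-terminate \
  --auto-scaling-group-name my-asg \
  --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING \
  --heartbeat-timeout 600

# Signal that the launch-time work finished and the instance can go into service
aws autoscaling complete-lifecycle-action \
  --lifecycle-hook-name configure-on-launch \
  --auto-scaling-group-name my-asg \
  --lifecycle-action-result CONTINUE \
  --instance-id i-0123456789abcdef0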

Fleet size

You control the size of your fleet using the minimum, desired, and maximum capacity attributes of an Auto Scaling group. Auto Scaling automatically launches or terminates instances to keep the group at the desired capacity. As mentioned before, Auto Scaling uses the launch configuration as a template for launching new instances in order to meet the desired capacity, doing so such that they are balanced across configured Availability Zones.
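
For example, a sketch of adjusting those attributes from the CLI (the group name and sizes are placeholders):

aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --min-size 2 \
  --desired-capacity 4 \
  --max-size 10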

Automatic process control

You can control the behavior of Auto Scaling’s automatic processes such as health checks, launches, and terminations. You may find the AZRebalance process of particular interest. By default, Auto Scaling automatically terminates instances from one zone and re-launches them into another if the instances in the fleet are not spread out in a balanced manner.
You may want to disable this behavior under certain conditions. For example, if you’re attaching existing instances to an Auto Scaling group, you may not want them terminated and re-launched right away if that is required to re-balance your zones. Note that Auto Scaling always replaces impaired instances with launches that are balanced across zones, regardless of this setting. You can also control how Auto Scaling performs health checks, launches, terminations, and more.
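
A sketch of suspending and later resuming that process from the CLI, with a placeholder group name:

# Suspend automatic rebalancing, for example while attaching existing instances
aws autoscaling suspend-processes \
  --auto-scaling-group-name my-asg \
  --scaling-processes AZRebalance

# Resume it once you are ready
aws autoscaling resume-processes \
  --auto-scaling-group-name my-asg \
  --scaling-processes AZRebalance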

Scheduled scaling

Scheduled scaling is a simple tool for adjusting the size of your fleet on a schedule. For example, you can add more or fewer instances to your fleet at different times of the day to handle changing customer traffic patterns. A more advanced tool is dynamic scaling, which adjusts the size of your fleet based on Amazon CloudWatch metrics.
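
As an illustration, a scheduled scaling sketch with placeholder names, capacities, and cron expressions (times are UTC):

aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-asg \
  --scheduled-action-name scale-up-mornings \
  --recurrence "0 8 * * *" \
  --desired-capacity 10

aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-asg \
  --scheduled-action-name scale-down-evenings \
  --recurrence "0 20 * * *" \
  --desired-capacity 4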

Summary

Auto Scaling can bestow important benefits to cloud applications by automating the management of fleets of EC2 instances. Auto Scaling makes it easy to monitor instance health, automatically replace impaired instances, and spread capacity across multiple Availability Zones.

If you already have a fleet of EC2 instances, then it’s easy to get started with Auto Scaling in just a few clicks. After your first Auto Scaling group is working to safeguard your existing fleet, you can follow the suggested learning path in this post. Over time, you can explore more features of Auto Scaling and further automate your software deployments and application scaling.

If you have questions or suggestions, please comment below.

Amazon ECS Service Auto Scaling Enables Rent-A-Center SAP Hybris Solution

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/amazon-ecs-service-auto-scaling-enables-rent-a-center-sap-hybris-solution/

This is a guest post from Troy Washburn, Sr. DevOps Manager @ Rent-A-Center, Inc., and Ashay Chitnis, Flux7 architect.

—–

Rent-A-Center in their own words: Rent-A-Center owns and operates more than 3,000 rent-to-own retail stores for name-brand furniture, electronics, appliances and computers across the US, Canada, and Puerto Rico.

Rent-A-Center (RAC) wanted to roll out an ecommerce platform that would support the entire online shopping workflow using SAP’s Hybris platform. The goal was to implement a cloud-based solution with a cluster of Hybris servers which would cater to online web-based demand.

The challenge: to run the Hybris clusters in a microservices architecture. A microservices approach has several advantages including the ability for each service to scale up and down to meet fluctuating changes in demand independently. RAC also wanted to use Docker containers to package the application in a format that is easily portable and immutable. There were four types of containers necessary for the architecture. Each corresponded to a particular service:

1. Apache: Received requests from the external Elastic Load Balancing load balancer. Apache was used to set certain rewrite and proxy http rules.
2. Hybris: An external Tomcat was the frontend for the Hybris platform.
3. Solr Master: A product indexing service for quick lookup.
4. Solr Slave: Replication of master cache to directly serve product searches from Hybris.

To deploy the containers in a microservices architecture, RAC and AWS consultants at Flux7 started by launching Amazon ECS resources with AWS CloudFormation templates. Running containers on ECS requires the use of three primary resources: clusters, services, and task definitions. Each container refers to its task definition for the container properties, such as CPU and memory. And, each of the above services stored its container images in Amazon ECR repositories.

This post describes the architecture that we created and implemented.

Auto Scaling

At first glance, scaling on ECS can seem confusing. But the Flux7 philosophy is that complex systems only work when they are a combination of well-designed simple systems that break the problem down into smaller pieces. The key insight that helped us design our solution was understanding that there are two very different scaling operations happening. The first is the scaling up of individual tasks in each service and the second is the scaling up of the cluster of Amazon EC2 instances.

During implementation, Service Auto Scaling was released by the AWS team and so we researched how to implement task scaling into the existing solution. As we were implementing the solution through AWS CloudFormation, task scaling needed to be done the same way. However, the new scaling feature was not available for implementation through CloudFormation and so the natural course was to implement it using AWS Lambda–backed custom resources.

The corresponding Lambda function is implemented in Node.js 4.3, and automatic scaling is driven by monitoring the CPUUtilization Amazon CloudWatch metric. The ECS scaling policies are registered with CloudWatch alarms that trigger when specific thresholds are crossed. Scaling in and out can likewise be driven by the MemoryUtilization CloudWatch metric.

The Lambda function and CloudFormation custom resource JSON are available in the Flux7 GitHub repository: https://github.com/Flux7Labs/blog-code-samples/tree/master/2016-10-ecs-enables-rac-sap-hybris

Scaling ECS services and EC2 instances automatically

The key to understanding cluster scaling is to start by understanding the problem. We are no longer running a homogeneous workload in a simple environment. We have a cluster hosting a heterogeneous workload with different requirements and different demands on the system.

This clicked for us after we phrased the problem as, “Make sure the cluster has enough capacity to launch ‘x’ more instances of a task.” This led us to realize that we were no longer looking at an overall average resource utilization problem, but rather a discrete bin packing problem.

The problem is inherently more complex. (Anyone remember from algorithms class how the discrete Knapsack problem is NP-hard, but the continuous knapsack problem can easily be solved in polynomial time? Same thing.) So we have to check on each individual instance if a particular task can be scheduled on it, and if for any task we don’t cross the required capacity threshold, then we need to allocate more instance capacity.

To ensure that ECS scaling always has enough resources to scale out and has just enough resources after scaling in, it was necessary that the Auto Scaling group scales according to three criteria:

1. ECS task count in relation to the host EC2 instance count in a cluster
2. Memory reservation
3. CPU reservation

We implemented the first criterion for the Auto Scaling group. Instead of using the default scaling abilities, we set group scaling in and out using Lambda functions that were triggered periodically by a combination of AWS::Lambda::Permission and AWS::Events::Rule resources, as we wanted specific criteria for scaling.
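
Outside of CloudFormation, the periodic trigger amounts to an Events rule, a Lambda permission, and a target. The following CLI sketch uses placeholder names and ARNs; the actual resources are defined in the repository linked below.

# Invoke the cluster-scaling Lambda function every 5 minutes
aws events put-rule \
  --name ecs-cluster-scaling-check \
  --schedule-expression "rate(5 minutes)"

aws lambda add-permission \
  --function-name ecs-cluster-scaler \
  --statement-id ecs-cluster-scaling-check \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:<account-id>:rule/ecs-cluster-scaling-check

aws events put-targets \
  --rule ecs-cluster-scaling-check \
  --targets '[{"Id": "1", "Arn": "arn:aws:lambda:us-east-1:<account-id>:function:ecs-cluster-scaler"}]'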

The Lambda function is available in the Flux7 GitHub repository: https://github.com/Flux7Labs/blog-code-samples/tree/master/2016-10-ecs-enables-rac-sap-hybris

Future versions of this piece of code will incorporate the other two criteria along with the ability to use CloudWatch alarms to trigger scaling.

Conclusion

Using advanced ECS features like Service Auto Scaling in conjunction with Lambda to meet RAC’s business requirements, RAC and Flux7 were able to Dockerize SAP Hybris in production for the first time ever.

Further, ECS and CloudFormation give users the ability to implement robust solutions while still providing the ability to roll back in case of failures. With ECS as a backbone technology, RAC has been able to deploy a Hybris setup with automatic scaling, self-healing, one-click deployment, CI/CD, and PCI compliance consistent with the company’s latest technology guidelines and meeting the requirements of their newly-formed culture of DevOps and extreme agility.

If you have any questions or suggestions, please comment below.

Orchestrating GPU-Accelerated Workloads on Amazon ECS

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/orchestrating-gpu-accelerated-workloads-on-amazon-ecs/

My colleagues Brandon Chavis, Chad Schmutzer, and Pierre Steckmeyer sent a nice guest post that describes how to run GPU workloads on Amazon ECS.

It’s interesting to note that many workloads on Amazon ECS fit into three primary categories that have obvious synergy with containers:

  • PaaS
  • Batch workloads
  • Long-running services

While these are the most common workloads, we also see ECS used for a wide variety of applications. One new and interesting class of workload is GPU-accelerated workloads or, more specifically, workloads that need to leverage large numbers of GPUs across many nodes.

In this post, we take a look at how ECS enables GPU workloads. For example, at Amazon.com, the Amazon Personalization team runs significant machine learning workloads that leverage many GPUs on ECS.

Amazon ECS overview

ECS is a highly scalable, high performance service for running containers on AWS. ECS provides customers with a state management engine that offloads the heavy lifting of running your own orchestration software, while providing a number of integrations with other services across AWS. For example, you can assign IAM roles to individual tasks, use Auto Scaling to scale your containers in response to load across your services, and use the new Application Load Balancer to distribute load across your application, while automatically registering new tasks with the load balancer when Auto Scaling actions occur.

Today, customers run a wide variety of applications on Amazon ECS, and you can read about some of these use cases here:

  • Edmunds
  • Remind
  • Coursera

ECS and GPU workloads

When you log into Amazon.com, it’s important that you see recommendations for a product you might actually be interested in. The Amazon Personalization team generates personalized product recommendations for Amazon customers. In order to do this, they need to digest very large data sets containing information about the hundreds of millions of products (and just as many customers) that Amazon.com has.

The only way to handle this work in a reasonable amount of time is to ensure it is distributed across a very large number of machines. Amazon Personalization uses machine-learning software that leverages GPUs to train neural networks, but it is challenging to orchestrate this work across a very large number of GPU cores.

To overcome this challenge, Amazon Personalization uses ECS to manage a cluster of Amazon EC2 GPU instances. The team uses P2 instances, which include NVIDIA Tesla K80 GPUs. The cluster of P2 instances functions as a single pool of resources—aggregating CPU, memory, and GPUs—onto which machine learning work can be scheduled.

In order to run this work on an ECS cluster, a Docker image configured with NVIDIA CUDA drivers, which allow the container to communicate with the GPU hardware, is built and stored in Amazon EC2 Container Registry (Amazon ECR).

An ECS task definition is used to point to the container image in ECR and specify configuration for the container at runtime, such as how much CPU and memory each container should use, the command to run inside the container, if a data volume should be mounted in the container, where the source data set lives in Amazon S3, and so on.
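
A much-simplified sketch of registering such a task definition from the CLI is shown below. The family, image URI, resource sizes, command, and the use of privileged mode to reach the GPU devices are illustrative assumptions, not the Amazon Personalization team's actual configuration.

aws ecs register-task-definition --family gpu-training \
  --container-definitions '[{
    "name": "gpu-training",
    "image": "<account-id>.dkr.ecr.us-east-1.amazonaws.com/dsstne:latest",
    "cpu": 4096,
    "memory": 30000,
    "privileged": true,
    "command": ["sh", "-c", "<your training command, reading input from S3>"],
    "essential": true}]'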

After ECS is asked to run a Task, the ECS scheduler finds a suitable place to run the containers by identifying an instance in the cluster with available resources. As shown in the following architecture diagram, ECS can place containers into the cluster of EC2 GPU instances (“GPU slaves” in the diagram):

Give GPUs on ECS a try

To make it easier to try using GPUs on ECS, we’ve built an AWS CloudFormation template to alleviate much of the heavy lifting. This demo architecture is built around DSSTNE, the open source, machine learning library that the Amazon Personalization team uses to actually generate recommendations. Go to the GitHub repository to see the CloudFormation template.

The template spins up an ECS cluster with a single EC2 GPU instance in an Auto Scaling group. You can adjust the desired group capacity to run a larger cluster, if you’d like.
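
For example, a sketch of growing the cluster after the stack has launched (the group name is whatever the template created in your account):

aws autoscaling set-desired-capacity \
  --auto-scaling-group-name <asg-created-by-the-template> \
  --desired-capacity 2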

The instance is configured with all of the necessary software that DSSTNE requires for interaction with the underlying GPU hardware, such as NVIDIA drivers. The template also installs some development tools and libraries, like GCC, HDF5, and Open MPI so that you can compile the DSSTNE library at boot time. It then builds a Docker container with the DSSTNE library packaged up and uploads it to ECR. It copies the URL of the resulting container image in ECR and builds an ECS task definition that points to the container.

After the CloudFormation template completes, view the Outputs tab to get an idea of where to look for your new resources.

Conclusion

In this post, we explained how you can use ECS for GPU-accelerated workloads, and shared the CloudFormation template that makes it easy to get started with ECS and DSSTNE.

Unfortunately, it would take far too much page space to explain the details of the machine learning specifics in this post, but you can read the Generating Recommendations at Amazon Scale with Apache Spark and Amazon DSSTNE post on the AWS Big Data Blog to learn more about how DSSTNE interacts with Apache Spark, trains models, generates predictions, and other fun machine learning concepts.

If you have questions or suggestions, please comment below.

Centralized Container Logs with Amazon ECS and Amazon CloudWatch Logs

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/centralized-container-logs-with-amazon-ecs-and-amazon-cloudwatch-logs/

Containers make it easy to package and share applications but they often run on a shared cluster. So how do you access your application logs for debugging? Fortunately, Docker provides a log driver that lets you send container logs to a central log service, such as Splunk or Amazon CloudWatch Logs.

Centralized logging has multiple benefits: your Amazon EC2 instance’s disk space isn’t being consumed by logs and log services often include additional capabilities that are useful for operations. For example, CloudWatch Logs includes the ability to create metrics filters that can alarm when there are too many errors and integrates with Amazon Elasticsearch Service and Kibana to enable you to perform powerful queries and analysis. This post shows how to configure Amazon ECS and CloudWatch Logs.

Step 1: Create a CloudWatch Log group

Navigate to the CloudWatch console and choose Logs. On the Actions menu, choose Create log group.
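
The same step from the CLI is a single command; this sketch uses the log group name and region that appear in the sample task definition later in this post:

aws logs create-log-group --log-group-name awslogs-test --region us-west-2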

Step 2: Create an ECS task definition

The following steps assume you already have an ECS cluster created. If you do not, go through the ECS first run wizard.

A task definition defines the containers you are running and the log driver options. Navigate to the ECS console, choose Task Definitions and Create new Task Definition. Set the task definition Name and choose Add container. Set the container name, image, memory, and cpu values. In the Storage and Logging section, choose the awslogs log driver. Set the awslogs-group with the name you set in step 1. Set the awslogs-region to the region in which your task will run. Set the awslogs-stream-prefix to a custom prefix that will identify the set of logs you are streaming, such as your application’s name.

The awslogs-stream-prefix was recently added to give you the ability to associate a log stream with the ECS task ID and container name. Previously, the log stream was named with the Docker container ID, which made it hard to associate with the task. If there was an error in a log, there was no direct way to find what container was having the problem. Now, the CloudWatch log stream name includes your custom prefix, the container name, and the task ID to make it simple to associate logs with a task’s containers.

Here is a sample task definition JSON for an Apache HTTP Server (httpd) container that displays a welcome message:

{
    "networkMode": "bridge",
    "taskRoleArn": null,
    "containerDefinitions": [
        {
            "memory": 300,
            "portMappings": [
                {
                    "hostPort": 80,
                    "containerPort": 80,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "entryPoint": [
                "sh",
                "-c"
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "awslogs-test",
                    "awslogs-region": "us-west-2",
                    "awslogs-stream-prefix": "nginx"
                }
            },
            "name": "simple-app",
            "image": "httpd:2.4",
            "command": [
                "/bin/sh -c \"echo 'Congratulations! Your application is now running on a container in Amazon ECS.'  > /usr/local/apache2/htdocs/index.html && httpd-foreground\""
            ],
            "cpu": 10
        }
    ],
    "family": "cw-logs-example"
}
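
If you prefer the CLI to the console, a rough equivalent of steps 2 and 3 is to save the JSON above to a file (for example, cw-logs-example.json) and register and run it; the cluster name here is a placeholder:

# If your CLI version rejects the null taskRoleArn, remove that line from the file first
aws ecs register-task-definition --cli-input-json file://cw-logs-example.json
aws ecs run-task --cluster <your-cluster> --task-definition cw-logs-example --count 1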

Step 3: Run the task

In the ECS console, choose Clusters. Select your cluster, then choose the Tasks tab. Choose Run new task and in the Task definition list, select the task definition that you created in step 2. Choose Run Task.

You will see your task in the PENDING state. Select the task to open the detail view. Refresh your task’s detail view until the task gets to the RUNNING state.

Step 4: Generate logs

If you’re using the sample task definition, the web server will have already sent an initialization message to the log stream. You can also connect to the web server to generate additional log messages.

Step 5: View the log

The task view now includes a link to the log stream. Select the link and navigate to the CloudWatch console. The log stream name includes the prefix that you specified in the task definition, the container name, and the ECS task ID (nginx/simple-app/600e016a-9301-4f81-90b2-6bfd0ad2d975). This makes it easy to find the log stream from the ECS task and find the task from the log stream.

Cleanup

When you are done, you can stop the task in the ECS console and remove the log stream in the CloudWatch console.

Conclusion

We hope you find these improvements useful. For more information, see the ECS documentation. If you have suggestions or questions, please comment below.

Automatic Scaling with Amazon ECS

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/automatic-scaling-with-amazon-ecs/

My colleague Mayank Thakkar sent a nice guest post that describes how to scale Amazon ECS clusters and services.

You’ve always had the option to scale clusters automatically with Amazon EC2 Container Service (Amazon ECS). Now, with the new Service Auto Scaling feature and Amazon CloudWatch alarms, you can use scaling policies to scale ECS services as well. With Service Auto Scaling, you can achieve high availability by scaling up when demand is high, and optimize costs by scaling down your service and the cluster when demand is lower, all automatically and in real time.

This post shows how you can use this new feature, along with automatic cluster resizing to match demand.

Service Auto Scaling overview

Out-of-the-box scaling for ECS services has been a top request and today we are pleased to announce this feature. The process to create services that scale automatically has been made very easy, and is supported by the ECS console, CLI, and SDK. You choose the desired, minimum and maximum number of tasks, create one or more scaling policies, and Service Auto Scaling handles the rest. The service scheduler is also Availability Zone–aware, so you don’t have to worry about distributing your ECS tasks across multiple zones.

In addition to the above, ECS also makes it very easy to run your ECS tasks on a multi-AZ cluster. The Auto Scaling group for the ECS cluster manages the availability of the cluster across multiple zones to give you the resiliency and dependability that you are looking for, and ECS manages the task distribution across these zones, allowing you to focus on your business logic.

The benefits include:

  1. Match deployed capacity to the incoming application load, using scaling policies for both the ECS service and the Auto Scaling group in which the ECS cluster runs. Scaling up cluster instances and service tasks when needed and safely scaling them down when demand subsides, keeps you out of the capacity guessing game. This provides you high availability with lowered costs in the long run.
  2. Multi-AZ clusters make your ECS infrastructure highly available, keeping it safeguarded from potential zone failure. The Availability Zone–aware ECS scheduler manages, scales, and distributes the tasks across the cluster, thus making your architecture highly available.

Service Auto Scaling Walkthrough

This post walks you through the process of using these features and creating a truly scalable, highly available, microservices architecture. To achieve these goals, we show how to:

  1. Spin up an ECS cluster, within an Auto Scaling group, spanning 2 (or more) zones.
  2. Set up an ECS service over the cluster and define the desired number of tasks.
  3. Configure an Elastic Load Balancing load balancer in front of the ECS service. This serves as an entry point for the workload.
  4. Set up CloudWatch alarms to scale in and scale out the ECS service.
  5. Set up CloudWatch alarms to scale in and scale out the ECS cluster. (Note that these alarms are separate from the ones created in the previous step.)
  6. Create scaling policies for the ECS service, defining scaling actions while scaling out and scaling in.
  7. Create scaling policies for the Auto Scaling group in which the ECS cluster is running. These policies are used to scale in and scale out the ECS cluster.
  8. Test the highly available, scalable ECS service, along with the scalable cluster by gradually increasing the load and followed by decreasing the load.

In this post, we walk you through setting up one ECS service on the cluster. However, this pattern can also be applied to multiple ECS services running on the same cluster.

Please note: You are responsible for any AWS costs incurred as a result of running this example.

Conceptual diagram

Set up Service Auto Scaling with ECS

Before you set up the scaling, you should have an ECS service running on a multi-AZ (2 zone) cluster, fronted by a load balancer.

Set up CloudWatch alarms

  1. In the Amazon CloudWatch console, set up a CloudWatch alarm, to be used during scale in and scale out of the ECS service. This walkthrough uses CPUUtilization (from the ECS, ClusterName, ServiceName category), but you can use other metrics if you wish. (Note: Alternatively, you can set up these alarms in the ECS Console when configuring scaling policies for your service.)
  2. Name the alarm ECSServiceScaleOutAlarm and set the threshold for CPUUtilization to 75.
  3. Under the Actions section, delete the notifications. For this walkthrough, you’ll configure an action through the ECS and Auto Scaling consoles.
  4. Repeat the two steps above to create the scale in alarm, setting the CPUUtilization threshold to 25 and the operator to ‘<=’.
  5. In the Alarms section, you should see your scale in alarm in the ALARM state. This is expected, as there is currently no load on the ECS service.
  6. Follow the same actions as in the previous step to set up CloudWatch alarms on the ECS cluster. This time, use CPUReservation as a metric (from ECS, ClusterName). Create 2 alarms, as in the previous step, one to scale out the ECS cluster and the other to scale in. Name them ECSClusterScaleOutAlarm and ECSClusterScaleInAlarm (or whatever name you like). A CLI sketch of these alarms appears after the notes below.

Note: This is a cluster-specific metric (as opposed to a cluster/service-specific metric), which makes the pattern useful even in multiple ECS service scenarios. The ECS cluster is always scaled according to the load on the cluster, irrespective of where it originates.

Because scaling ECS services is much faster than scaling an ECS cluster, we recommend keeping the ECS cluster scaling alarm more responsive than the ECS service alarm. This ensures that you always have extra cluster capacity available during scaling events, to accommodate instantaneous peak loads. Keep in mind that running this extra EC2 capacity increases your cost, so find the balance between reserve cluster capacity and cost, which will vary from application to application.
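
For reference, the scale-out alarm from step 2 can also be created from the CLI; a sketch with placeholder cluster and service names (the scale-in alarm mirrors it with a threshold of 25 and LessThanOrEqualToThreshold, and the cluster alarms use CPUReservation with only the ClusterName dimension):

aws cloudwatch put-metric-alarm \
  --alarm-name ECSServiceScaleOutAlarm \
  --namespace AWS/ECS \
  --metric-name CPUUtilization \
  --dimensions Name=ClusterName,Value=<cluster-name> Name=ServiceName,Value=<service-name> \
  --statistic Average \
  --period 60 \
  --evaluation-periods 1 \
  --threshold 75 \
  --comparison-operator GreaterThanOrEqualToThreshold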

Add scaling policies on the ECS service

Add a scale out and a scale in policy on the ECS service created earlier (a CLI equivalent is sketched after these steps).

  1. Sign in to the ECS console, choose the cluster that your service is running on, choose Services, and select the service.
  2. On the service page, choose Auto Scaling, Update.
  3. Make sure the Number of Tasks is set to 2. This is the default number of tasks that your service will be running.
  4. On the Update Service page, under Optional configurations, choose Configure Service Auto Scaling.
  5. On the Service Auto Scaling (optional) page, under Scaling, choose Configure Service Auto Scaling to adjust your service’s desired count. For both Minimum number of tasks and Desired number of tasks, enter 2. For Maximum number of tasks, enter 10. Because you mapped port 80 of the host (EC2 instance) to port 80 of the ECS container when you created the ECS service, only one task can run per instance, so make sure that you set the same numbers for both the Auto Scaling group and the ECS tasks.
  6. Under the Automatic task scaling policies section, choose Add Scaling Policy.
  7. On the Add Policy page, enter a value for Policy Name. For Execute policy when, enter the scale out CloudWatch alarm created earlier (ECSServiceScaleOutAlarm). For Take the action, choose Add 100 percent. Choose Save.
  8. Repeat the two steps above to create the scale in policy, using the scale in CloudWatch alarm created earlier (ECSServiceScaleInAlarm). For Take the action, choose Remove 50 percent. Choose Save.
  9. On the Service Auto Scaling (optional) page, choose Save.
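
The console steps above are the walkthrough's path; for completeness, a CLI sketch of the equivalent Service Auto Scaling configuration follows. The cluster name, service name, and IAM role are placeholders, and the returned policy ARN would then be attached to the CloudWatch alarm as its alarm action.

aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --resource-id service/<cluster-name>/<service-name> \
  --scalable-dimension ecs:service:DesiredCount \
  --min-capacity 2 \
  --max-capacity 10 \
  --role-arn arn:aws:iam::<account-id>:role/ecsAutoscaleRole

aws application-autoscaling put-scaling-policy \
  --policy-name service-scale-out \
  --service-namespace ecs \
  --resource-id service/<cluster-name>/<service-name> \
  --scalable-dimension ecs:service:DesiredCount \
  --policy-type StepScaling \
  --step-scaling-policy-configuration '{"AdjustmentType": "PercentChangeInCapacity", "StepAdjustments": [{"MetricIntervalLowerBound": 0, "ScalingAdjustment": 100}], "Cooldown": 60}'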

Add scaling policies on the ECS cluster

Add a scale out and a scale in policy on the ECS cluster (Auto Scaling group).

  1. Sign in to the Auto Scaling console and select the Auto Scaling Group which was created for this walkthrough.
  2. Choose Details, Edit.
  3. Make sure the Desired and Min are set to 2, and Max is set to 10. Choose Save.
  4. Choose Scaling Policies, Add Policy.
  5. First, create the scale out policy. Enter a value for Name. For Execute policy when, choose the scale out alarm (ECSClusterScaleOutAlarm) created earlier. For Take the action, choose Add 100 percent of group and then choose Create.
  6. Repeat the above step to add the scale in policy, using the scale in alarm (ECSClusterScaleInAlarm) and setting Take the action as Remove 50 percent of group.

You should be able to see the scale in and scale out policies for your Auto Scaling group. Using these policies, the Auto Scaling group can increase or decrease the size of the cluster on which the ECS service is running.
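
If you script this instead, the group policies map to two put-scaling-policy calls; a sketch with a placeholder group name, where each returned PolicyARN is then set as the alarm action on the corresponding cluster alarm:

aws autoscaling put-scaling-policy \
  --auto-scaling-group-name <ecs-cluster-asg> \
  --policy-name cluster-scale-out \
  --adjustment-type PercentChangeInCapacity \
  --scaling-adjustment 100

aws autoscaling put-scaling-policy \
  --auto-scaling-group-name <ecs-cluster-asg> \
  --policy-name cluster-scale-in \
  --adjustment-type PercentChangeInCapacity \
  --scaling-adjustment -50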

Note: You may set the cluster scaling policies in such a way that you keep some additional cluster capacity in reserve. This helps your ECS service scale up faster, but at the same time, depending on your demand, keeps some EC2 instances underutilized.

This completes the Auto Scaling configuration of the ECS service and the Auto Scaling group, which in this case, will be triggered from the different CloudWatch alarms. You can always use a different combination of CloudWatch alarms to drive each of these policies for more sophisticated scaling policies.

Now that the service is running on a cluster with capacity to scale out, send traffic to the load balancer to trigger the alarms.

Load test the ECS service scaling

Now, load test the ECS service using the Apache ab utility and make sure that the scaling configuration is working (see the Create a load-testing instance section). On the CloudWatch console, you can see your service scale up and down. Because the Auto Scaling group is set up with two Availability Zones, you should be able to see five EC2 instances in each zone. Also, because the ECS service scheduler is Availability Zone–aware, the tasks would be distributed across those two zones too.
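
A sketch of such a test with ab, with the request count, concurrency, and load balancer DNS name as placeholder values:

ab -n 100000 -c 100 http://<load-balancer-dns-name>/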

You can further test the high availability by terminating your EC2 instances manually from the EC2 console. The Auto Scaling group and ECS service scheduler should bring up additional EC2 instances, followed by tasks.

Additional Considerations

  • Reserve capacity. As discussed before, keeping some additional ECS cluster capacity in reserve helps the ECS service to scale out much faster, without waiting for the cluster’s newly provisioned instances to warm up. This can easily be achieved by either changing the values on which CloudWatch alarms are triggered, or by changing the parameters of the scaling policy itself.
  • Instance termination protection. While scaling in, in some cases, a decrease in available ECS cluster capacity might force some tasks to be terminated or relocated from one host to another. This can be mitigated by either tweaking ECS cluster scale in policies to be less responsive to demand or by gracefully allowing tasks to finish on an EC2 host, before it is terminated. This can easily be achieved by tapping into the Auto Scaling Lifecycle events or instance termination protection, which is a topic for a separate post.

Although we have used the AWS console to create this walkthrough, you can always use the AWS SDK or the CLI to achieve the same result.

Conclusion

When you run a mission-critical microservices architecture, keeping your TCO down is critical, along with having the ability to deploy the workload on multiple zones and to adjust ECS service and cluster capacity to respond to load variations. Using the procedure outlined in this post, which leverages two-dimensional scaling, you can achieve these goals.

Service Discovery: An Amazon ECS Reference Architecture

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/service-discovery-an-amazon-ecs-reference-architecture/

Microservices are capturing a lot of mindshare nowadays, through the promises of agility, scale, resiliency, and more. The design approach is to build a single application as a set of small services. Each service runs in its own process and communicates with other services via a well-defined interface using a lightweight mechanism, typically HTTP-based application programming interface (API).

Microservices are built around business capabilities, and each service performs a single function. Microservices can be written using different frameworks or programming languages, and you can deploy them independently, as a single service or a group of services.

Containers are a natural fit for microservices. They make it simple to model, they allow any application or language to be used, and you can test and deploy the same artifact. Containers bring an elegant solution to the challenge of running distributed applications on an increasingly heterogeneous infrastructure – materializing the idea of immutable servers. You can now run the same multi-tiered application on a developer’s laptop, a QA server, or a production cluster of EC2 instances, and it behaves exactly the same way. Containers can be credited for solidifying the adoption of microservices.

Because containers are so easy to ship from one platform to another and scale from one to hundreds, they have unearthed a new set of challenges. One of these is service discovery. When running containers at scale on an infrastructure made of immutable servers, how does an application identify where to connect to in order to find the service it requires? For example, if your authentication layer is dynamically created, your other services need to be able to find it.

Static configuration works for a while but gets quickly challenged by the proliferation and mobility of containers. For example, services (and containers) scale in or out, and they are associated with different environments like staging or prod. You do not want to keep this in code or have lots of configuration files around.

What is needed is a mechanism for registering services immediately as they are launched and a query protocol that returns the IP address of a service, without having this logic built into each component. Solutions exist with trade-offs in consistency, ability to scale, failure resilience, resource utilization, performance, and management complexity. In the absence of service discovery, a modern distributed architecture is not able to scale and achieve resilience. Hence, it is important to think about this challenge when adopting a microservices architecture style.

Amazon ECS Reference Architecture: Service Discovery

We’ve created a reference architecture to demonstrate a DNS- and load balancer-based solution to service discovery on Amazon EC2 Container Service (Amazon ECS) that relies on some of our higher level services without the need to provision extra resources. There is no need to stand up new instances or add more load to the current working resource pool.

Alternatives to our approach include directly passing Elastic Load Balancing names as environment variables – a more manual configuration – or setting up a vendor solution. In this case, you would have to take on the additional responsibilities to install, configure, and scale the solution as well as keeping it up-to-date and highly available.

The technical details are as follows: we define an Amazon CloudWatch Events filter which listens to all ECS service creation messages from AWS CloudTrail and triggers an AWS Lambda function. This function identifies which Elastic Load Balancing load balancer is used by the new service and inserts a DNS resource record (CNAME) pointing to it, using Amazon Route 53 – a highly available and scalable cloud Domain Name System (DNS) web service. The Lambda function also handles service deletion to make sure that the DNS records reflect the current state of applications running in your cluster.
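
At its core, the record update the Lambda function performs is similar to the following CLI sketch; the hosted zone ID, record name, and load balancer DNS name are placeholders, and the actual implementation lives in the reference architecture repository linked below.

aws route53 change-resource-record-sets \
  --hosted-zone-id <hosted-zone-id> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "myservice.ecs.internal.",
        "Type": "CNAME",
        "TTL": 60,
        "ResourceRecords": [{"Value": "my-elb-1234567890.us-east-1.elb.amazonaws.com"}]}}]}'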

There are many benefits to this approach:

  • Because DNS is such a common system, we guarantee a higher level of backward compatibility without the need for “sidecar” containers or expensive code change.
  • By using event-based, infrastructure-less compute (AWS Lambda), service registration is extremely affordable, instantaneous, reliable, and maintenance-free.
  • Because Route 53 allows hosted zones per VPC and ECS lets you segment clusters per VPC, you can isolate different environments (dev, test, prod) while sharing the same service names.
  • Finally, making use of the service’s load balancer allows for health checks, container mobility, and even a zero-downtime application version update. You end up with a solution which is scalable, reliable, very cost-effective, and easily adoptable.

We are excited to share this solution with our customers. You can find it at the AWS Labs Amazon EC2 Container Service – Reference Architecture: Service Discovery GitHub repository. We look forward to seeing how our customers will use it and help shape the state of service discovery in the coming months.

Optimizing Disk Usage on Amazon ECS

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/optimizing-disk-usage-on-amazon-ecs/

My colleague Jay McConnell sent a nice guest post that describes how to track and optimize the disk spaced used in your Amazon ECS cluster.

Failure to monitor disk space utilization can cause problems that prevent Docker containers from working as expected. Amazon EC2 instance disks are used for multiple purposes, such as Docker daemon logs, containers, and images. This post covers techniques to monitor and reclaim disk space on the cluster of EC2 instances used to run your containers.

Amazon ECS is a highly scalable, high performance container management service that supports Docker containers and allows you to run applications easily on a managed cluster of Amazon EC2 instances. You can use ECS to schedule the placement of containers across a cluster of EC2 instances based on your resource needs, isolation policies, and availability requirements.

The ECS-optimized AMI stores images and containers in an EBS volume that uses the devicemapper storage driver in a direct-lvm configuration. As devicemapper stores every image and container in a thin-provisioned virtual device, free space for container storage is not visible through standard Linux utilities such as df. This poses an administrative challenge when it comes to monitoring free space and can also result in increased time troubleshooting task failures, as the cause may not be immediately obvious.
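
You can inspect the thin pool's free space manually on an instance with docker info; the monitoring script later in this post automates this same check. The values shown here are illustrative.

docker info | grep "Space Available"
#  Data Space Available: 30.5 GB
#  Metadata Space Available: 2.14 GB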

Disk space errors can result in new tasks failing to launch with the following error message:

 Error running deviceCreate (createSnapDevice) dm_task_run failed

NOTE: The scripts and techniques described in this post were tested against the ECS 2016.03.a AMI. You may need to modify these techniques depending on your operating system and environment.

Monitoring

You can use Amazon CloudWatch custom metrics to track EC2 instance disk usage. After a CloudWatch metric is created, you can add a CloudWatch alarm to alert you proactively, before low disk space causes a problem on your cluster.

Step 1: Create an IAM role

The first step is to ensure that the EC2 instance profile for the EC2 instances in the ECS cluster uses the “cloudwatch:PutMetricData” policy, as this is required to publish to CloudWatch.
In the IAM console, choose Policies, Create Policy. Choose Create Your Own Policy, name it “CloudwatchPutMetricData”, and paste in the following policy in JSON:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudwatchPutMetricData",
            "Effect": "Allow",
            "Action": [
                "cloudwatch:PutMetricData"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

After you have saved the policy, navigate to Roles and select the role attached to the EC2 instances in your ECS cluster. Choose Attach Policy, select the “CloudwatchPutMetricData” policy, and choose Attach Policy.

Step 2: Push metrics to CloudWatch

Open a shell to each EC2 instance in the ECS cluster. Open a text editor and create the following bash script:

#!/bin/bash

### Get docker free data and metadata space and push to CloudWatch metrics
### 
### requirements:
###  * must be run from inside an EC2 instance
###  * docker with devicemapper backing storage
###  * aws-cli configured with instance-profile/user with the put-metric-data permissions
###  * local user with rights to run docker cli commands
###
### Created by Jay McConnell

# install aws-cli, bc and jq if required
if [ ! -f /usr/bin/aws ]; then
  yum -qy -d 0 -e 0 install aws-cli
fi
if [ ! -f /usr/bin/bc ]; then
  yum -qy -d 0 -e 0 install bc
fi
if [ ! -f /usr/bin/jq ]; then
  yum -qy -d 0 -e 0 install jq
fi

# Collect region and instanceid from metadata
AWSREGION=`curl -ss http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region`
AWSINSTANCEID=`curl -ss http://169.254.169.254/latest/meta-data/instance-id`

function convertUnits {
  # convert units back to bytes as both the docker api and cli only provide friendly units
  if [ "$1" == "b" ] ; then
    echo $2
  elif [ "$1" == "kb" ] ; then 
    echo "$2*1000" | bc | awk '{print $1}' FS="."
  elif [ "$1" == "mb" ] ; then
    echo "$2*1000*1000" | bc | awk '{print $1}' FS="."
  elif [ "$1" == "gb" ] ; then
    echo "$2*1000*1000*1000" | bc | awk '{print $1}' FS="."
  elif [ "$1" == "tb" ] ; then
    echo "$2*1000*1000*1000*1000" | bc | awk '{print $1}' FS="."
  else
    echo "Unknown unit $1"
    exit 1
  fi
}

function getMetric {
  # Get freespace and split unit
  if [ "$1" == "Data" ] || [ "$1" == "Metadata" ] ; then
    echo $(docker info | grep "$1 Space Available" | awk '{print tolower($5), $4}')
  else
    echo "Metric must be either 'Data' or 'Metadata'"
    exit 1
  fi
}

data=$(convertUnits `getMetric Data`)
aws cloudwatch put-metric-data --value $data --namespace ECS/$AWSINSTANCEID --unit Bytes --metric-name FreeDataStorage --region $AWSREGION
data=$(convertUnits `getMetric Metadata`)
aws cloudwatch put-metric-data --value $data --namespace ECS/$AWSINSTANCEID --unit Bytes --metric-name FreeMetadataStorage --region $AWSREGION

Next, set the script to be executable:

chmod +x /path/to/metricscript.sh

Now, schedule the script to run every 5 minutes via cron. To do this, create the file /etc/cron.d/ecsmetrics with the following contents:

*/5 * * * * root /path/to/metricscript.sh

This pulls both free data and metadata space every 5 minutes and pushes them to CloudWatch under the ECS/<instance ID> namespace.

Disk cleanup

The next step is to clean up the disk, either automatically on a schedule or manually. This post covers cleanup of tasks and images; there is a great blog post, Send ECS Container Logs to CloudWatch Logs for Centralized Monitoring, that covers pushing log files to CloudWatch. Using CloudWatch Logs instead of local log files reduces disk utilization and provides a resilient and centralized place from which to manage logs.

Take a look at what you can do to remove unneeded containers and images from your instances.

Delete containers

Stopped containers should be deleted if they are no longer needed. The ECS agent, by default, deletes all containers that have exited every 3 hours. This behavior can be customized by adding the following to /etc/ecs/ecs.config:

ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION=10m

This sets the time the agent waits after a task stops before removing its containers to 10 minutes.
For this change to take effect, the ECS agent needs to be restarted, which can be done via ssh:

stop ecs; start ecs

To set this up for new instances, attach the following EC2 user data:

cat /etc/ecs/ecs.config | grep -v 'ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION' > /tmp/ecs.config
echo "ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION=5m" >> /tmp/ecs.config
mv -f /tmp/ecs.config /etc/ecs/
stop ecs
start ecs

Delete images

By default, Docker caches images indefinitely. Cached images can be useful to reduce the time needed to launch new tasks: if the image is already cached on the instance, a new container can start without pulling the image from the registry again. If you have a lot of images that are rarely used, as is common in CI or development environments, then cleaning these out is a good idea. Use the following commands to remove unused images:

List images:

docker images

Delete an image:

docker rmi IMAGE

This could be condensed and saved to a bash script:

#!/bin/bash
# remove all cached images; images in use by a running container cannot be deleted and are skipped with an error
docker images -q | xargs --no-run-if-empty docker rmi
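If you would rather keep tagged images and only remove dangling layers (untagged images left behind by newer builds), a more conservative variant of the script looks like this:

#!/bin/bash
# remove only dangling (untagged) images; tagged images and images in use are left alone
docker images -q -f dangling=true | xargs --no-run-if-empty docker rmi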

Set the script to be executable:

chmod +x /path/to/cleanupscript.sh

Execute the script daily via cron by creating a file called /etc/cron.d/dockerImageCleanup with the following contents:

00 00 * * * root /path/to/cleanupscript.sh

Conclusion

The techniques described in this post provide visibility into a critical component of running Docker—the disk space used on the cluster’s EC2 instances—and techniques to clean up unnecessary storage. If you have any questions or suggestions for other best practices, please comment below.

Powering your Amazon ECS Clusters with Spot Fleet

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/powering-your-amazon-ecs-clusters-with-spot-fleet/

My colleague Drew Dennis sent a nice guest post that shows how to use Amazon ECS with Spot fleet.

There are advantages to using on-demand EC2 instances. However, for many workloads, such as stateless or task-based scenarios that simply run as long as they need to run and are easily replaced with subsequent identical processes, Spot fleet can provide additional compute resources that are more economical. Furthermore, Spot fleet attempts to replace any terminated instances to maintain the requested target capacity.
Amazon ECS is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. ECS already handles the placement and scheduling of containers on EC2 instances. When combined with Spot fleet, ECS can deliver significant savings over EC2 on-demand pricing.
Why Spot fleet?
Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Because Spot instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications. Spot fleet enables customers to request a collection of Spot instances across multiple Availability Zones and instance types with a single API call.
The Spot fleet API call can specify a target capacity and an allocation strategy. The two available allocation strategies are lowest price and diversified. Lowest price means the instances are provisioned based solely on the lowest current Spot price available while diversified fulfills the request equally across multiple Spot pools (instances of the same type and OS within an Availability Zone) to help mitigate the risk of a sudden Spot price increase. For more information, see How Spot Fleet Works.
Using Spot fleet
The Spot fleet console is available at https://console.aws.amazon.com/ec2spot/home. It provides a simple approach to creating a Spot fleet request and setting up all necessary attributes of the request, including creating an IAM role and base64-encoding user data. The console also provides the option to download the request JSON, which can be used with the CLI if desired.
If you prefer not to use the Spot fleet console, you need to make sure you have an IAM role created with the necessary privileges for the Spot fleet request to bid on, launch, and terminate instances. Note that the iam:PassRole action is needed in this scenario so that Spot fleet can launch instances with a role to participate in an ECS cluster. You need to make sure that you have an AWS SDK or the AWS CLI installed.
This post assumes you are familiar with the process of creating an ECS cluster, creating an ECS task definition, and launching the task definition as a manual task or service. If not, see the ECS documentation.
Creating a Spot fleet request
Before you make your Spot fleet request, make sure you know the instance types, Availability Zones, and bid prices that you plan to request. Note that individual bid prices for various instance types can be used in a Spot fleet request. When you have decided on these items, you are ready to begin the request. In the screenshot below, a fleet request is being created for four c4.large instances using an Amazon Linux ECS-optimized AMI. You can obtain the most up-to-date list of ECS optimized AMIs by region in the Launching an Amazon ECS Container Instance topic.

Notice the very useful warnings if your bid price is below the minimum price to initially launch the instance. From here, you can also access the Spot pricing history and Spot Bid Advisor to better understand past pricing volatility. After choosing Next, you see options to spread the request across multiple zones, specify values for User data, and define other request attributes as shown below. In this example, the user data sets the ECS cluster to which the ECS container agent connects.

Other examples could create a Spot fleet request that contains multiple instance types with Spot price overrides for each instance type in a single Availability Zone. The allocation strategy could still be diversified, which means it will pull equally from the two instance-type pools. This could easily be combined with the previous example to create a fleet request that spans multiple Availability Zones and instance types, further mitigating the risk of Spot instance termination.
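If you download the request JSON from the console, submitting it with the CLI is a single call. The sketch below shows the general shape of such a request for the diversified strategy; the fleet role ARN, AMI ID, subnet, key pair, instance profile, bid price, and base64-encoded user data are placeholders, and the JSON you download will reflect your own choices:

cat > spot-fleet-request.json <<'EOF'
{
  "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-role",
  "AllocationStrategy": "diversified",
  "TargetCapacity": 4,
  "SpotPrice": "0.10",
  "LaunchSpecifications": [
    {
      "ImageId": "ami-xxxxxxxx",
      "InstanceType": "c4.large",
      "SubnetId": "subnet-xxxxxxxx",
      "KeyName": "my-key-pair",
      "IamInstanceProfile": { "Arn": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole" },
      "UserData": "<base64-encoded script that writes ECS_CLUSTER=your_cluster_name to /etc/ecs/ecs.config>"
    }
  ]
}
EOF
aws ec2 request-spot-fleet --spot-fleet-request-config file://spot-fleet-request.json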
Running ECS tasks and services on your Spot fleet
After your instances have joined your ECS cluster, you are ready to start tasks or services on them. This involves first creating a task definition. For more information, see the Docker basics walkthrough. After the task definition is created, you can run the tasks manually, or schedule them as a long-running process or service.
In the case of an ECS service, if one of the Spot fleet instances is terminated due to a Spot price interruption, ECS re-allocates the running containers on another EC2 instance within the cluster to maintain the desired number of running tasks, assuming that sufficient resources are available.
If not, within a few minutes, the instance is replaced with a new instance by the Spot fleet request. The new instance is launched according to the configuration of the initial Spot fleet request and rejoins the cluster to participate and run any outstanding containers needed to meet the desired quantity.
In summary, Spot fleet provides an effective and economical way to add instances to an ECS cluster. Because a Spot fleet request can span multiple instance types and Availability Zones, and will always try to maintain a target number of instances, it is a great fit for running stateless containers and adding inexpensive capacity to your ECS clusters.
Auto Scaling and Spot fleet requests
Auto Scaling has proven to be a great way to add or remove EC2 capacity to many AWS workloads. ECS supports Auto Scaling on cluster instances and provides CloudWatch metrics to help facilitate this scenario. For more information, see Tutorial: Scaling Container Instances with CloudWatch Alarms. The combination of Auto Scaling and Spot fleet provides a nice way to have a pool of fixed capacity and variable capacity on demand while reducing costs.
Currently, Spot fleet requests cannot be integrated directly with Auto Scaling policies as they can with Spot instance requests. However, the Spot fleet API does include an action called ModifySpotFleetRequest that can change the target capacity of your request. The Dynamic Scaling with EC2 Spot Fleet blog post shows an example of a scenario that leverages CloudWatch metrics to invoke a Lambda function and change the Spot fleet target capacity. Using ModifySpotFleetRequest can be a great way to not only fine-tune your fleet requests, but also minimize over-provisioning and further lower costs.
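As a sketch of what that looks like from the CLI, where the request ID and new capacity are placeholders:

aws ec2 modify-spot-fleet-request \
  --spot-fleet-request-id sfr-12345678-1234-1234-1234-123456789012 \
  --target-capacity 8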
Conclusion
Amazon ECS manages clusters of EC2 instances for reliable state management and flexible container scheduling. Docker containers lend themselves to flexible and portable application deployments, and when used with ECS provide a simple and effective way to manage fleets of instances and containers, both large and small.
Combining Spot fleet with ECS can provide lower-cost options to augment existing clusters and even provision new ones. Certainly, this can be done with traditional Spot instance requests. However, because Spot fleet allows requests to span instance families and Availability Zones (with multiple allocation strategies, prices, etc.), it is a great way to enhance your ECS strategy by increasing availability and lowering the overall cost of your cluster’s compute capacity.

Amazon ECS launches new deployment capabilities; CloudWatch metrics; Singapore and Frankfurt regions

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/amazon-ecs-launches-new-deployment-capabilities-cloudwatch-metrics-singapore-and-frankfurt-regions/

Today, we launched two improvements that make it easier to run Docker-enabled applications on Amazon EC2 Container Service (ECS). Amazon ECS is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances.
The first improvement allows more flexible deployments. The ECS service scheduler is used for long running stateless services and applications. The service scheduler ensures that the specified number of tasks are constantly running and can optionally register tasks with an Elastic Load Balancing load balancer. Previously, during a deployment the service scheduler created a task with the new task definition; after the new task reached the RUNNING state, a task that was using the old task definition was drained and stopped. This process continued until all of the desired tasks in the service were using the new task definition. This process maintains the service’s capacity during the deployment, but requires enough spare capacity in the cluster to start one additional task. Sometimes that’s not desired, because you do not want to use additional capacity in your cluster to perform a deployment.
Now, a service’s minimumHealthyPercent lets you specify a lower limit on the number of running tasks during a deployment. A minimumHealthyPercent of 100% ensures that you always have the desiredCount of tasks running, and values below 100% allow the scheduler to violate desiredCount temporarily during a deployment. For example, if you have 4 Amazon EC2 instances in your cluster and 4 tasks, each running on a separate instance, changing minimumHealthyPercent from 100% to 50% would allow the scheduler to stop 2 tasks before deploying 2 new tasks.
A service’s maximumPercent represents an upper limit on the number of running tasks during a deployment, enabling you to define the deployment batch size. For example, if you have 8 instances in your cluster, and 4 tasks, each running on a separate instance, maximumPercent of 200% starts 4 new tasks before stopping the 4 old tasks. For more information on these new deployment options, see the documentation.
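Both values can be set when you create or update a service. As a rough sketch using the AWS CLI, where the cluster and service names are placeholders:

aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --deployment-configuration minimumHealthyPercent=50,maximumPercent=100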
To illustrate these options visually, consider a scenario where you want to deploy using the least space. You could set minimumHealthyPercent to 50% and maximumPercent to 100%. The deployment would look like this:

Another scenario is to deploy quickly without reducing your service’s capacity. You could set minimumHealthyPercent to 100% and maximumPercent to 200%. The deployment would look like this:

The next improvement involves scaling the EC2 instances in your ECS cluster automatically. When ECS schedules a task, it requires an EC2 instance that meets the constraints in the task definition. For example, if a task definition requires 1 GB RAM, ECS finds an EC2 instance that has at least that much memory so that the container can start. If the scheduler cannot find an EC2 instance that meets the constraints required to place a task, it fails to place the task.
Managing the cluster capacity is thus essential to successful task scheduling. Auto Scaling can enable clusters of EC2 instances to scale dynamically in response to CloudWatch alarms. ECS now publishes CloudWatch metrics for the reserved amount of CPU and memory used by running tasks in the cluster. You can create a CloudWatch alarm using these metrics that adds more EC2 instances to the Auto Scaling group when the cluster’s available capacity drops below a threshold that you define. For more information, see Tutorial: Scaling Container Instances with CloudWatch Alarms.
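For example, assuming you have already attached a scale-out policy to the cluster's Auto Scaling group, an alarm on the cluster's MemoryReservation metric might look like the following sketch; the cluster name, threshold, and policy ARN are placeholders:

aws cloudwatch put-metric-alarm \
  --alarm-name ecs-cluster-memory-reservation-high \
  --namespace AWS/ECS \
  --metric-name MemoryReservation \
  --dimensions Name=ClusterName,Value=my-cluster \
  --statistic Average \
  --period 60 \
  --evaluation-periods 3 \
  --threshold 75 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions <scale-out-policy-arn>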
Last, Amazon ECS is now available in the Asia Pacific (Singapore) region and EU (Frankfurt) regions, bringing ECS to eight regions.

Send ECS Container Logs to CloudWatch Logs for Centralized Monitoring

Post Syndicated from Chris Barclay original http://blogs.aws.amazon.com/application-management/post/TxFRDMTMILAA8X/Send-ECS-Container-Logs-to-CloudWatch-Logs-for-Centralized-Monitoring

My colleagues Brandon Chavis, Pierre Steckmeyer and Chad Schmutzer sent a nice guest post that demonstrates how to send your container logs to a central source for easy troubleshooting and alarming.

 

—–

Amazon EC2 Container Service (Amazon ECS) is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances.

In this multipart blog post, we have chosen to take a universal struggle amongst IT professionals—log collection—and approach it from different angles to highlight possible architectural patterns that facilitate communication and data sharing between containers.

When building applications on ECS, it is a good practice to follow a microservices approach, which encourages the design of a single application component in a single container. This design improves flexibility and elasticity, while leading to a loosely coupled architecture for resilience and ease of maintenance. However, this architectural style makes it important to consider how your containers will communicate and share data with each other.

Why is it useful?

Application logs are useful for many reasons. They are the primary source of troubleshooting information. In the field of security, they are essential to forensics. Web server logs are often leveraged for analysis (at scale) in order to gain insight into usage, audience, and trends.

Centrally collecting container logs is a common problem that can be solved in a number of ways. The Docker community has offered solutions such as having working containers map a shared volume; having a log-collecting container; and getting logs from a container that logs to stdout/stderr and retrieving them with docker logs.

In this post, we present a solution using Amazon CloudWatch Logs. CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. CloudWatch Logs can be used to collect and monitor your logs for specific phrases, values, or patterns. For example, you could set an alarm on the number of errors that occur in your system logs or view graphs of web request latencies from your application logs. The additional advantages here are that you can look at a single pane of glass for all of your monitoring needs because such metrics as CPU, disk I/O, and network for your container instances are already available on CloudWatch.

Here is how we are going to do it

Our approach involves setting up a container whose sole purpose is logging. It runs rsyslog and the CloudWatch Logs agent, and we use Docker links to communicate with other containers. With this strategy, it becomes easy to link existing application containers such as Apache and have discrete logs per task. This logging container is defined in each ECS task definition, which is a collection of containers running together on the same container instance. With our container log collection strategy, you do not have to modify your Docker image. Any log mechanism tweak is specified in the task definition.

 

Note: This blog provisions a new ECS cluster in order to test the following instructions. Also, please note that we are using the US East (N. Virginia) region throughout this exercise. If you would like to use a different AWS region, please make sure to update your configuration accordingly.

Linking to a CloudWatch logging container

We will create a container that can be deployed as a syslog host. It will accept standard syslog connections on 514/TCP to rsyslog through container links, and will also forward those logs to CloudWatch Logs via the CloudWatch Logs agent. The idea is that this container can be deployed as the logging component in your architecture (not limited to ECS; it could be used for any centralized logging).
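Outside of ECS, you can exercise the same linking pattern directly with the Docker CLI. A minimal sketch, assuming the logging image built later in this post:

# start the logging container; it accepts syslog messages on 514/TCP
docker run -d --name cloudwatchlogs my_docker_hub_repo/cloudwatchlogs

# link an application container to it; inside "web", the hostname "cloudwatchlogs" resolves to the logging container
docker run -d --name web --link cloudwatchlogs:cloudwatchlogs -p 80:80 httpd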

As a proof of concept, we show you how to deploy a container running httpd, clone some static web content (for this example, we clone the ECS documentation), and have the httpd access and error logs sent to the rsyslog service running on the syslog container via container linking. We also send the Docker and ecs-agent logs from the EC2 instance the task is running on. The logs in turn are sent to CloudWatch Logs via the CloudWatch Logs agent.

Note: Be sure to replace your information throughout the document as necessary (for example: replace "my_docker_hub_repo" with the name of your own Docker Hub repository).

We also assume that all following requirements are in place in your AWS account:

A VPC exists for the account

There is an IAM user with permissions to launch EC2 instances and create IAM policies/roles

SSH keys have been generated

Git and Docker are installed on the image building host

The user owns a Docker Hub account and a repository ("my_docker_hub_repo" in this document)

Let’s get started.

Create the Docker image

The first step is to create the Docker image to use as a logging container. For this, all you need is a machine that has Git and Docker installed. You could use your own local machine or an EC2 instance.

Install Git and Docker. The following steps pertain to the Amazon Linux AMI but you should follow the Git and Docker installation instructions respective to your machine.

$ sudo yum update -y && sudo yum -y install git docker

Make sure that the Docker service is running:

$ sudo service docker start

Clone the GitHub repository containing the files you need:

$ git clone https://github.com/awslabs/ecs-cloudwatch-logs.git
$ cd ecs-cloudwatch-logs

You should now have a directory containing two .conf files and a Dockerfile. Feel free to read the content of these files and identify the mechanisms used.

Log in to Docker Hub:

$ sudo docker login

Build the container image (replace the my_docker_hub_repo with your repository name):

$ sudo docker build -t my_docker_hub_repo/cloudwatchlogs .

Push the image to your repo:

$ sudo docker push my_docker_hub_repo/cloudwatchlogs

Use the build-and-push time to dive deeper into what will live in this container. You can follow along by reading the Dockerfile. Here are a few things worth noting:

The first RUN updates the distribution and installs rsyslog, pip, and curl.

The second RUN downloads the AWS CloudWatch Logs agent.

The third RUN enables remote connections for rsyslog.

The fourth RUN removes the local6 and local7 facilities to prevent duplicate entries. If you don’t do this, you would see every single apache log entry in /var/log/syslog.

The last RUN specifies which output files will receive the log entries on local6 and local7 (e.g., "if the facility is local6 and it is tagged with httpd, put those into this httpd-access.log file").

We use Supervisor to run more than one process in this container: rsyslog and the CloudWatch Logs agent.

We expose port 514 for rsyslog to collect log entries via the Docker link.

Create an ECS cluster

Now, create an ECS cluster. One way to do so could be to use the Amazon ECS console first run wizard. For now, though, all you need is an ECS cluster.

7. Navigate to the ECS console and choose Create cluster. Give it a unique name that you have not used before (such as "ECSCloudWatchLogs"), and choose Create.

Create an IAM role

The next five steps set up a CloudWatch-enabled IAM role with EC2 permissions and spin up a new container instance with this role. All of this can be done manually via the console, or you can run a CloudFormation template. To use the CloudFormation template, navigate to the CloudFormation console, create a new stack by using this template, and go straight to step 14 (just specify the ECS cluster name used above, choose your preferred instance type, select the appropriate EC2 SSH key, and leave the rest unchanged). Otherwise, continue on to step 8.

8. Create an IAM policy for CloudWatch Logs and ECS: point your browser to the IAM console, choose Policies and then Create Policy. Choose Select next to Create Your Own Policy. Give your policy a name (e.g., ECSCloudWatchLogs) and paste the text below as the Policy Document value.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "logs:Create*",
                "logs:PutLogEvents"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Action": [
                "ecs:CreateCluster",
                "ecs:DeregisterContainerInstance",
                "ecs:DiscoverPollEndpoint",
                "ecs:RegisterContainerInstance",
                "ecs:Submit*",
                "ecs:Poll"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

9. Create a new IAM EC2 service role and attach the above policy to it. In IAM, choose Roles, Create New Role. Pick a name for the role (e.g., ECSCloudWatchLogs). Choose Role Type, Amazon EC2. Find and pick the policy you just created, click Next Step, and then Create Role.

Launch an EC2 instance and ECS cluster

10. Launch an instance with the Amazon ECS AMI and the above role in the US East (N. Virginia) region. On the EC2 console page, choose Launch Instance. Choose Community AMIs. In the search box, type "amazon-ecs-optimized" and choose Select for the latest version (2015.03.b). Select the appropriate instance type and choose Next.

11. Choose the appropriate Network value for your ECS cluster. Make sure that Auto-assign Public IP is enabled. Choose the IAM role that you just created (e.g., ECSCloudWatchLogs). Expand Advanced Details and in the User data field, add the following while substituting your_cluster_name for the appropriate name:

#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config

12. Choose Next: Add Storage, then Next: Tag Instance. You can give your container instance a name on this page. Choose Next: Configure Security Group. On this page, you should make sure that both SSH and HTTP are open to at least your own IP address.

13. Choose Review and Launch, then Launch and Associate with the appropriate SSH key. Note the instance ID.

14. Ensure that your newly spun-up EC2 instance is part of your container instances (note that it may take up to a minute for the container instance to register with ECS). In the ECS console, select the appropriate cluster. Select the ECS Instances tab. You should see a container instance with the instance ID that you just noted after a minute.

15. On the left pane of the ECS console, choose Task Definitions, then Create new Task Definition. On the JSON tab, paste the code below, overwriting the default text. Make sure to replace "my_docker_hub_repo" with your own Docker Hub repo name and choose Create.

{
    "volumes": [
        {
            "name": "ecs_instance_logs",
            "host": {
                "sourcePath": "/var/log"
            }
        }
    ],
    "containerDefinitions": [
        {
            "environment": [],
            "name": "cloudwatchlogs",
            "image": "my_docker_hub_repo/cloudwatchlogs",
            "cpu": 50,
            "portMappings": [],
            "memory": 64,
            "essential": true,
            "mountPoints": [
                {
                    "sourceVolume": "ecs_instance_logs",
                    "containerPath": "/mnt/ecs_instance_logs",
                    "readOnly": true
                }
            ]
        },
        {
            "environment": [],
            "name": "httpd",
            "links": [
                "cloudwatchlogs"
            ],
            "image": "httpd",
            "cpu": 50,
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 80
                }
            ],
            "memory": 128,
            "entryPoint": ["/bin/bash", "-c"],
            "command": [
                "apt-get update && apt-get -y install wget && echo 'CustomLog \"| /usr/bin/logger -t httpd -p local6.info -n cloudwatchlogs -P 514\" \"%v %h %l %u %t %r %>s %b %{Referer}i %{User-agent}i\"' >> /usr/local/apache2/conf/httpd.conf && echo 'ErrorLogFormat \"%v [%t] [%l] [pid %P] %F: %E: [client %a] %M\"' >> /usr/local/apache2/conf/httpd.conf && echo 'ErrorLog \"| /usr/bin/logger -t httpd -p local7.info -n cloudwatchlogs -P 514\"' >> /usr/local/apache2/conf/httpd.conf && echo ServerName `hostname` >> /usr/local/apache2/conf/httpd.conf && rm -rf /usr/local/apache2/htdocs/* && cd /usr/local/apache2/htdocs && wget -mkEpnp -nH --cut-dirs=4 http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html && /usr/local/bin/httpd-foreground"
            ],
            "essential": true
        }
    ],
    "family": "cloudwatchlogs"
}

What are some highlights of this task definition?

The sourcePath value allows the CloudWatch Logs agent running in the log collection container to access the host-based Docker and ECS agent log files. You can change the retention period in CloudWatch Logs.

The cloudwatchlogs container is marked essential, which means that if log collection goes down, so should the application it is collecting from. Similarly, the web server is marked essential as well. You can easily change this behavior.

The command section is a bit lengthy. Let us break it down:

We first install wget so that we can later clone the ECS documentation for display on our web server.

We then write four lines to httpd.conf. These are the echo commands. They describe how httpd will generate log files and their format. Notice how we tag (-t httpd) these files with httpd and assign them a specific facility (-p localX.info). We also specify that logger is to send these entries to host -n cloudwatchlogs on port -p 514. This will be handled by linking. Hence, port 514 is left untouched on the machine and we could have as many of these logging containers running as we want.

%h %l %u %t %r %>s %b %{Referer}i %{User-agent}i should look fairly familiar to anyone who has looked into tweaking Apache logs. The initial %v is the server name and it will be replaced by the container ID. This is how we are able to discern what container the logs come from in CloudWatch Logs.

We remove the default httpd landing page with rm -rf.

We instead use wget to download a clone of the ECS documentation.

And, finally, we start httpd. Note that we redirect httpd log files in our task definition at the command level for the httpd image. Applying the same concept to another image would simply require you to know where your application maintains its log files.

Create a service

16. On the services tab in the ECS console, choose Create. Choose the task definition created in step 15, name the service and set the number of tasks to 1. Select Create service.

17. The task will start running shortly. You can press the refresh icon on your service’s Tasks tab. After the status says "Running", choose the task and expand the httpd container. The container instance IP will be a hyperlink under the Network bindings section’s External link. When you select the link you should see a clone of the Amazon ECS documentation. You are viewing this thanks to the httpd container running on your ECS cluster.

18. Open the CloudWatch Logs console to view new ecs entries.

Conclusion

If you have followed all of these steps, you should now have a two container task running in your ECS cluster. One container serves web pages while the other one collects the log activity from the web container and sends it to CloudWatch Logs. Such a setup can be replicated with any other application. All you need is to specify a different container image and describe the expected log files in the command section.

Set up a build pipeline with Jenkins and Amazon ECS

Post Syndicated from Chris Barclay original http://blogs.aws.amazon.com/application-management/post/Tx32RHFZHXY6ME1/Set-up-a-build-pipeline-with-Jenkins-and-Amazon-ECS

My colleague Daniele Stroppa sent a nice guest post that demonstrates how to use Jenkins to build Docker images for Amazon EC2 Container Service.

 

—–

 

In this walkthrough, we’ll show you how to set up and configure a build pipeline using Jenkins and the Amazon EC2 Container Service (ECS).

 

We’ll be using a sample Python application, available on GitHub. The repository contains a simple Dockerfile that uses a python base image and runs our application:

FROM python:2-onbuild
CMD [ "python", "./application.py" ]

This Dockerfile is used by the build pipeline to create a new Docker image upon pushing code to the repository. The built image will then be used to start a new service on an ECS cluster.

 

For the purpose of this walkthrough, fork the py-flask-signup-docker repository to your account.

 

Setup the build environment

For our build environment we’ll launch an Amazon EC2 instance using the Amazon Linux AMI and install and configure the required packages. Make sure that the security group you select for your instance allows traffic on ports TCP/22 and TCP/80.

 

Install and configure Jenkins, Docker and Nginx

Connect to your instance using your private key and switch to the root user. First, let’s update the repositories and install Docker, Nginx and Git.

# yum update -y
# yum install -y docker nginx git

To install Jenkins on Amazon Linux, we need to add the Jenkins repository and install Jenkins from there.

# wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
# rpm --import http://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key
# yum install jenkins

As Jenkins typically uses port TCP/8080, we’ll configure Nginx as a proxy. Edit the Nginx config file (/etc/nginx/nginx.conf) and change the server configuration to look like this:

server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

We’ll be using Jenkins to build our Docker images, so we need to add the jenkins user to the docker group. A reboot may be required for the changes to take effect.

# usermod -a -G docker jenkins

Start the Docker, Jenkins and Nginx services and make sure they will be running after a reboot:

# service docker start
# service jenkins start
# service nginx start
# chkconfig docker on
# chkconfig jenkins on
# chkconfig nginx on

You can launch the Jenkins instance complete with all the required plugins with this CloudFormation template.

 

Point your browser to the public DNS name of your EC2 instance (e.g. ec2-54-163-4-211.compute-1.amazonaws.com) and you should be able to see the Jenkins home page:

 

 

The Jenkins installation is currently accessible through the Internet without any form of authentication. Before proceeding to the next step, let’s secure Jenkins. Select Manage Jenkins on the Jenkins home page, click Configure Global Security and then enable Jenkins security by selecting the Enable Security checkbox.

 

For the purpose of this walkthrough, select Jenkins’s Own User Database under Security realm and make sure to select the Allow users to sign up checkbox. Under Authorization, select Matrix-based security. Add a user (e.g. admin) and provide necessary privileges to this user.

 

 

After that’s complete, save your changes. Now you will be asked to provide a username and password for the user to login. Click on Create an account, provide your username – i.e. admin – and fill in the user details. Now you will be able to log in securely to Jenkins.

 

Install and configure Jenkins plugins

The last step in setting up our build environment is to install and configure the Jenkins plugins required to build a Docker image and publish it to a Docker registry (DockerHub in our case). We’ll also need a plugin to interact with the code repository of our choice, GitHub in our case.

 

From the Jenkins dashboard select Manage Jenkins and click Manage Plugins. On the Available tab, search for and select the following plugins:

Docker Build and Publish plugin

dockerhub plugin

Github plugin

Then click the Install button. After the plugin installation is completed, select Manage Jenkins from the Jenkins dashboard and click Configure System. Look for the Docker Image Builder section and fill in your Docker registry (DockerHub) credentials:

 

 

Install and configure the Amazon ECS CLI

Now we are ready to set up and configure the ECS Command Line Interface (CLI). The sample application creates and uses an Amazon DynamoDB table to store signup information, so make sure that the IAM role that you create for the EC2 instances allows the dynamodb:* action.
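
If you manage that role from the AWS CLI, an inline policy granting the action can be attached with a command like the following. This is a sketch; the role name and policy name are assumptions, and you may want to scope the Resource more tightly:

aws iam put-role-policy --role-name your-ecs-instance-role --policy-name flask-signup-dynamodb --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"dynamodb:*","Resource":"*"}]}'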

 

Follow the Setting Up with Amazon ECS guide to get ready to use ECS. If you haven’t done so yet, make sure to start at least one container instance in your account and create the Amazon ECS service role in the AWS IAM console.

 

Make sure that Jenkins is able to use the ECS CLI. Switch to the jenkins user and configure the AWS CLI, providing your credentials:

# sudo -su jenkins
> aws configure

Login to Docker Hub
The Jenkins user needs to log in to Docker Hub before doing the first build:

# docker login

Create a task definition template

Create a task definition template for our application (note, you will replace the image name with your own repository):

{
  "family": "flask-signup",
  "containerDefinitions": [
    {
      "image": "your-repository/flask-signup:v_%BUILD_NUMBER%",
      "name": "flask-signup",
      "cpu": 10,
      "memory": 256,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 5000,
          "hostPort": 80
        }
      ]
    }
  ]
}

Save your task definition template as flask-signup.json. Since the image specified in the task definition template will be built in the Jenkins job, at this point we will create a dummy task definition. Substitute the %BUILD_NUMBER% parameter in your task definition template with a non-existent value (0) and register it with ECS:

# sed -e "s;%BUILD_NUMBER%;0;g" flask-signup.json > flask-signup-v_0.json
# aws ecs register-task-definition --cli-input-json file://flask-signup-v_0.json
{
  "taskDefinition": {
    "volumes": [],
    "taskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/flask-signup:1",
    "containerDefinitions": [
      {
        "name": "flask-signup",
        "image": "your-repository/flask-signup:v_0",
        "cpu": 10,
        "portMappings": [
          {
            "containerPort": 5000,
            "hostPort": 80
          }
        ],
        "memory": 256,
        "essential": true
      }
    ],
    "family": "flask-signup",
    "revision": 1
  }
}

Make note of the family value (flask-signup), as it will be needed when configuring the Execute shell step in the Jenkins job.

 

Create the ECS IAM Role, an ELB and your service definition

Create a new IAM role (e.g. ecs-service-role), select the Amazon EC2 Container Service Role type and attach the AmazonEC2ContainerServiceRole policy. This allows ECS to create and manage AWS resources, such as an ELB, on your behalf. Create an Amazon Elastic Load Balancing (ELB) load balancer to be used in your service definition and note the ELB name (e.g. elb-flask-signup-1985465812). Create the flask-signup-service service, specifying the task definition (e.g. flask-signup) and the ELB name (e.g. elb-flask-signup-1985465812):

# aws ecs create-service --cluster default --service-name flask-signup-service --task-definition flask-signup --load-balancers loadBalancerName=elb-flask-signup-1985465812,containerName=flask-signup,containerPort=5000 --role ecs-service-role --desired-count 0
{
  "service": {
    "status": "ACTIVE",
    "taskDefinition": "arn:aws:ecs:us-east-1:123456789012:task-definition/flask-signup:1",
    "desiredCount": 0,
    "serviceName": "flask-signup-service",
    "clusterArn": "arn:aws:ecs:us-east-1:123456789012:cluster/default",
    "serviceArn": "arn:aws:ecs:us-east-1:123456789012:service/flask-signup-service",
    "runningCount": 0
  }
}

Since we have not yet built a Docker image for our task, make sure to set the --desired-count flag to 0.
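
For reference, the load balancer mentioned above can also be created from the CLI. This is a minimal sketch for a classic ELB listening on port 80; the load balancer name and Availability Zone are placeholders:

aws elb create-load-balancer --load-balancer-name elb-flask-signup --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" --availability-zones us-east-1a

The name you pass to --load-balancer-name is the value referenced in the create-service command above.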

 

Configure the Jenkins build

On the Jenkins dashboard, click on New Item, select the Freestyle project job, add a name for the job, and click OK. Configure the Jenkins job:

Under GitHub Project, add the path of your GitHub repository – e.g. https://github.com/awslabs/py-flask-signup-docker. In addition to the application source code, the repository contains the Dockerfile used to build the image, as explained at the beginning of this walkthrough. 

Under Source Code Management provide the Repository URL for Git, e.g. https://github.com/awslabs/py-flask-signup-docker.

In the Build Triggers section, select Build when a change is pushed to GitHub.

In the Build section, add a Docker build and publish step to the job and configure it to publish to your Docker registry repository (e.g. DockerHub) and add a tag to identify the image (e.g. v_$BUILD_NUMBER). 

 

The Repository Name specifies the name of the Docker repository where the image will be published; this is composed of a user name (dstroppa) and an image name (flask-signup). In our case, the Dockerfile sits in the root path of our repository, so we won’t specify any path in the Directory Dockerfile is in field. Note, the repository name needs to be the same as what is used in the task definition template in flask-signup.json.

Add an Execute Shell step and add the ECS CLI commands to start a new task on your ECS cluster.

The script for the Execute shell step will look like this:

#!/bin/bash
SERVICE_NAME="flask-signup-service"
IMAGE_VERSION="v_"${BUILD_NUMBER}
TASK_FAMILY="flask-signup"

# Create a new task definition for this build
sed -e "s;%BUILD_NUMBER%;${BUILD_NUMBER};g" flask-signup.json > flask-signup-v_${BUILD_NUMBER}.json
aws ecs register-task-definition --family flask-signup --cli-input-json file://flask-signup-v_${BUILD_NUMBER}.json

# Update the service with the new task definition and desired count
TASK_REVISION=`aws ecs describe-task-definition --task-definition flask-signup | egrep "revision" | tr "/" " " | awk '{print $2}' | sed 's/"$//'`
DESIRED_COUNT=`aws ecs describe-services --services ${SERVICE_NAME} | egrep "desiredCount" | tr "/" " " | awk '{print $2}' | sed 's/,$//'`
if [ ${DESIRED_COUNT} = "0" ]; then
    DESIRED_COUNT="1"
fi

aws ecs update-service --cluster default --service ${SERVICE_NAME} --task-definition ${TASK_FAMILY}:${TASK_REVISION} --desired-count ${DESIRED_COUNT}

To trigger the build process on Jenkins upon pushing to the GitHub repository we need to configure a service hook on GitHub. Go to the GitHub repository settings page, select Webhooks and Services and add a service hook for Jenkins (GitHub plugin). Add the Jenkins hook url: http://<username>:<password>@<EC2-DNS-Name>/github-webhook/.

 

 

We have now configured the Jenkins job so that whenever a change is committed to the GitHub repository, it triggers the build process on Jenkins.

 

Happy building

From your local repository, push the application code to GitHub:

# git add *
# git commit -m "Kicking off Jenkins build"
# git push origin master

This will trigger the Jenkins job. After the job is completed, point your browser to the public DNS name for your EC2 container instance and verify that the application is correctly running:

 

Conclusion

In this walkthrough we demonstrated how to use Jenkins to automate the deployment of an ECS service. See the documentation for further information on Amazon ECS.

 

 

Using OpsWorks to Perform Operational Tasks

Post Syndicated from Chris Barclay original http://blogs.aws.amazon.com/application-management/post/Tx1GHWXXODNKSKD/Using-OpsWorks-to-Perform-Operational-Tasks

Today Jeff Barr blogged about a new feature that gives users the ability to deploy and operate applications on existing Amazon EC2 instances and on-premises servers with AWS OpsWorks. You may know OpsWorks as a service that lets users deploy and manage applications. However, OpsWorks can also perform operational tasks that simplify server management. This post includes three examples of how to use OpsWorks to manage instances. The examples create EC2 instances with OpsWorks, but you can also use the newly launched features to register on-premises servers or existing EC2 instances.

Example 1: Use OpsWorks to perform tasks on instances  

Server administrators must often perform routine tasks on multiple instances, such as installing software updates. In the past you might have logged in with SSH to each instance and run the commands manually. With OpsWorks you can now perform these tasks on every instance with a single command as often as you like by using predefined scripts and Chef recipes. You can even have OpsWorks run your recipes automatically at key points in the instance’s life cycle, such as after the instance boots or when you deploy an app. This example will show how you can run a simple shell command and get the response back on the console.

Step 1: Create a stack

To get started, open the AWS Management Console. Your first task is to create a stack:

Select Add a Stack to create an OpsWorks stack.

Give it a name and select Advanced.

Set Use custom Chef Cookbooks to Yes.

Set Repository type to Git.

Set the Repository URL to https://github.com/amazonwebservices/opsworks-first-cookbook

Accept the defaults for the other settings and click the Add Stack button at the bottom of the page to create the stack.

Step 2: Add a Layer

An OpsWorks layer is a template that specifies how to configure a related set of EC2 instances. For this example:

Select Add a Layer

Choose a Custom layer; give it a Name and Short Name. The short name should be all lower case with no spaces or punctuation.

Step 3: Add an Instance

You now need to add some instances to the layer: 

Click Instances in the navigation pane and under the layer you just created click + Instance to create a new EC2 instance. You can also Register an on-premises instance in this step.

For this walkthrough, just accept the default settings and click Add Instance to add the instance to the layer.

Click start in the row’s Actions column and OpsWorks will then launch a new EC2 instance. The instance’s status will change to online when it’s ready.

Step 4: Run a command

This step shows how to run a command that executes one of the custom recipes that you installed earlier. It detects whether the instance is vulnerable to Shellshock.

Click Stack

Click Run Command

Select “Execute Recipes” from the drop down

Set Recipes to execute to shellout 

Select Advanced

Copy the following to the Custom Chef JSON box:

{ "shellout" : { "code" : "env x='() { :;}; echo vulnerable’ bash -c ‘echo this is a test’" } }

Click Execute Recipes
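
The same run can also be started from the AWS CLI with the create-deployment API. This is a sketch; the stack ID and instance ID are placeholders, and the custom JSON here simply echoes a test string rather than the Shellshock probe:

aws opsworks create-deployment --stack-id YOUR-STACK-ID --instance-ids YOUR-INSTANCE-ID --command '{"Name":"execute_recipes","Args":{"recipes":["shellout"]}}' --custom-json '{"shellout":{"code":"echo this is a test"}}'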

 

Step 5: View the results

 

Once the recipe run has completed, you can view the results by selecting the View link under Logs. About halfway down the log file, you should see the output:

[2014-12-03T23:49:03+00:00] INFO: @@@
this is a test
@@@

Next steps

It’s usually a better practice to put each script you plan to run into a Chef recipe. It improves consistency and avoids incorrect results. You can easily include Bash, Python, and Ruby scripts in a recipe. For example, the following recipe is basically a wrapper for a one-line Bash script:

bash "change system greeting" do
user "root"
code <<-EOH
echo "Hello OpsWorks World" > /etc/motd
EOH
end

Example 2: Manage operating system users and ssh/sudo access

It is often useful to be able to grant multiple users SSH access to an EC2 instance. However Amazon EC2 installs only one SSH key when it launches an instance. With OpsWorks, each user can have their own SSH key and you can use OpsWorks to grant SSH and sudo permissions to selected users. OpsWorks then automatically adds the users’ keys to the instance’s authorized_keys file. If a user no longer needs SSH access, you remove those permissions and OpsWorks automatically removes the key.

Step 1: Import users into AWS OpsWorks

Sign in to AWS OpsWorks as an administrative user or as the account owner.

Click Users on the upper right to open the Users page.

Click Import IAM Users to display the users that have not yet been imported.

Select the users you want, then click Import to OpsWorks.


 

Step 2: Edit user settings

On the Users page, click edit in the user’s Actions column.

Enter a public SSH key for the user and give the user the corresponding private key. The public key will appear on the user’s My Settings page. For more information, see Setting an IAM User’s Public SSH Key. If you enable self-management, the user can specify his or her own key.

Set the user’s permission levels for the stack you created in Example 1 to include "SSH" access. You can also set permissions separately by using each stack’s Permissions page.
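
The same SSH and sudo grant can be made from the AWS CLI with set-permission. This is a sketch; the stack ID and the user’s IAM ARN are placeholders:

aws opsworks set-permission --stack-id YOUR-STACK-ID --iam-user-arn arn:aws:iam::123456789012:user/admin --allow-ssh --allow-sudo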


Step 3: SSH to the instance

Click Dashboard on the upper right to open the Dashboard page.

Select the stack you created in Example 1 and navigate to Instances.

Select the instance you created in Example 1.

In the Logs section you will see the execute_recipes command that added the user and the user’s public key to the instance. When this command has completed, as indicated by the green check, select the SSH button at the top of the screen to launch an SSH client. You can then sign into the instance with your username and private key.

Example 3: Archive a file to Amazon S3

There are times when you may want to archive a file, for example to investigate a problem later. This script will send a file from an instance to S3.

Step 1: Create or select an existing S3 bucket

Open the S3 console and create a new bucket or select an existing bucket to use for this example.

Step 2: Run a command to push a file to S3

Using the stack you created in Example 1, navigate to Stack

Select Run Command

Select “Execute Recipes” from the drop down menu

Set Recipes to execute to sample::push-s3

Select Advanced

Set Custom Chef JSON to

{ "s3": {
"filename": "opsworks-agent.log",
"bucketname": "your-s3-bucket-name",
"filepath": "/var/log/aws/opsworks/opsworks-agent.log"
} }

The sample::push-s3 recipe was included in the cookbook that you installed earlier. It gets the required information from the JSON and uses the AWS Ruby SDK to upload the file to S3.

Click Execute Recipes

Step 3: View the file in S3

The file you selected in step 2 should now be in your bucket.
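
You can also verify from the command line. For example, assuming the bucket name used in the custom JSON above:

aws s3 ls s3://your-s3-bucket-name/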

These examples demonstrate three ways that OpsWorks can be used for more than software configuration. See the documentation for more information on how to manage on-premises and EC2 instances with OpsWorks.
