All posts by Daniele Stroppa

Build a Continuous Delivery Pipeline for Your Container Images with Amazon ECR as Source

Post Syndicated from Daniele Stroppa original https://aws.amazon.com/blogs/devops/build-a-continuous-delivery-pipeline-for-your-container-images-with-amazon-ecr-as-source/

Today, we are launching support for Amazon Elastic Container Registry (Amazon ECR) as a source provider in AWS CodePipeline. You can now initiate an AWS CodePipeline pipeline update by uploading a new image to Amazon ECR. This makes it easier to set up a continuous delivery pipeline and use the AWS Developer Tools for CI/CD.

You can use Amazon ECR as a source if you’re implementing a blue/green deployment with AWS CodeDeploy from the AWS CodePipeline console. For more information about using the Amazon Elastic Container Service (Amazon ECS) console to implement a blue/green deployment without CodePipeline, see Implement Blue/Green Deployments for AWS Fargate and Amazon ECS Powered by AWS CodeDeploy.

This post shows you how to create a complete, end-to-end continuous deployment (CD) pipeline with Amazon ECR and AWS CodePipeline. It walks you through setting up a pipeline to build your images when the upstream base image is updated.

Prerequisites

To follow along, you must have these resources in place:

  • A source control repository with your base image Dockerfile and a Docker image repository to store your image. In this walkthrough, we use a simple Dockerfile for the base image:
    FROM alpine:3.8
    RUN apk update
    RUN apk add nodejs
  • A source control repository with your application Dockerfile and source code and a Docker image repository to store your image. For the application Dockerfile, we use our base image and then add our application code:
    FROM 012345678910.dkr.ecr.us-east-1.amazonaws.com/base-image
    ENV PORT=80
    EXPOSE $PORT
    COPY app.js /app/
    CMD ["node", "/app/app.js"]

This walkthrough uses AWS CodeCommit for the source control repositories and Amazon ECR for the Docker image repositories. For more information, see Create an AWS CodeCommit Repository in the AWS CodeCommit User Guide and Creating a Repository in the Amazon Elastic Container Registry User Guide.

Note: The source control repositories and image repositories must be created in the same AWS Region.

Set up IAM service roles

In this walkthrough, you use AWS CodeBuild and AWS CodePipeline to build your Docker images and push them to Amazon ECR. Both services use AWS Identity and Access Management (IAM) service roles to make calls to Amazon ECR API operations. The service roles must have a policy that provides permissions to make these Amazon ECR calls. The following procedure helps you attach the required permissions to the CodeBuild service role.

To create the CodeBuild service role

  1. Follow these steps to use the IAM console to create a CodeBuild service role.
  2. On step 10, make sure to also add the AmazonEC2ContainerRegistryPowerUser policy to your role (a CLI sketch of this step follows below).

CodeBuild service role policies
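
If you script your setup instead of using the console, a rough AWS CLI equivalent of attaching that managed policy is shown below; the role name CodeBuildServiceRole is a placeholder for whatever you named your CodeBuild service role:

$ aws iam attach-role-policy \
    --role-name CodeBuildServiceRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser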

Create a build specification file for your base image

A build specification file (or build spec) is a collection of build commands and related settings, in YAML format, that AWS CodeBuild uses to run a build. Add a buildspec.yml file to your source code repository to tell CodeBuild how to build your base image. The example build specification used here does the following:

  • Pre-build stage:
    • Sign in to Amazon ECR.
    • Set the repository URI to your ECR image and add an image tag with the first seven characters of the Git commit ID of the source.
  • Build stage:
    • Build the Docker image and tag the image with latest and the Git commit ID.
  • Post-build stage:
    • Push the image with both tags to your Amazon ECR repository.

version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=012345678910.dkr.ecr.us-east-1.amazonaws.com/base-image
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG

To add a buildspec.yml file to your source repository

  1. Open a text editor and then copy and paste the build specification above into a new file.
  2. Replace the REPOSITORY_URI value (012345678910.dkr.ecr.us-east-1.amazonaws.com/base-image) with your Amazon ECR repository URI (without any image tag) for your Docker image. Replace base-image with the name for your base Docker image.
  3. Commit and push your buildspec.yml file to your source repository.
    git add .
    git commit -m "Adding build specification."
    git push

Create a build specification file for your application

Add a buildspec.yml file to your source code repository to tell CodeBuild how to build your source code and your application image. The example build specification used here does the following:

  • Pre-build stage:
    • Sign in to Amazon ECR.
    • Set the repository URI to your ECR image and add an image tag based on the CodeBuild build ID.
  • Build stage:
    • Build the Docker image and tag the image with latest and the build ID tag.
  • Post-build stage:
    • Push the image with both tags to your ECR repository.

version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=012345678910.dkr.ecr.us-east-1.amazonaws.com/hello-world
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=build-$(echo $CODEBUILD_BUILD_ID | awk -F":" '{print $2}')
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
artifacts:
  files:
    - imageDetail.json

To add a buildspec.yml file to your source repository

  1. Open a text editor and then copy and paste the build specification above into a new file.
  2. Replace the REPOSITORY_URI value (012345678910.dkr.ecr.us-east-1.amazonaws.com/hello-world) with your Amazon ECR repository URI (without any image tag) for your Docker image. Replace hello-world with the container name in your service’s task definition that references your Docker image.
  3. Commit and push your buildspec.yml file to your source repository.
    git add .
    git commit -m "Adding build specification."
    git push

Create a continuous deployment pipeline for your base image

Use the AWS CodePipeline wizard to create your pipeline stages:

  1. Open the AWS CodePipeline console at https://console.aws.amazon.com/codepipeline/.
  2. On the Welcome page, choose Create pipeline.
    If this is your first time using AWS CodePipeline, an introductory page appears instead of Welcome. Choose Get Started Now.
  3. On the Step 1: Name page, for Pipeline name, type the name for your pipeline and choose Next step. For this walkthrough, the pipeline name is base-image.
  4. On the Step 2: Source page, for Source provider, choose AWS CodeCommit.
    1. For Repository name, choose the name of the AWS CodeCommit repository to use as the source location for your pipeline.
    2. For Branch name, choose the branch to use, and then choose Next step.
  5. On the Step 3: Build page, choose AWS CodeBuild, and then choose Create project.
    1. For Project name, choose a unique name for your build project. For this walkthrough, the project name is base-image.
    2. For Operating system, choose Ubuntu.
    3. For Runtime, choose Docker.
    4. For Version, choose aws/codebuild/docker:17.09.0.
    5. For Service role, choose Existing service role, choose the CodeBuild service role you’ve created earlier, and then clear the Allow AWS CodeBuild to modify this service role so it can be used with this build project box.
    6. Choose Continue to CodePipeline.
    7. Choose Next.
  6. On the Step 4: Deploy page, choose Skip and acknowledge the pop-up warning.
  7. On the Step 5: Review page, review your pipeline configuration, and then choose Create pipeline.

Base image pipeline

Create a continuous deployment pipeline for your application image

The execution of the application image pipeline is triggered by changes to the application source code and changes to the upstream base image. You first create a pipeline, and then edit it to add a second source stage.

    1. Open the AWS CodePipeline console at https://console.aws.amazon.com/codepipeline/.
    2. On the Welcome page, choose Create pipeline.
    3. On the Step 1: Name page, for Pipeline name, type the name for your pipeline, and then choose Next step. For this walkthrough, the pipeline name is hello-world.
    4. For Service role, choose Existing service role, and then choose the CodePipeline service role you modified earlier.
    5. On the Step 2: Source page, for Source provider, choose Amazon ECR.
      1. For Repository name, choose the name of the Amazon ECR repository to use as the source location for your pipeline. For this walkthrough, the repository name is base-image.

Amazon ECR source configuration

  6. On the Step 3: Build page, choose AWS CodeBuild, and then choose Create project.
    1. For Project name, choose a unique name for your build project. For this walkthrough, the project name is hello-world.
    2. For Operating system, choose Ubuntu.
    3. For Runtime, choose Docker.
    4. For Version, choose aws/codebuild/docker:17.09.0.
    5. For Service role, choose Existing service role, choose the CodeBuild service role you’ve created earlier, and then clear the Allow AWS CodeBuild to modify this service role so it can be used with this build project box.
    6. Choose Continue to CodePipeline.
    7. Choose Next.
  7. On the Step 4: Deploy page, choose Skip and acknowledge the pop-up warning.
  8. On the Step 5: Review page, review your pipeline configuration, and then choose Create pipeline.

The pipeline will fail because it is missing the application source code. Next, you edit the pipeline to add an additional action to the source stage.

  1. Open the AWS CodePipeline console at https://console.aws.amazon.com/codepipeline/.
  2. On the Welcome page, choose your pipeline from the list. For this walkthrough, the pipeline name is hello-world.
  3. On the pipeline page, choose Edit.
  4. On the Editing: hello-world page, in Edit: Source, choose Edit stage.
  5. Choose the existing source action, and choose the edit icon.
    1. Change Output artifacts to BaseImage, and then choose Save.
  6. Choose Add action, and then enter a name for the action (for example, Code).
    1. For Action provider, choose AWS CodeCommit.
    2. For Repository name, choose the name of the AWS CodeCommit repository for your application source code.
    3. For Branch name, choose the branch.
    4. For Output artifacts, specify SourceArtifact, and then choose Save.
  7. On the Editing: hello-world page, choose Save and acknowledge the pop-up warning.

Application image pipeline

Test your end-to-end pipeline

Your pipeline should have everything for running an end-to-end native AWS continuous deployment. Now, test its functionality by pushing a code change to your base image repository.

  1. Make a change to your configured source repository, and then commit and push the change (a CLI sketch follows this list).
  2. Open the AWS CodePipeline console at https://console.aws.amazon.com/codepipeline/.
  3. Choose your pipeline from the list.
  4. Watch the pipeline progress through its stages. As the base image is built and pushed to Amazon ECR, see how the second pipeline is triggered, too. When the execution of your pipeline is complete, your application image is pushed to Amazon ECR, and you are now ready to deploy your application. For more information about continuously deploying your application, see Create a Pipeline with an Amazon ECR Source and ECS-to-CodeDeploy Deployment in the AWS CodePipeline User Guide.
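
If you prefer to watch the pipelines from the command line, a minimal sketch follows; it assumes a local clone of your base image repository (the directory name base-image is hypothetical) and the pipeline names used in this walkthrough:

# Trigger the base image pipeline with an empty commit
$ cd base-image
$ git commit --allow-empty -m "Trigger base image rebuild"
$ git push
# Check the state of both pipelines
$ aws codepipeline get-pipeline-state --name base-image \
    --query 'stageStates[].{stage:stageName,status:latestExecution.status}'
$ aws codepipeline get-pipeline-state --name hello-world \
    --query 'stageStates[].{stage:stageName,status:latestExecution.status}'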

Conclusion

In this post, we showed you how to create a complete, end-to-end continuous deployment (CD) pipeline with Amazon ECR and AWS CodePipeline. You saw how to initiate an AWS CodePipeline pipeline update by uploading a new image to Amazon ECR. Support for Amazon ECR in AWS CodePipeline makes it easier to set up a continuous delivery pipeline and use the AWS Developer Tools for CI/CD.

Using Federation with Amazon ECR

Post Syndicated from Daniele Stroppa original https://aws.amazon.com/blogs/compute/using-federation-with-amazon-ecr/

This is a guest post from my colleague Asif Khan.

————————-

Federation is a mechanism to connect identity management systems together. A user’s credentials are always stored with the “home” organization (the identity provider). A service provider trusts the identity provider to validate credentials when the user logs into a service. The user never provides credentials directly to anybody but the identity provider.

Many of our customers use federation to manage secure access to systems and user stores and have expressed a requirement to use federation with Amazon ECR, and this post explains how to do that.

Overview of Amazon ECR

With Amazon EC2 Container Registry (Amazon ECR), customers can store their Docker images in highly available repositories managed by AWS. ECR encrypts images at rest using server-side encryption managed by Amazon S3, and also provides integration with AWS Identity and Access Management (IAM) to control who can access a given repository.

Amazon ECR makes it easy to set up repository policies that grant push/pull access to IAM users, roles, or other AWS accounts. In order to push and pull images to an ECR repository using the standard Docker commands, customers must first authenticate with ECR by obtaining an encrypted token to pass in the docker login command. The token is generated for the AWS identity (IAM user, group, role, etc.) that originally requested the token.

However, many customers might already have an identity store outside AWS that they would like to use to authenticate with ECR. With IAM identity federation support, customers can benefit from standardization on a protocol such as SAML, security control, and an improved user experience. Developers can use their corporate SAML identity provider to log in and use ECR seamlessly. IAM provides integration with a variety of SAML providers such as Auth0, Bitium, Okta, and Ping Identity, among others. For a full list, see Integrating Third-Party SAML Solution Providers with AWS.

Customers can also choose to implement federation themselves and control access by issuing their own tokens, which can then be used with the docker login command.

With IAM federation, you can set up federation with any of the supported identity providers. After federation is set up, developers or hosts can be granted tokens by the corporate identity provider (IdP), call AssumeRole, and switch to a role that has permission to access the ECR repository.

Workflow

Walkthrough
This post walks you through the process of using federation along with ECR. This walkthrough uses the SAML identity provider Auth0, but you can follow along with any other AWS-supported provider. The steps are:

  1. Set up federation between an identity provider and IAM.
  2. Set up a role and configure it for federation.
  3. Set up user permissions to assume the role.
  4. Set up ECR permissions to allow access to the repository.
  5. Use the temporary AWS credentials returned by assumeRole when calling aws ecr get-login to obtain the docker auth token.
  6. Push and pull from an ECR repository using the Docker commands.

Set up federation
Follow the steps in the following topic to set up federation against an identity provider of your choice: Integrating Third-Party SAML Solution Providers with AWS. This walkthrough uses Auth0 as an example. Next, register an app on Auth0 or your IDP.
In the IAM console, set up the identity provider on AWS.
SAML Provider

Set up a role
Next, create a role with read-only access to assume. Note the role name for later.
IAM Role

Set up your user permissions to assume the new role
To switch to a role using the AWS CLI, follow the steps in Switching to an IAM Role (AWS Command Line Interface) and grant your users permission to switch to the role that you created in the previous step.
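
As a concrete sketch of that setup, the CLI profile below lets you run commands with --profile testSAML; the account ID and the TestSAML role name are placeholders for your own values, and it assumes your base credentials live in the default profile:

# Append an assume-role profile to your AWS CLI configuration
$ cat >> ~/.aws/config <<'EOF'
[profile testSAML]
role_arn = arn:aws:iam::123456789012:role/TestSAML
source_profile = default
EOF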

Set up permissions for the ECR repository
With ECR, you can use the permissions tool to restrict access to selected principals. Choose the previously created "TestSAML" role as the principal in the permissions.

ECR Permissions
ECR Actions
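
The console permissions tool writes a repository policy for you. If you prefer the CLI, the following sketch grants pull access to the TestSAML role; the repository name, account ID, and the exact set of actions shown are illustrative:

$ aws ecr set-repository-policy \
    --repository-name my-web-app \
    --policy-text '{
      "Version": "2008-10-17",
      "Statement": [{
        "Sid": "AllowPullFromTestSAML",
        "Effect": "Allow",
        "Principal": { "AWS": "arn:aws:iam::123456789012:role/TestSAML" },
        "Action": [
          "ecr:GetDownloadUrlForLayer",
          "ecr:BatchGetImage",
          "ecr:BatchCheckLayerAvailability"
        ]
      }]
    }'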

Use temporary AWS credentials
Use the temporary AWS credentials when calling aws ecr get-login to obtain the docker auth token. After you have assumed the role, you can enter the following command:


$ aws ecr get-login --profile testSAML

The results should be as follows:


docker login -u AWS -p <password> -e none https://<aws_account_id>.dkr.ecr.us-west-2.amazonaws.com

Log in to Docker and push/pull
Using the encrypted credentials returned from the previous command, you can now log in to Docker:


$ docker login -u AWS -p <password> -e none https://<aws_account_id>.dkr.ecr.us-west-2.amazonaws.com

You can now push or pull to and from Amazon ECR using the following commands:


$ docker push aws_account_id.dkr.ecr.us-west-2.amazonaws.com/my-web-app
$ docker pull aws_account_id.dkr.ecr.us-west-2.amazonaws.com/my-web-app:latest

Conclusion
When you run microservices on Amazon ECS, you can pull images from Amazon ECR securely using federation and your identity provider. Federation includes benefits such as standardization on SAML, enhanced security, and an improved user experience. Use this walkthrough with any other identity provider, and achieve the same results.

If you have questions or suggestions, please comment below.

Authenticating Amazon ECR Repositories for Docker CLI with Credential Helper

Post Syndicated from Daniele Stroppa original https://aws.amazon.com/blogs/compute/authenticating-amazon-ecr-repositories-for-docker-cli-with-credential-helper/

This is a guest post from my colleagues Ryosuke Iwanaga and Prahlad Rao.

————————

Developers building and managing microservices and containerized applications using Docker containers require a secure, scalable repository to store and manage Docker images. To securely access the repository, proper authentication from the Docker client to the repository is important, but re-authenticating or refreshing the authentication token every few hours can be cumbersome.

This post gives a quick overview of Amazon ECR and shows how deploying the Amazon ECR Docker Credential Helper can automate authentication token refresh for Docker push and pull requests.

Overview of Amazon ECS and Amazon ECR
Amazon ECS is a highly scalable, fast container management service that makes it easy to run and manage Docker containers on a cluster of Amazon EC2 instances, eliminating the need to operate your own cluster management software or worry about scaling your management infrastructure.

In order to reliably store Docker images on AWS, ECR provides a managed Docker registry service that is secure, scalable, and reliable. ECR is a private Docker repository with resource-based permissions using IAM so that users or EC2 instances can access repositories and images through the Docker CLI to push, pull, and manage images.

Manual ECR authentication with the Docker CLI
Most commonly, developers use the Docker CLI to push and pull images or automate these operations as part of a CI/CD workflow. Because the Docker CLI does not support standard AWS authentication methods, client authentication must be handled separately so that ECR knows who is requesting to push or pull an image.

This can be done with a docker login command to authenticate to an ECR registry that provides an authorization token valid for 12 hours. One of the reasons for the 12-hour validity and subsequent necessary token refresh is that the Docker credentials are stored in a plain-text file and can be accessed if the system is compromised, which essentially gives access to the images. Authenticating every 12 hours ensures appropriate token rotation to protect against misuse.

If you’re using the AWS CLI, you can use the simpler get-login command, which retrieves the token, decodes it, and converts it into a docker login command for you. An example for the default registry associated with the account is shown below:


$ aws ecr get-login
docker login -u AWS -p password -e none https://aws_account_id.dkr.ecr.us-east-1.amazonaws.com

To access other account registries, use the --registry-ids <aws_account_id> option.

As you can see, the resulting output is a docker login command that you can use to authenticate your Docker client to your ECR registry. The generated token is valid for 12 hours, which means developers running and managing container images have to re-authenticate every 12 hours manually, or script it to generate a new token, which can be somewhat cumbersome in a CI/CD environment. For example, if you’re using Jenkins to build and push Docker images to ECR, you have to set up your Jenkins instances to re-authenticate to ECR using get-login every 12 hours.

If you want a programmatic approach, you can use GetAuthorizationToken from the AWS SDK to fetch credentials for Docker. GetAuthorizationToken returns a base64-encoded authorization token that can be decoded into a username and password, with "AWS" as the username and a temporary token as the password.
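
As a rough sketch of that programmatic flow with the AWS CLI (the decoded token has the form AWS:password, so the cut keeps only the password):

$ aws ecr get-authorization-token \
    --query 'authorizationData[0].authorizationToken' --output text \
    | base64 --decode | cut -d: -f2    # on older macOS, use base64 -D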

It’s important to note that when executing docker login commands, the command string can be visible to other users on the system in a process list, e.g., ps -e, meaning other users can view authentication credentials and gain push and pull access to repositories. To avoid this, you can log in interactively by omitting the -p password option and entering the password only when prompted. Overall, this may add additional overhead in a continuous development environment where developers need to worry about re-authentication every few hours.

Amazon ECR Docker Credential Helper
This is where the Amazon ECR Docker Credential Helper comes in: it makes it easy for developers to use ECR without needing to run docker login or write token refresh logic, and it provides transparent access to ECR repositories.

Credential Helper helps developers in a continuous development environment to automate the authentication process to ECR repositories without having to regenerate tokens every 12 hours. In addition, Credential Helper also provides token caching under the hood so you don’t have to worry about getting throttled or writing additional logic.

You can access Credential Helper in the amazon-ecr-credential-helper GitHub repository.

Using Credential Helper on Linux/Mac and Windows
The prerequisites include:

  • Docker 1.11 or above installed on your system
  • AWS credentials available in one of the standard locations:
    • ~/.aws/credentials file
    • AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables
    • IAM role for Amazon EC2

First, build a binary for your client machine. Although you can do this with your own Go environment, we also provide a way to build it inside a Docker container without installing Go yourself. To build in a container, just run make docker in the root directory of the repository. It runs a container from the official go image and builds the binary on the mounted volume. After that, you can find it at ./bin/local/docker-credential-ecr-login.

Note: You need to run this with a local Docker engine, as a remote Docker engine can’t mount your local volume.

You can also cross-compile the binary (a combined sketch follows this list):

  • To build a Mac binary, use make docker TARGET_GOOS=darwin
  • To build a Windows binary, use make docker TARGET_GOOS=windows

With these commands, Go builds the binary for the target OS inside the Linux container.
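
Putting those steps together, a build might look like the following sketch; it assumes Docker, git, and make are available and that you want the binary in /usr/local/bin:

$ git clone https://github.com/awslabs/amazon-ecr-credential-helper.git
$ cd amazon-ecr-credential-helper
$ make docker                        # Linux binary, built inside a Go container
# make docker TARGET_GOOS=darwin     # or build a Mac binary
# make docker TARGET_GOOS=windows    # or build a Windows binary
$ sudo cp bin/local/docker-credential-ecr-login /usr/local/bin/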

The last thing you need to do is create a Docker configuration file for the helper. Put the file under ~/.docker/config.json or C:\Users\bob\.docker\config.json with the following content:


{
    "credsStore": "ecr-login"
}

Now, you can use the docker command to interact with ECR without docker login. When you type docker push/pull YOUR_ECR_IMAGE_ID, Credential Helper is called and communicates with the ECR endpoint to get the Docker credentials. Because it automatically detects the proper region from the image ID, you don’t have to worry about it.

Using Credential Helper with Jenkins
One of the common customer deployment patterns with ECS and ECR is integrating with existing CI/CD tools like Jenkins. Using Credential Helper, your Docker CI/CD setup with Jenkins is much simpler and more reliable.
To set up ECR as a Docker image repository for Jenkins and configure Credential Helper:

  • Ensure that your Jenkins instance has the proper AWS credentials to pull/push with your ECR repository. These can be in the form of environment variables, a shared credential file, or an instance profile.
  • Place the docker-credential-ecr-login binary in one of the directories in $PATH (see the sketch after this list).
  • Write the Docker configuration file under the home directory of the Jenkins user, for example, /var/lib/jenkins/.docker/config.json.
  • Install the Docker Build and Publish plugin and make sure that the jenkins user can contact the Docker daemon.
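
A sketch of those steps on a Linux Jenkins host follows; it assumes the helper binary was built as shown earlier, that Jenkins runs as the jenkins user with its home at /var/lib/jenkins, and that an instance profile supplies the AWS credentials:

$ sudo cp bin/local/docker-credential-ecr-login /usr/local/bin/
$ sudo mkdir -p /var/lib/jenkins/.docker
$ echo '{ "credsStore": "ecr-login" }' | sudo tee /var/lib/jenkins/.docker/config.json
$ sudo chown -R jenkins:jenkins /var/lib/jenkins/.docker
$ sudo usermod -aG docker jenkins    # let the jenkins user reach the Docker daemon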

Then, create a project with a build step, as in the following screenshot:

Jenkins Build Step

Now Jenkins can push/pull images to the ECR registry without needing to refresh tokens, just like your previous Docker CLI experience.

Conclusion
The Amazon ECR Docker Credential Helper provides a very efficient way to access ECR repositories. It works transparently, so you no longer need to think about the helper after setup. This tool is hosted on GitHub and we welcome your feedback and pull requests.

If you have any questions or suggestions, please comment below.

Run Containerized Microservices with Amazon EC2 Container Service and Application Load Balancer

Post Syndicated from Daniele Stroppa original https://aws.amazon.com/blogs/compute/microservice-delivery-with-amazon-ecs-and-application-load-balancers/

This is a guest post from Sathiya Shunmugasundaram, Gnani Dathathreya, and Jeff Storey from the Capital One Technology team.

—————–

At Capital One, we are rapidly embracing cloud-native microservices architectures and applying them to both existing and new workloads. To advance microservices adoption, increase efficiencies of cloud resources and decouple application layer from the underlying infrastructure, we are starting to use Docker to containerize the workloads and Amazon EC2 Container Service (Amazon ECS) to manage them.

Docker enables environment consistency and allows us to spin up new containers in the environment of our choice (Dev, QA, Performance, or Prod) in seconds, versus the minutes, hours, or days it used to take us in the past.

Amazon ECS gives us a platform to manage Docker containers with no hassle. We chose ECS because of its simplicity in deploying and managing containers. Our API platform is an early adopter and ECS-based deployments quickly became the norm for managing the lifecycle of stateless workloads.

Container orchestration has traditionally been challenging. With the Elastic Load Balancing Classic Load Balancer, we had limitations routing to multiple ports on the same server and were also unable to route to services based on context, which meant that we needed to use one load balancer per service. By using open source software like Consul, Nginx, and Registrator, it was possible to achieve dynamic service discovery and context-based routing, but at the cost of added complexity and the cost of running these additional components.

This post shows how the arrival of the Application Load Balancer has significantly simplified Docker based deployments on ECS and enabled delivering microservices in the cloud with enterprise class capabilities like service discovery, health checks, and load balancing.

Overview of Application Load Balancer

With the announcement of the new Application Load Balancer, we can take advantage of several out-of-the-box features that are readily integrated with ECS. With the dynamic port mapping option for containers, we can simply register a service with a load balancer, and ECS transparently manages the registration and deregistration of Docker containers. We no longer need to know the host port ahead of time, as the load balancer automatically detects it and dynamically reconfigures itself. In addition to the port mapping, we also get all the features of traditional load balancers, like health checking, connection draining, and access logs, to name a few.
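
For example, a service can be attached to a target group from the CLI roughly as follows; the cluster, service, task definition, and target group ARN are placeholders, and dynamic port mapping simply means the container's port mapping in the task definition sets hostPort to 0:

$ aws ecs create-service \
    --cluster my-cluster \
    --service-name service1 \
    --task-definition service1-task \
    --desired-count 3 \
    --role ecsServiceRole \
    --load-balancers targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/service1-tg/0123456789abcdef,containerName=service1,containerPort=80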

Similar to EC2 Auto Scaling, ECS also has the ability to auto scale services based on CloudWatch alarms. This functionality is critical, as it allows us to scale in or scale out based on demand. This feature, coupled with the new Application Load Balancer, gives us fully featured container orchestration. The pace at which we can now spin up new applications on ECS has greatly improved, and we can spend much less time managing orchestration tools.

The Application Load Balancer also introduces path-based routing. This feature allows us to easily map different URL paths to different services. In a traditional monolithic application, URL paths are often used to denote different parts of the application, for example, http://www.example.com/service1 and http://www.example.com/service2.

Traditionally this was done using a context root in an application server or by using different load balancers for each service. With the new path-based routing, these parts of the application can be split into individual ECS-backed services without changing the existing URL patterns. This makes migrating applications seamless, as clients can continue to call the same URLs. A large monolithic application with many subcontexts, like www.example.com/orders and www.example.com/inventory, can be refactored into smaller microservices, and each such path can be directed to a different target group of servers, such as an ECS service with Docker containers.
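
As a sketch, adding such a path-based rule from the CLI looks like the following; the listener and target group ARNs are placeholders, and the rule forwards /orders requests to an orders target group:

$ aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/0123456789abcdef/0123456789abcdef \
    --priority 10 \
    --conditions Field=path-pattern,Values='/orders*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/orders-tg/0123456789abcdef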

Key features of Application Load Balancers include:

  • Path-based routing – URL-based routing policies enable using the same ELB URL to route to different microservices
  • Routing to multiple ports on the same server
  • AWS integration – Integrated with many AWS services, such as ECS, IAM, Auto Scaling, and CloudFormation
  • Application monitoring – Improved metrics and health checks for the application

Core components of Application Load Balancers include:

  • Load balancer – The entry point for clients
  • Listener – Listens for requests from clients on a specific protocol/port and forwards them to one or more target groups based on rules
  • Rule – Determines how to route a request; based on a path-based condition and a priority, it matches the request to one or more target groups
  • Target – The entity that runs the backend servers; currently, EC2 instances are the available target type. The same EC2 instance can be registered multiple times with different ports
  • Target group – Each target group identifies a set of backend servers that can be routed to based on a rule. Health checks can be defined per target group. The same load balancer can have many target groups

ALB Components

Sample application architecture

In the following example, we show two services with three tasks deployed on two ECS container instances. This shows the ability to route to multiple ports on the same host. Also, there is only one Application Load Balancer that provides path-based routing for both ECS services, simplifying the architecture and reducing costs.

Sample App

Configuring Application Load Balancers

The following steps create and configure an Application Load Balancer using the AWS Management Console. These steps can also be done using the AWS CLI.
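
For reference, a rough CLI sketch of the same setup (load balancer, target group, and listener) is shown below; the names, subnet and security group IDs, and VPC ID are placeholders, and the ARNs returned by the first two commands feed the third:

$ aws elbv2 create-load-balancer --name my-alb \
    --subnets subnet-0123abcd subnet-4567efgh --security-groups sg-0123abcd
$ aws elbv2 create-target-group --name my-targets \
    --protocol HTTP --port 80 --vpc-id vpc-0123abcd
$ aws elbv2 create-listener \
    --load-balancer-arn <load-balancer-arn-from-the-first-command> \
    --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn-from-the-second-command>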

    1. Create an Application Load Balancer using the AWS Console
      • Log in to the EC2 Console (https://console.aws.amazon.com/ec2)
      • Select Load Balancers
      • Select Create Load Balancer
      • ALB Console

      • Choose Application Load Balancer
    2. Configure Load Balancer

      • For Name, type a name for your load balancer.
      • For Scheme, an Internet-facing load balancer routes requests from clients over the Internet to targets. An internal load balancer routes requests to targets using private IP addresses.
      • For Listeners, the default is a listener that accepts HTTP traffic on port 80. You can keep the default listener settings, modify the protocol or port of the listener, or choose Add to add another listener.
      • For VPC, select the same VPC that you used for the container instances on which you intend to run your service.
      • For Available subnets, select at least two subnets from different Availability Zones, and choose the icon in the Actions column.
    3. Configure Security Groups
      • You must assign a security group to your load balancer that allows inbound traffic to the ports that you specified for your listeners.

    4. Configure Routing
      • For Target group, keep the default, New target group.
      • For Name, type a name for the new target group.
      • Set Protocol and Port as needed.
      • For Health checks, keep the default health check settings.

      ALB Routing

    5. Register Targets
      • Your load balancer distributes traffic between the targets that are registered to its target groups. When you associate a target group to an Amazon ECS service, Amazon ECS automatically registers and deregisters containers with your target group. Because Amazon ECS handles target registration, you do not add targets to your target group at this time.

      ALB Targets

      • Click Next: Review
      • Click Create
    6. Registering Docker containers as a target
      • If you don’t already have an ECS cluster, open the Amazon ECS console first run wizard at https://console.aws.amazon.com/ecs/home#/firstRun.
      • From the ECS cluster’s Services tab, click Create
      • Create Service

      • Provide the Task definition for service
      • Provide a Service Name and number of tasks
      • Click the Configure ELB button
      • Choose Application Load Balancer
      • ECS Service ALB

      • Choose ecsServiceRole as the IAM role
      • Choose the Application Load Balancer created above
      • Select a container that you want the load balancer to use and click Add to ELB
      • ECS Service ALB

      • Select the target group name that was created above
      • Click Save and then Create Service

Cleaning up

  • Using the ECS Console, go to your Cluster and Service, update the number of tasks to 0, and then delete the service.
  • Using the EC2 Console, select the Load Balancer created above and delete.

Conclusion
Docker has given our developers a simplified application automation mechanism. Amazon ECS and Application Load Balancers make it easy to deliver these applications without needing to manage dynamic service discovery, load balancing, and container orchestration. Using ECS and Application Load Balancers, new services can be deployed in less than 30 minutes, and existing services can be updated with newer versions in less than a minute. ECS automatically takes care of rolling updates, and the Application Load Balancer takes care of registering new containers quickly and deregistering existing containers gracefully. This not only improves the agility of teams, but also reduces the overall time to market.

Help Secure Container-Enabled Applications with IAM Roles for ECS Tasks

Post Syndicated from Daniele Stroppa original https://aws.amazon.com/blogs/compute/help-secure-container-enabled-applications-with-iam-roles-for-ecs-tasks/

In Amazon ECS, you have always had the benefit of being able to use IAM roles for Amazon EC2 in order to simplify API requests from your containers. This also allows you to follow AWS best practices by not storing your AWS credentials in your code or configuration files, as well as providing benefits such as automatic key rotation.

For example, roles let you build systems for secret management using ECS and Amazon S3, use Amazon DynamoDB for state management, or use S3 to store artifacts generated or used by your containers, all without having to deal with AWS credentials in your code.

Previously, you had to use IAM roles for Amazon EC2, meaning the IAM policies you assigned to the EC2 instances in your ECS cluster had to contain all the IAM policies for the tasks performed within the same cluster. This meant that if you had one container that needed access to a specific S3 bucket and another container that needed access to a DynamoDB table, you had to assign both sets of IAM permissions to the same EC2 instance.

With the introduction of the newly-launched IAM roles for ECS tasks, you can now secure your infrastructure further by assigning an IAM role directly to the ECS task rather than to the EC2 container instance. This way, you can have one task that uses a specific IAM role for access to S3 and one task that uses an IAM role to access a DynamoDB table.

This feature also allows you to use a minimal IAM policy for the ECS cluster instances, because the instances only need the few IAM permissions required to interact with the ECS service.

This post walks you through setting up a task IAM role.

Prerequisites

If you haven’t yet done so, create an ECS cluster and launch at least one EC2 instance in the cluster. When you launch your EC2 instances, for the Container instance IAM role, choose an IAM role that has the AmazonEC2ContainerServiceforEC2Role policy attached to it.

If you have an existing cluster, use the ECS Optimized AMI 2016.03.e and the SDK released on July 13, 2016 or later to access this feature.

Walkthrough

For this walkthrough, you use a simple Node.js application that creates an Amazon S3 bucket and uploads a ‘Hello World’ file. You can find the source code for the application on the aws-nodejs-sample GitHub repository.

Build and push the Docker image

From your terminal application, execute the following command:

$ git clone https://github.com/awslabs/aws-nodejs-sample

This creates a directory named aws-nodejs-sample in your current directory that contains the code for the sample app. In the same directory, create a Dockerfile file, paste the following text into it, and save it.

FROM node:argon

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install

# Bundle app source
COPY sample.js /usr/src/app

CMD [ "node", "sample.js" ]

Create a repository on Amazon ECR named aws-nodejs-sample to store your image. Run the following commands to build and push your Docker image to your ECR repository, making sure to replace the AWS region and account ID with appropriate values.

$ docker build -t aws-nodejs-sample .

$ aws ecr get-login --region us-west-2 | sh

$ docker tag aws-nodejs-sample:latest 123456789012.dkr.ecr.us-west-2.amazonaws.com/aws-nodejs-sample:v1

$ docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/aws-nodejs-sample:v1

Create an IAM role for the task

Create an IAM role for your task. For AWS Service Roles, choose Amazon EC2 Container Service Task Role and on the Attach Policy screen, choose the AmazonS3FullAccess IAM managed policy.
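
If you’d rather script this step, a minimal CLI sketch of the same role follows; the role name sample-task-role is a placeholder, and the trust policy allows ECS tasks to assume the role:

$ aws iam create-role --role-name sample-task-role \
    --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "Service": "ecs-tasks.amazonaws.com" },
        "Action": "sts:AssumeRole"
      }]
    }'
$ aws iam attach-role-policy --role-name sample-task-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess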

Create a task definition and launch a task

Create a task definition for the sample app. Switch to the JSON builder by choosing Configure via JSON and paste in the following text, making sure to replace the AWS region and account ID with appropriate values.

{
    "containerDefinitions": [
        {
            "name": "sample-app",
            "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/aws-nodejs-sample:v1",
            "memory": 200,
            "cpu": 10,
            "essential": true,
            "portMappings": []
        }
    ],
    "volumes": [],
    "family": "aws-nodejs-sample"
}

For Task Role, select the IAM role that you created before and choose Create to create the task definition.
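
The console adds the selected role to the task definition for you. If you register the task definition from the CLI instead, the role is passed as the task role ARN; a sketch, with the account ID, region, role name, and cluster name as placeholders:

$ aws ecs register-task-definition \
    --family aws-nodejs-sample \
    --task-role-arn arn:aws:iam::123456789012:role/sample-task-role \
    --container-definitions '[{
      "name": "sample-app",
      "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/aws-nodejs-sample:v1",
      "memory": 200,
      "cpu": 10,
      "essential": true
    }]'
$ aws ecs run-task --cluster default --task-definition aws-nodejs-sample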

On the Task Definition page, select the revision you just created (for example, aws-nodejs-sample:1) and choose Actions, Run Task. Select your ECS cluster from the list and choose Run Task on the next screen to launch a task on your ECS cluster.
Open the Amazon S3 console and verify that a bucket has been created and that it contains the hello_world.txt file. The bucket name is in the node-sdk-sample-UUID format.

Note: To avoid unexpected charges, make sure to empty and delete the S3 bucket and to terminate the EC2 instances created in the example above.

Conclusion

As you learned in the example above, it is easy to follow AWS best practices around IAM usage and provide only the least privileges a task needs, thereby minimizing the risk of other tasks accessing data that they are not intended to access.
This also simplifies ECS cluster management, as you now have more freedom in bundling your tasks together on the same cluster instances. Keep in mind that you still need to handle security groups on a per-instance basis, but you can now be very granular when creating and assigning IAM policies.

Read more about Task IAM Roles in the ECS documentation. If you have any questions or suggestions, please comment below.