All posts by Nathan Peck

Setting up AWS PrivateLink for Amazon ECS, and Amazon ECR

Post Syndicated from Nathan Peck original https://aws.amazon.com/blogs/compute/setting-up-aws-privatelink-for-amazon-ecs-and-amazon-ecr/

Amazon ECS and Amazon ECR now have support for AWS PrivateLink. AWS PrivateLink is a networking technology designed to enable access to AWS services in a highly available and scalable manner. It keeps all the network traffic within the AWS network. When you create AWS PrivateLink endpoints for ECR and ECS, these service endpoints appear as elastic network interfaces with a private IP address in your VPC.

Before AWS PrivateLink, your Amazon EC2 instances had to use an internet gateway to download Docker images stored in ECR or communicate to the ECS control plane. Instances in a public subnet with a public IP address used the internet gateway directly. Instances in a private subnet used a network address translation (NAT) gateway hosted in a public subnet. The NAT gateway would then use the internet gateway to talk to ECR and ECS.

Now that AWS PrivateLink support has been added, instances in both public and private subnets can use it to get private connectivity to download images from Amazon ECR. Instances can also communicate with the ECS control plane via AWS PrivateLink endpoints without needing an internet gateway or NAT gateway.

 

This networking architecture is considerably simpler. It enables enhanced security by allowing you to deny your private EC2 instances access to anything other than these AWS services. That’s assuming that you want to block all other outbound internet access for those instances. For this to work, you must create some AWS PrivateLink resources:

  • AWS PrivateLink endpoints for ECR. These allow instances in your VPC to communicate with ECR to download image manifests.
  • Gateway VPC endpoint for Amazon S3. This allows instances to download the image layers from the underlying private Amazon S3 buckets that host them.
  • AWS PrivateLink endpoints for ECS. These endpoints allow instances to communicate with the telemetry and agent services in the ECS control plane.

This post explains how to create these resources.

Create an AWS PrivateLink interface endpoint for ECR

ECR requires two interface endpoints:

  • com.amazonaws.region.ecr.api
  • com.amazonaws.region.ecr.dkr

In the VPC console, create the interface VPC endpoints for ECR using the endpoint creation wizard. Choose AWS services and select each of the ECR endpoints in turn, substituting your AWS Region of choice in the service name.

Next, specify the VPC and subnets to which the AWS PrivateLink interface should be added. Make sure that you select the same VPC in which your ECS cluster is running. To be on the safe side, select every Availability Zone from the list, and all of the available subnets in each zone.

However, depending on your networking needs, you might also choose to only enable the AWS PrivateLink endpoint in your private subnets from each Availability Zone. Let instances running in a public subnet continue to communicate with ECR via the public subnet’s internet gateway.

Next, enable Private DNS Name, which is required for the com.amazonaws.region.ecr.dkr endpoint.

A private hosted zone enables you to access the resources in your VPC using the Amazon ECR default DNS domain names. You don’t need to use the private IPv4 address or the private DNS hostnames provided by Amazon VPC endpoints. The Amazon ECR DNS hostname that the AWS CLI and Amazon ECR SDKs use by default (https://api.ecr.region.amazonaws.com) resolves to your VPC endpoint.

If you enabled a private hosted zone for com.amazonaws.region.ecr.api and you are using an SDK released before January 24, 2019, you must specify the endpoint explicitly when using the SDK or the AWS CLI. For example:

aws --endpoint-url https://api.ecr.region.amazonaws.com ecr describe-repositories

If you don’t enable a private hosted zone, use the following command:

aws --endpoint-url https://VPC_Endpoint_ID.api.ecr.region.vpce.amazonaws.com ecr describe-repositories

If you enabled a private hosted zone and you are using the SDK released on January 24, 2019 or later, use the following command:

aws ecr describe-repositories

Lastly, specify a security group for the interface itself. This controls which hosts are able to talk to the endpoint. The security group should allow inbound connections on port 443 (HTTPS) from the instances in your cluster.

You may have a security group that is applied to all the EC2 instances in the cluster, perhaps using an Auto Scaling group. You can create a rule that allows the VPC endpoint to be accessed by any instance in that security group.

Finally, choose Create endpoint. The new endpoint appears in the list.
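
If you prefer to script this step, the same endpoints can be created with the AWS CLI. The following is a minimal sketch that assumes us-east-1 and uses placeholder VPC, subnet, and security group IDs; substitute your own values:

# Create the two ECR interface endpoints with private DNS enabled
for service in ecr.api ecr.dkr; do
  aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.$service \
    --vpc-id vpc-0123456789abcdef0 \
    --subnet-ids subnet-aaaa1111 subnet-bbbb2222 \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled
done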

Add a gateway VPC endpoint for S3

The next step is to create a gateway VPC endpoint for S3. This is necessary because ECR uses S3 to store Docker image layers. When your instances download Docker images from ECR, they must access ECR to get the image manifest and S3 to download the actual image layers.

S3 uses a slightly different endpoint type, called a gateway. Be careful about adding an S3 gateway to your VPC if your application is actively using S3: existing connections to S3 may be briefly interrupted while the gateway is being added. This matters if you have a busy cluster with many active ECS deployments pulling image layers from S3, or if your application itself makes heavy use of S3. In that case, it’s safest to create a fresh VPC with an S3 gateway, then migrate your ECS cluster and its containers into that VPC.

To add the S3 gateway endpoint, select com.amazonaws.region.s3 from the list of AWS services and select the VPC hosting your ECS cluster. Gateway endpoints are added via the route tables of your subnets, so select each route table associated with the subnets that should use the S3 gateway.

Instead of using a security group, the gateway endpoint uses an IAM policy document to limit access to the service. This policy is similar to an IAM policy but does not replace the default level of access that your applications have through their IAM role. It just further limits what portions of the service are available via the gateway.

It’s okay to just use the default Full Access policy. Any restrictions you have put on your task IAM roles or other IAM user policies still apply on top of this policy. For information about a minimal access policy, see the Minimum Amazon S3 Bucket Permissions for Amazon ECR.

Choose Create to add this gateway endpoint to your VPC. When you view the route tables in your VPC subnets, you see an S3 gateway that is used whenever ECR Docker image layers are being downloaded from S3.
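
The gateway endpoint can also be added from the CLI. The following is a sketch with placeholder IDs; the default Full Access policy is applied when no policy document is passed:

aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --vpc-id vpc-0123456789abcdef0 \
  --route-table-ids rtb-aaaa1111 rtb-bbbb2222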

Create an AWS PrivateLink interface endpoint for ECS

In addition to downloading Docker images from ECR, your EC2 instances must also communicate with the ECS control plane to receive orchestration instructions.

ECS requires three endpoints:

  • com.amazonaws.region.ecs-agent
  • com.amazonaws.region.ecs-telemetry
  • com.amazonaws.region.ecs

Create these three interface endpoints in the same way that you created the endpoint for ECR, by adding each endpoint and setting the subnets and security group for the endpoint.
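
From the CLI, this is the same create-vpc-endpoint call repeated for each ECS service name; a sketch reusing the placeholder IDs from the ECR example:

for service in ecs ecs-agent ecs-telemetry; do
  aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.$service \
    --vpc-id vpc-0123456789abcdef0 \
    --subnet-ids subnet-aaaa1111 subnet-bbbb2222 \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled
done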

After the endpoints are created and added to your VPC, there is one additional step. Make sure that your ECS agent is upgraded to version 1.25.1 or higher. For more information, see the instructions for upgrading the ECS agent.

If you are already running the right version of the ECS agent, restart any ECS agents that are currently running in the VPC. The ECS agent uses a persistent WebSocket connection to the ECS backend, and adding VPC endpoints does not interrupt existing connections. Unless you restart it, the agent continues to use its existing connection instead of establishing a new connection through the new endpoint.

To restart the agent with no disruption to your application containers, you can connect using SSH to each EC2 instance in the cluster and issue the following command:

sudo docker restart ecs-agent

This restarts the ECS agent without stopping any of the other application containers on the host. If your application is stateless and safe to stop at any time, or you don’t have or want SSH access to the underlying hosts, you can instead reboot each EC2 instance in the cluster one at a time. Rebooting restarts the agent on that host and causes any service-launched tasks from that host to be relaunched on other hosts.

Conclusion

In this post, I showed you how to add AWS PrivateLink endpoints to your VPC for ECS and ECR, including an S3 gateway for ECR layer downloads.

The instances in your ECS cluster can communicate directly with the ECS control plane. They should be able to download Docker images directly without needing to make any connections outside of your VPC using an internet gateway or NAT gateway. All container orchestration traffic stays inside the VPC.

If you have questions or suggestions, please comment below.

AWS Fargate Price Reduction – Up to 50%

Post Syndicated from Nathan Peck original https://aws.amazon.com/blogs/compute/aws-fargate-price-reduction-up-to-50/

AWS Fargate is a compute engine that uses containers as its fundamental compute primitive. AWS Fargate runs your application containers for you on demand. You no longer need to provision a pool of instances or manage a Docker daemon or orchestration agent. Because the infrastructure that runs your containers is invisible, you don’t have to worry about whether you have provisioned enough instances to run your containerized workload. You also don’t have to worry about whether you’re using those instances efficiently to avoid paying for resources that you don’t use. You no longer need to do undifferentiated heavy lifting to maintain the infrastructure that runs your containers. AWS Fargate automatically updates and patches underlying resources to keep you safe from vulnerabilities in the underlying operating system and software. AWS Fargate uses an on-demand pricing model that charges per vCPU and per GB of memory reserved per second, with a 1-minute minimum.

At re:Invent 2018 we announced Firecracker, an open source virtualization technology that is purpose-built for creating and managing secure, multi-tenant containers and functions-based services. Firecracker enables you to deploy workloads in lightweight virtual machines called microVMs. These microVMs can launch faster, with less overhead. Innovations such as these allow us to improve the efficiency of Fargate and help us pass on cost savings to customers.

Effective January 7th, 2019 Fargate pricing per vCPU per second is being reduced by 20%, and pricing per GB of memory per second is being reduced by 65%. Depending on the ratio of CPU to memory that you’re allocating for your containers, you could see an overall price reduction of anywhere from 35% to 50%.
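
As a rough worked example, consider the 1 vCPU / 2 GB configuration at the published us-east-1 launch rates of about $0.0506 per vCPU-hour and $0.0127 per GB-hour: the old cost is roughly $0.0506 + 2 × $0.0127 = $0.0760 per hour, while the reduced rates of about $0.04048 and $0.004445 give roughly $0.0494 per hour, an overall cut of about 35%, matching the corresponding row in the table below. Configurations with a higher memory-to-CPU ratio benefit more from the 65% memory cut, which is why the largest discounts approach 50%.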

The following table shows the price reduction for each built-in launch configuration.

vCPU    GB Memory    Effective Price Cut
0.25    0.5          -35.00%
0.25    1            -42.50%
0.25    2            -50.00%
0.5     1            -35.00%
0.5     2            -42.50%
0.5     3            -47.00%
0.5     4            -50.00%
1       2            -35.00%
1       3            -39.30%
1       4            -42.50%
1       5            -45.00%
1       6            -47.00%
1       7            -48.60%
1       8            -50.00%
2       4            -35.00%
2       5            -37.30%
2       6            -39.30%
2       7            -41.00%
2       8            -42.50%
2       9            -43.80%
2       10           -45.00%
2       11           -46.10%
2       12           -47.00%
2       13           -47.90%
2       14           -48.60%
2       15           -49.30%
2       16           -50.00%
4       8            -35.00%
4       9            -36.20%
4       10           -37.30%
4       11           -38.30%
4       12           -39.30%
4       13           -40.20%
4       14           -41.00%
4       15           -41.80%
4       16           -42.50%
4       17           -43.20%
4       18           -43.80%
4       19           -44.40%
4       20           -45.00%
4       21           -45.50%
4       22           -46.10%
4       23           -46.50%
4       24           -47.00%
4       25           -47.40%
4       26           -47.90%
4       27           -48.30%
4       28           -48.60%
4       29           -49.00%
4       30           -49.30%

Many engineering organizations such as Turner Broadcasting System, Veritone, and Catalytic have already been using AWS Fargate to achieve significant infrastructure cost savings for batch jobs, cron jobs, and other on-and-off workloads. Running a cluster of instances at all times to run your containers constantly incurs cost, but AWS Fargate stops charging when your containers stop.

With these new price reductions, AWS Fargate also enables significant savings for containerized web servers, API services, and background queue consumers run by organizations like KPMG, CBS, and Product Hunt. If your application is currently running on large EC2 instances that peak at 10-20% CPU utilization, consider migrating to containers in AWS Fargate. Containers give you more granularity to provision the exact amount of CPU and memory that your application needs. You no longer pay for instance resources that your application doesn’t use. If a sudden spike of traffic causes your application to require more resources you still have the ability to rapidly scale your application out by adding more containers, or scale your application up by launching larger containers.

AWS Fargate lets you focus on building your containerized application without worrying about the infrastructure. This encompasses not just the infrastructure capacity provisioning, monitoring, and maintenance but also the infrastructure price. Implementing Firecracker in AWS Fargate is just part of our journey to keep making AWS Fargate faster, more powerful, and more efficient. Running your containers in AWS Fargate allows you to benefit from these improvements without any manual intervention required on your part.

AWS Fargate has achieved SOC, PCI, HIPAA BAA, ISO, MTCS, C5, and ENS High compliance certification, and has a 99.99% SLA. You can get started with AWS Fargate in 13 AWS Regions around the world.

Getting started with the AWS Cloud Development Kit for Amazon ECS

Post Syndicated from Nathan Peck original https://aws.amazon.com/blogs/compute/getting-started-with-the-aws-cloud-development-kit-for-amazon-ecs/

The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to define cloud infrastructure in code and provision it through AWS CloudFormation. The AWS CDK integrates fully with AWS services and offers a higher-level object-oriented abstraction to define AWS resources imperatively.

Using the AWS CDK library of infrastructure constructs, you can easily encapsulate AWS best practices in your infrastructure definition and share it without worrying about boilerplate logic. The AWS CDK improves the end-to-end development experience because you get to use the power of modern programming languages to define your AWS infrastructure in a predictable and efficient manner. The AWS CDK is currently available for TypeScript, JavaScript, Java, and .NET.

The AWS CDK now includes constructs for ECS resources, allowing you to deploy a fully functioning containerized application environment on AWS with just a few lines of simple, readable code. Here’s how it works.

Install the AWS CDK

The first step is to install the AWS CDK on your development machine:

mkdir greeter-cdk
cd greeter-cdk
npm init -y
npm install @aws-cdk/cdk
npm install -g aws-cdk

Next, write some JavaScript code that imports the AWS CDK library and uses it to define a skeleton that you can place all your resources in:

index.js

const cdk = require('@aws-cdk/cdk');

class GreetingStack extends cdk.Stack {
  constructor(parent, id, props) {
    super(parent, id, props);
  }
}

class GreetingApp extends cdk.App {
  constructor(argv) {
    super(argv);
    new GreetingStack(this, 'greeting-stack');
  }
}

new GreetingApp().run();

Next, write a small configuration file telling the AWS CDK CLI that index.js is the code that defines your application stack:

cdk.json

{
  "app": "node index.js"
}

Now if you run cdk ls -l, you can see that the AWS CDK has found your stack, and has automatically interpolated some details about it from your development machine’s environment, such as your AWS account ID and default Region:

$ cdk ls -l
- name: greeting-stack
  environment:
    name: 209640446841/us-east-1
    account: '209640446841'
    region: us-east-1

Add ECS constructs

It’s time to add some ECS constructs to your stack. To do this, first install the ECS construct library. Also, install a couple of other constructs to help you set up resources linked to your containers:

npm install @aws-cdk/aws-ecs
npm install @aws-cdk/aws-ec2
npm install @aws-cdk/aws-elasticloadbalancingv2

Now it’s time to use the ECS constructs to set up your application environment:

const cdk = require('@aws-cdk/cdk');
const ecs = require('@aws-cdk/aws-ecs');
const ec2 = require('@aws-cdk/aws-ec2');
const elbv2 = require('@aws-cdk/aws-elasticloadbalancingv2');

class GreetingStack extends cdk.Stack {
  constructor(parent, id, props) {
    super(parent, id, props);

    const vpc = new ec2.VpcNetwork(this, 'GreetingVpc', { maxAZs: 2 });

    // Create an ECS cluster
    const cluster = new ecs.Cluster(this, 'Cluster', { vpc });

    // Add capacity to it
    cluster.addDefaultAutoScalingGroupCapacity({
      instanceType: new ec2.InstanceType('t3.xlarge'),
      instanceCount: 3
    });
  }
}

With just three calls, you can create a VPC to hold all your application resources, and an ECS cluster with three t3.xlarge instances. All it takes is one command to tell the AWS CDK to automatically deploy this stack on your account:

cdk deploy

Behind the scenes, the AWS CDK synthesizes your JavaScript calls into a CloudFormation template. It asks CloudFormation to deploy the resources described in the synthesized template. You can see a live log of each resource that is being created and what the status is. As you can see from the numbers on the left side of the message stream, those three simple commands added to the AWS CDK stack automatically expanded into 32 lower-level, primitive resources to be created on your AWS account.

After the AWS CDK deployment finishes, you have a fresh ECS cluster ready to run your services. Next you will deploy a simple microservices stack onto this cluster. Your application will be a simple greeting server. The frontend greeter service fetches a random greeting and name from two backend services. There are two tiers to this application: the frontend and backend. The network will look like the following diagram:

There are two load balancers: one allows anyone on the internet to talk to your greeter service, and the other is internal, allowing the greeter service to talk to the backend greeting and name services.

In total, you need to add five more high-level constructs to your AWS CDK application: two load balancers and three services.

Add a new ECS service

Adding a new ECS service to the application stack is easy. Define a task definition, add a container to it, and tell the AWS CDK to turn the task definition into a service:

// Name service
const nameTaskDefinition = new ecs.Ec2TaskDefinition(this, 'name-task-definition', {});

const nameContainer = nameTaskDefinition.addContainer('name', {
  image: ecs.ContainerImage.fromDockerHub('nathanpeck/name'),
  memoryLimitMiB: 128
});

nameContainer.addPortMappings({
  containerPort: 3000
});

const nameService = new ecs.Ec2Service(this, 'name-service', {
  cluster: cluster,
  desiredCount: 2,
  taskDefinition: nameTaskDefinition
});

// Greeting service
const greetingTaskDefinition = new ecs.Ec2TaskDefinition(this, 'greeting-task-definition', {});

const greetingContainer = greetingTaskDefinition.addContainer('greeting', {
  image: ecs.ContainerImage.fromDockerHub('nathanpeck/greeting'),
  memoryLimitMiB: 128
});

greetingContainer.addPortMappings({
  containerPort: 3000
});

const greetingService = new ecs.Ec2Service(this, 'greeting-service', {
  cluster: cluster,
  desiredCount: 2,
  taskDefinition: greetingTaskDefinition
});

Just like that, you’ve defined two different ECS services that run by loading a public image from Docker Hub. The next step is to create a load balancer and add the services to it:

// Internal load balancer for the backend services
const internalLB = new elbv2.ApplicationLoadBalancer(this, 'internal', {
  vpc: vpc,
  internetFacing: false
});

const internalListener = internalLB.addListener('PublicListener', { port: 80, open: true });

internalListener.addTargetGroups('default', {
  targetGroups: [new elbv2.ApplicationTargetGroup(this, 'default', {
    vpc: vpc,
    protocol: 'HTTP',
    port: 80
  })]
});

internalListener.addTargets('name', {
  port: 80,
  pathPattern: '/name*',
  priority: 1,
  targets: [nameService]
});

internalListener.addTargets('greeting', {
  port: 80,
  pathPattern: '/greeting*',
  priority: 2,
  targets: [greetingService]
});

For this configuration, the code defines a single load balancer with a single listener on port 80, but adds two different services behind it. If the path of the request looks like /name, it sends the request to your name service. If it looks like /greeting, it sends the request to the greeting service.

Finally, add the frontend greeter service, which constructs a random greeting phrase by fetching a random name from the name service and a random greeting from the greeting service. To do this, configure the greeter service to know how to make requests to the other two backend services:

// Greeter service
const greeterTaskDefinition = new ecs.Ec2TaskDefinition(this, 'greeter-task-definition', {});

const greeterContainer = greeterTaskDefinition.addContainer('greeter', {
  image: ecs.ContainerImage.fromDockerHub('nathanpeck/greeter'),
  memoryLimitMiB: 128,
  environment: {
    GREETING_URL: 'http://' + internalLB.dnsName + '/greeting',
    NAME_URL: 'http://' + internalLB.dnsName + '/name'
  }
});

greeterContainer.addPortMappings({
  containerPort: 3000
});

const greeterService = new ecs.Ec2Service(this, 'greeter-service', {
  cluster: cluster,
  desiredCount: 2,
  taskDefinition: greeterTaskDefinition
});

The AWS CDK has a powerful capability to resolve expressions that you enter in your JavaScript and turn them into a CloudFormation template that resolves the correct values.

In this example, you create a reference to the DNS name of the load balancer, and indicate that you want to assign the following:

  • Environment variable = 'NAME_URL'
  • Value = 'http://' + internalLB.dnsName + '/name'

If you run cdk synth, you can see that the AWS CDK generates a CloudFormation template that dynamically inserts the proper DNS name of the load balancer at deployment time:

Type: 'AWS::ECS::TaskDefinition'
    Properties:
      ContainerDefinitions:
        - Environment:
            - Name: GREETING_URL
              Value:
                'Fn::Join':
                  - ''
                  - - 'http://'
                    - 'Fn::GetAtt':
                        - internal505AC855
                        - DNSName
                    - /greeting
            - Name: NAME_URL
              Value:
                'Fn::Join':
                  - ''
                  - - 'http://'
                    - 'Fn::GetAtt':
                        - internal505AC855
                        - DNSName
                    - /name
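
The snippets above show only the internal load balancer. The external, internet-facing load balancer in front of the greeter service follows the same pattern; here is a minimal sketch (the construct names are illustrative) that assumes the vpc and greeterService variables defined earlier:

// External, internet-facing load balancer for the greeter service
const externalLB = new elbv2.ApplicationLoadBalancer(this, 'external', {
  vpc: vpc,
  internetFacing: true
});

const externalListener = externalLB.addListener('PublicListener', { port: 80, open: true });

externalListener.addTargets('greeter', {
  port: 80,
  targets: [greeterService]
});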

One final thing to add to your AWS CDK stack is an output. This gives you the DNS name of your service so you can send traffic to it:

new cdk.Output(this, 'ExternalDNS', { value: externalLB.dnsName });

Now see what is added when you deploy. Type the following command to see a preview of new or modified resources without actually doing the deployment:

cdk diff

The list of new resources being added looks good, so run cdk deploy again. Again, the AWS CDK synthesizes the CloudFormation template and initializes its deployment on your AWS account. This time, however, it creates a total of 66 resources, and gives you a URL output where the application is hosted:

To verify that your application is up and accepting traffic, load the ExternalDNS URL from that output in your browser. The web application talks to the two backend services to get a greeting and a name:

Conclusion

If you’d like to try deploying this microservice stack yourself or using it as the basis for building your own AWS CDK stack, you can find the full AWS CDK example code on GitHub. Be sure to check out the AWS CDK documentation and the official AWS CDK construct for Amazon ECS on NPM.

Migrating your Amazon ECS deployment to the new ARN and resource ID format

Post Syndicated from Nathan Peck original https://aws.amazon.com/blogs/compute/migrating-your-amazon-ecs-deployment-to-the-new-arn-and-resource-id-format-2/

Starting today you can opt in to a new Amazon Resource Name (ARN) and resource ID format for Amazon ECS tasks, container instances, and services. The new format makes it possible to tag these resources individually in your cluster, and to track the cost of the services and tasks running in your cluster.

In most cases, you don’t need to change your system beyond opting in to the new format. However, if your deployment mechanism uses regular expressions to parse the old format ARNs or task IDs, this may be a breaking change. It may also be a breaking change if you were storing the old format ARNs and IDs in a fixed-width database field or data structure.

After you have opted in, any new ECS services or tasks have the new ARN and ID format. Existing resources do not receive the new format. If you decide to opt out, any new resources that you later create then use the old format.

You can read more about the exact details of the change, including a format comparison, in the FAQ. I recommend that you test the new format in isolation before opting in globally.

Approaches to opting in

Depending on whether your software interacts with instance, service, or task ARNs, you may want to be careful about how you opt in to avoid breaking your existing implementation. There are several ways to do this.

Separate testing and production accounts

The easiest way to opt in is to use separate AWS accounts for your testing and QA environment and your production environment. In this case, you could choose to opt in your entire test account by opting in for the AWS account root user. All IAM users and roles within the account are then opted-in. You can easily reverse the opt-in decision by opting out the root user. After you have verified that all your deployments work as expected, you can opt in for your main production account.

Single region

If you use a single account to host all your resources and want to test the new ARNs in a gradual rollout, you need to be more careful about how you opt in. You may consider testing by opting in one AWS Region that you don’t normally use. Spin up your resources there to verify that they work before opting in your primary Region.

Single IAM user or role

Alternatively, you can decide to opt in for a specific test IAM user or role and conduct testing before opting in for the entire account. In this post, I explore how to opt in for a specific IAM user and role and to migrate your resources.

Prerequisites to opting in

There are a few prerequisites to opting into the new ARN and resource ID format.

Root user access

First, make sure that you can access the AWS account root user, so that you can opt in for all IAM users and roles on the account by default. Additionally, the root user can opt in for any single IAM user or role on the account, per Region. Individual IAM users and roles can still opt in or out separately.

Instance role access

In particular, getting the new format on your cluster instances involves opting in for the instance role that the instance assumes when it boots up. This role allows the instance to communicate with the ECS control plane and register itself to receive an ARN. For the instance to get a new-format ARN, the role that the instance uses must be opted in.

Access for other IAM roles

Depending on how your organization deploys clusters, you may need to opt in to the new format on other IAM users or roles that touch your ECS cluster. For example, you may have an IAM user that you use to access the console and create an ECS cluster.

You also may use AWS CodeDeploy, which has an IAM role that allows it to make deployments and update that service with the latest version of your application’s Docker container. Both roles should be opted in. Otherwise, CodeDeploy causes your service to deploy tasks with the old format.

Opt in for your IAM user

For the best results, create a new IAM user that you can use for testing the new ARN format. That way, you can continue to interact with your existing ECS deployments on your original IAM user without accidentally switching your live services to the new ARN format.

After your testing IAM user is created, access the opt-in page and turn on the new ARN format for this user.
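
The same opt-in can also be performed from the AWS CLI while authenticated as the test user, using the put-account-setting API; a sketch covering all three resource types:

aws ecs put-account-setting --name serviceLongArnFormat --value enabled
aws ecs put-account-setting --name taskLongArnFormat --value enabled
aws ecs put-account-setting --name containerInstanceLongArnFormat --value enabled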

At this point, any services or tasks that you now create in any cluster with this IAM user will have the new ARN format. However, you also need to update the ARN format on the instances in the cluster.

Create and opt-in for an instance role

To get the new instance ARN format, create an instance role. This role is used for each instance in the ECS cluster. After you opt in for the role, any instance that registers itself with the ECS control plane using that role gets the new ARN format.

Open the IAM console and choose Roles, Create role. Specify the type of role you are creating. For this example, create a role for an AWS service, and choose EC2 as the service that will use the role.

The next step is to add permissions for this role. In the search box, type AmazonEC2ContainerServiceforEC2Role. Select the check box to attach this policy to the role. This policy document is managed by AWS and has all the permissions required for an EC2 instance to act as a host in an ECS cluster.

Give the role a name and create it. Your finished role should look something like the following:

Finally, opt in for this role so that when an EC2 instance uses the role to register itself in the ECS cluster, it receives the new ARN format. To do this, log in as the root user, and navigate to the opt-in page. This time, opt in for the instance role.
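
If you are scripting this step, the root user (or an administrator) can opt in a specific role from the CLI by passing its ARN; the account ID and role name below are placeholders:

aws ecs put-account-setting \
  --name containerInstanceLongArnFormat \
  --value enabled \
  --principal-arn arn:aws:iam::123456789012:role/ecsInstanceRole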

Launch a new cluster using an opted-in IAM user and instance role

Existing resources do not receive the new ARN format until they are re-created. To get the new ARNs, use the opted-in IAM user and opted-in instance role to create an opted-in cluster.

Open the cluster creation wizard, and create a new EC2 Linux + Networking cluster.

On the next screen, name your cluster and select the instance role that you previously opted in:

After the cluster launches, navigate to the ECS Instances list and verify that the instances have the new ID format.
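
You can also check the format from the CLI; opted-in container instances include the cluster name in the ARN:

aws ecs list-container-instances --cluster my-opted-in-cluster
# New format: arn:aws:ecs:us-east-1:123456789012:container-instance/my-opted-in-cluster/<long-resource-id>
# Old format: arn:aws:ecs:us-east-1:123456789012:container-instance/<short-resource-id>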

Launch a new opted-in service

The last part is to launch a service in the cluster. Be sure to use the IAM user that you opted in earlier. Task definitions are not changed by this opt in, so you can create a clone of an existing service using the same task definitions that you are already using.

As a best practice, do a blue/green infrastructure rollout by creating a full parallel stack: a new cluster of EC2 instances, services, tasks, and any load balancers. Then, direct traffic from your previous cluster to the new cluster by switching the DNS CNAME of your website to point at the new stack’s load balancer.

This approach allows you to test the new stack without interrupting your existing stack. After you have verified that it works, you can switch traffic over to the new stack without any downtime.

Conclusion

You have now tested opting in to the new ARN and resource ID format for ECS container instances, services, and tasks. If everything worked properly, you can opt in across your full account today by using the root user to opt in.

Starting January 1, 2020, all newly created ECS resources receive the new ARN and resource ID format. I recommend testing the new ARNs now, even if you don’t intend to manually opt in your existing resources at this time.

Task Networking in AWS Fargate

Post Syndicated from Nathan Peck original https://aws.amazon.com/blogs/compute/task-networking-in-aws-fargate/

AWS Fargate is a technology that allows you to focus on running your application without needing to provision, monitor, or manage the underlying compute infrastructure. You package your application into a Docker container that you can then launch using your container orchestration tool of choice.

Fargate allows you to use containers without being responsible for Amazon EC2 instances, similar to how EC2 allows you to run VMs without managing physical infrastructure. Currently, Fargate provides support for Amazon Elastic Container Service (Amazon ECS). Support for Amazon Elastic Container Service for Kubernetes (Amazon EKS) will be made available in the near future.

Despite offloading the responsibility for the underlying instances, Fargate still gives you deep control over configuration of network placement and policies. This includes the ability to use many networking fundamentals such as Amazon VPC and security groups.

This post covers how to take advantage of the different ways of networking your containers in Fargate when using ECS as your orchestration platform, with a focus on how to do networking securely.

The first step to running any application in Fargate is defining an ECS task for Fargate to launch. A task is a logical group of one or more Docker containers that are deployed with specified settings. When running a task in Fargate, there are two different forms of networking to consider:

  • Container (local) networking
  • External networking

Container Networking

Container networking is often used for tightly coupled application components. Perhaps your application has a web tier that is responsible for serving static content as well as generating some dynamic HTML pages. To generate these dynamic pages, it has to fetch information from another application component that has an HTTP API.

One potential architecture for such an application is to deploy the web tier and the API tier together as a pair and use local networking so the web tier can fetch information from the API tier.

If you are running these two components as two processes on a single EC2 instance, the web tier application process could communicate with the API process on the same machine by using the local loopback interface. The local loopback interface has a special IP address of 127.0.0.1 and hostname of localhost.

By making a networking request to this local interface, it bypasses the network interface hardware and instead the operating system just routes network calls from one process to the other directly. This gives the web tier a fast and efficient way to fetch information from the API tier with almost no networking latency.

In Fargate, when you launch multiple containers as part of a single task, they can also communicate with each other over the local loopback interface. Fargate uses a special container networking mode called awsvpc, which gives all the containers in a task a shared elastic network interface to use for communication.

If you specify a port mapping for each container in the task, then the containers can communicate with each other on that port. For example the following task definition could be used to deploy the web tier and the API tier:

{
  "family": "myapp"
  "containerDefinitions": [
    {
      "name": "web",
      "image": "my web image url",
      "portMappings": [
        {
          "containerPort": 80
        }
      ],
      "memory": 500,
      "cpu": 10,
      "esssential": true
    },
    {
      "name": "api",
      "image": "my api image url",
      "portMappings": [
        {
          "containerPort": 8080
        }
      ],
      "cpu": 10,
      "memory": 500,
      "essential": true
    }
  ]
}

ECS, with Fargate, is able to take this definition and launch two containers, each of which is bound to a specific static port on the elastic network interface for the task.

Because each Fargate task has its own isolated networking stack, there is no need for dynamic ports to avoid port conflicts between different tasks as in other networking modes. The static ports make it easy for containers to communicate with each other. For example, the web container makes a request to the API container using its well-known static port:

curl 127.0.0.1:8080/my-endpoint

This sends a local network request, which goes directly from one container to the other over the local loopback interface without traversing the network. This deployment strategy allows for fast and efficient communication between two tightly coupled containers. But most application architectures require more than just internal local networking.

External Networking

External networking is used for network communications that go outside the task to other servers that are not part of the task, or network communications that originate from other hosts on the internet and are directed to the task.

Configuring external networking for a task is done by modifying the settings of the VPC in which you launch your tasks. A VPC is a fundamental tool in AWS for controlling the networking capabilities of resources that you launch on your account.

When setting up a VPC, you create one or more subnets, which are logical groups into which your resources can be placed. Each subnet resides in a single Availability Zone and has its own route table, which defines rules about how network traffic operates for that subnet. There are two main types of subnets: public and private.

Public subnets

A public subnet is a subnet that has an associated internet gateway. Fargate tasks in that subnet are assigned both private and public IP addresses:


A browser or other client on the internet can send network traffic to the task via the internet gateway using its public IP address. The tasks can also send network traffic to other servers on the internet because the route table can route traffic out via the internet gateway.

If tasks want to communicate directly with each other, they can use each other’s private IP address to send traffic directly from one to the other so that it stays inside the subnet without going out to the internet gateway and back in.

Private subnets

A private subnet does not have direct internet access. The Fargate tasks inside the subnet don’t have public IP addresses, only private IP addresses. Instead of an internet gateway, a network address translation (NAT) gateway is attached to the subnet:

 

There is no way for another server or client on the internet to reach your tasks directly, because they don’t have public addresses and there is no direct route to reach them. This is a great way to add another layer of protection for internal tasks that handle sensitive data. Those tasks are protected and can’t receive any inbound traffic at all.

In this configuration, the tasks can still communicate to other servers on the internet via the NAT gateway. They would appear to have the IP address of the NAT gateway to the recipient of the communication. If you run a Fargate task in a private subnet, you must add this NAT gateway. Otherwise, Fargate can’t make a network request to Amazon ECR to download the container image, or communicate with Amazon CloudWatch to store container metrics.
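
Whether a Fargate task receives a public IP address is controlled per launch through the assignPublicIp field of the task’s network configuration. The following CLI sketch uses placeholder subnet, security group, cluster, and task definition names:

# Task in a public subnet, reachable from the internet via its public IP
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition myapp \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-public1],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}'

# Task in a private subnet, private IP only (outbound traffic goes through the NAT gateway)
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition myapp \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-private1],securityGroups=[sg-0123456789abcdef0],assignPublicIp=DISABLED}'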

Load balancers

If you are running a container that is hosting internet content in a private subnet, you need a way for traffic from the public to reach the container. This is generally accomplished by using a load balancer such as an Application Load Balancer or a Network Load Balancer.

ECS integrates tightly with AWS load balancers by automatically configuring a service-linked load balancer to send network traffic to containers that are part of the service. When each task starts, the IP address of its elastic network interface is added to the load balancer’s configuration. When the task is being shut down, network traffic is safely drained from the task before removal from the load balancer.

To get internet traffic to containers using a load balancer, the load balancer is placed into a public subnet. ECS configures the load balancer to forward traffic to the container tasks in the private subnet:

This configuration allows your tasks in Fargate to be safely isolated from the rest of the internet. They can still initiate network communication with external resources via the NAT gateway, and still receive traffic from the public via the Application Load Balancer that is in the public subnet.

Another potential use case for a load balancer is for internal communication from one service to another within the private subnet. This is typically used for a microservice deployment, in which one service, such as an internet-facing user account service, needs to communicate with an internal service, such as a password service. Obviously, it is undesirable for the password service to be directly accessible on the internet, so using an internet-facing load balancer would be a major security vulnerability. Instead, this can be accomplished by hosting an internal load balancer within the private subnet:

With this approach, one container can distribute requests across an Auto Scaling group of other private containers via the internal load balancer, ensuring that the network traffic stays safely protected within the private subnet.

Best Practices for Fargate Networking

Determine whether you should use local task networking

Local task networking is ideal for communicating between containers that are tightly coupled and require maximum networking performance between them. However, when you deploy one or more containers as part of the same task, they are always deployed together, so you lose the ability to independently scale different types of workload up and down.

In the example of the application with a web tier and an API tier, it may be the case that powering the application requires only two web tier containers but 10 API tier containers. If local container networking is used between these two container types, then an extra eight unnecessary web tier containers would end up being run instead of allowing the two different services to scale independently.

A better approach would be to deploy the two containers as two different services, each with its own load balancer. This allows clients to communicate with the two web containers via the web service’s load balancer. The web service could distribute requests across the eight backend API containers via the API service’s load balancer.

Run internet tasks that require internet access in a public subnet

If you have tasks that require internet access and a lot of bandwidth for communication with other services, it is best to run them in a public subnet. Give them public IP addresses so that each task can communicate with other services directly.

If you run these tasks in a private subnet, then all their outbound traffic has to go through a NAT gateway. AWS NAT gateways support up to 10 Gbps of burst bandwidth. If your bandwidth requirements go over this, then all task networking starts to get throttled. To avoid this, you could distribute the tasks across multiple private subnets, each with its own NAT gateway. It can be easier to just place the tasks into a public subnet, if possible.

Avoid using a public subnet or public IP addresses for private, internal tasks

If you are running a service that handles private, internal information, you should not put it into a public subnet or use a public IP address. For example, imagine that you have one task, which is an API gateway for authentication and access control. You have another background worker task that handles sensitive information.

The intended access pattern is that requests from the public go to the API gateway, which then proxies request to the background task only if the request is from an authenticated user. If the background task is in a public subnet and has a public IP address, then it could be possible for an attacker to bypass the API gateway entirely. They could communicate directly to the background task using its public IP address, without being authenticated.

Conclusion

Fargate gives you a way to run containerized tasks directly without managing any EC2 instances, but you still have full control over how you want networking to work. You can set up containers to talk to each other over the local network interface for maximum speed and efficiency. For running workloads that require privacy and security, use a private subnet with public internet access locked down. Or, for simplicity with an internet workload, you can just use a public subnet and give your containers a public IP address.

To deploy one of these Fargate task networking approaches, check out some sample CloudFormation templates showing how to configure the VPC, subnets, and load balancers.

If you have questions or suggestions, please comment below.

Deploying an NGINX Reverse Proxy Sidecar Container on Amazon ECS

Post Syndicated from Nathan Peck original https://aws.amazon.com/blogs/compute/nginx-reverse-proxy-sidecar-container-on-amazon-ecs/

Reverse proxies are a powerful software architecture primitive for fetching resources from a server on behalf of a client. They serve a number of purposes, from protecting servers from unwanted traffic to offloading some of the heavy lifting of HTTP traffic processing.

This post explains the benefits of a reverse proxy, and explains how to use NGINX and Amazon EC2 Container Service (Amazon ECS) to easily implement and deploy a reverse proxy for your containerized application.

Components

NGINX is a high performance HTTP server that has achieved significant adoption because of its asynchronous event driven architecture. It can serve thousands of concurrent requests with a low memory footprint. This efficiency also makes it ideal as a reverse proxy.

Amazon ECS is a highly scalable, high performance container management service that supports Docker containers. It allows you to run applications easily on a managed cluster of Amazon EC2 instances. Amazon ECS helps you get your application components running on instances according to a specified configuration. It also helps scale out these components across an entire fleet of instances.

Sidecar containers are a common software pattern that has been embraced by engineering organizations. It’s a way to keep server side architecture easier to understand by building with smaller, modular containers that each serve a simple purpose. Just like an application can be powered by multiple microservices, each microservice can also be powered by multiple containers that work together. A sidecar container is simply a way to move part of the core responsibility of a service out into a containerized module that is deployed alongside a core application container.

The following diagram shows how an NGINX reverse proxy sidecar container operates alongside an application server container:

In this architecture, Amazon ECS has deployed two copies of an application stack that is made up of an NGINX reverse proxy sidecar container and an application container. Web traffic from the public goes to an Application Load Balancer, which then distributes the traffic to one of the NGINX reverse proxy sidecars. The NGINX reverse proxy then forwards the request to the application server and returns its response to the client via the load balancer.

Reverse proxy for security

Security is one reason for using a reverse proxy in front of an application container. Any web server that serves resources to the public can expect to receive lots of unwanted traffic every day. Some of this traffic is relatively benign scans by researchers and tools, such as Shodan or nmap:

[18/May/2017:15:10:10 +0000] "GET /YesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScanningForResearchPurposePleaseHaveALookAtTheUserAgentTHXYesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScanningForResearchPurposePleaseHaveALookAtTheUserAgentTHXYesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScanningForResearchPurposePleaseHaveALookAtTheUserAgentTHXYesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScanningForResearchPurposePleaseHaveALookAtTheUserAgentTHXYesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScanningForResearchPurposePleaseHaveALookAtTheUserAgentTHXYesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScanningForResearchPurposePleaseHaveALookAtTheUserAgentTHXYesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScanningForResearchPurposePleaseHaveALookAtTheUserAgentTHXYesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScanningForResearchPurposePleaseHaveALookAtTheUserAgentTHXYesThisIsAReallyLongRequestURLbutWeAreDoingItOnPurposeWeAreScann HTTP/1.1" 404 1389 - Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36
[18/May/2017:18:19:51 +0000] "GET /clientaccesspolicy.xml HTTP/1.1" 404 322 - Cloud mapping experiment. Contact [email protected]

But other traffic is much more malicious. For example, here is what a web server sees while being scanned by the hacking tool ZmEu, which scans web servers trying to find PHPMyAdmin installations to exploit:

[18/May/2017:16:27:39 +0000] "GET /mysqladmin/scripts/setup.php HTTP/1.1" 404 391 - ZmEu
[18/May/2017:16:27:39 +0000] "GET /web/phpMyAdmin/scripts/setup.php HTTP/1.1" 404 394 - ZmEu
[18/May/2017:16:27:39 +0000] "GET /xampp/phpmyadmin/scripts/setup.php HTTP/1.1" 404 396 - ZmEu
[18/May/2017:16:27:40 +0000] "GET /apache-default/phpmyadmin/scripts/setup.php HTTP/1.1" 404 405 - ZmEu
[18/May/2017:16:27:40 +0000] "GET /phpMyAdmin-2.10.0.0/scripts/setup.php HTTP/1.1" 404 397 - ZmEu
[18/May/2017:16:27:40 +0000] "GET /mysql/scripts/setup.php HTTP/1.1" 404 386 - ZmEu
[18/May/2017:16:27:41 +0000] "GET /admin/scripts/setup.php HTTP/1.1" 404 386 - ZmEu
[18/May/2017:16:27:41 +0000] "GET /forum/phpmyadmin/scripts/setup.php HTTP/1.1" 404 396 - ZmEu
[18/May/2017:16:27:41 +0000] "GET /typo3/phpmyadmin/scripts/setup.php HTTP/1.1" 404 396 - ZmEu
[18/May/2017:16:27:42 +0000] "GET /phpMyAdmin-2.10.0.1/scripts/setup.php HTTP/1.1" 404 399 - ZmEu
[18/May/2017:16:27:44 +0000] "GET /administrator/components/com_joommyadmin/phpmyadmin/scripts/setup.php HTTP/1.1" 404 418 - ZmEu
[18/May/2017:18:34:45 +0000] "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 390 - ZmEu
[18/May/2017:16:27:45 +0000] "GET /w00tw00t.at.blackhats.romanian.anti-sec:) HTTP/1.1" 404 401 - ZmEu

In addition, servers can also end up receiving unwanted web traffic that is intended for another server. In a cloud environment, an application may end up reusing an IP address that was formerly connected to another service. It’s common for misconfigured or misbehaving DNS servers to send traffic intended for a different host to an IP address now connected to your server.

It’s the responsibility of anyone running a web server to handle and reject potentially malicious traffic or unwanted traffic. Ideally, the web server can reject this traffic as early as possible, before it actually reaches the core application code. A reverse proxy is one way to provide this layer of protection for an application server. It can be configured to reject these requests before they reach the application server.

Reverse proxy for performance

Another advantage of using a reverse proxy such as NGINX is that it can be configured to offload some heavy lifting from your application container. For example, every HTTP server should support gzip. Whenever a client requests gzip encoding, the server compresses the response before sending it back to the client. This compression saves network bandwidth, which also improves speed for clients who now don’t have to wait as long for a response to fully download.

NGINX can be configured to accept a plaintext response from your application container and gzip encode it before sending it down to the client. This allows your application container to focus 100% of its CPU allotment on running business logic, while NGINX handles the encoding with its efficient gzip implementation.

An application may have security concerns that require SSL termination at the instance level instead of at the load balancer. NGINX can also be configured to terminate SSL before proxying the request to a local application container. Again, this also removes some CPU load from the application container, allowing it to focus on running business logic. It also gives you a cleaner way to patch any SSL vulnerabilities or update SSL certificates by updating the NGINX container without needing to change the application container.

NGINX configuration

The following configuration sets up NGINX for both traffic filtering and gzip encoding:

http {
  # NGINX will handle gzip compression of responses from the app server
  gzip on;
  gzip_proxied any;
  gzip_types text/plain application/json;
  gzip_min_length 1000;
 
  server {
    listen 80;
 
    # NGINX will reject anything not matching /api
    location /api {
      # Reject requests with unsupported HTTP method
      if ($request_method !~ ^(GET|POST|HEAD|OPTIONS|PUT|DELETE)$) {
        return 405;
      }
 
      # Only requests matching the whitelist expectations will
      # get sent to the application server
      proxy_pass http://app:3000;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection 'upgrade';
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_cache_bypass $http_upgrade;
    }
  }
}

The above configuration only accepts traffic that matches the expression /api and has a recognized HTTP method. If the traffic matches, it is forwarded to a local application container accessible at the local hostname app. If the client requested gzip encoding, the plaintext response from that application container is gzip-encoded.

Amazon ECS configuration

Configuring ECS to run this NGINX container as a sidecar is also simple. ECS uses a core primitive called the task definition. Each task definition can include one or more containers, which can be linked to each other:

{
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "<NGINX reverse proxy image URL here>",
      "memory": 256,
      "cpu": 256,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "links": [
        "app"
      ]
    },
    {
      "name": "app",
      "image": "<app image URL here>",
      "memory": 256,
      "cpu": 256,
      "essential": true
    }
  ],
  "networkMode": "bridge",
  "family": "application-stack"
}

This task definition causes ECS to start both an NGINX container and an application container on the same instance. Then, the NGINX container is linked to the application container. This allows the NGINX container to send traffic to the application container using the hostname app.

The NGINX container has a port mapping that exposes port 80 on a publicly accessible port, but the application container does not. This means that the application container is not directly addressable. The only way to send it traffic is to send traffic to the NGINX container, which filters that traffic down. It only forwards traffic to the application container if it passes the whitelisted rules.
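
To try this out, you might save the task definition to a file, register it, and run it as a service with the CLI; the cluster and file names here are placeholders:

aws ecs register-task-definition --cli-input-json file://application-stack.json
aws ecs create-service \
  --cluster my-cluster \
  --service-name application-stack \
  --task-definition application-stack \
  --desired-count 2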

Conclusion

Running a sidecar container such as NGINX can bring significant benefits by making it easier to provide protection for application containers. Sidecar containers also improve performance by freeing your application container from various CPU intensive tasks. Amazon ECS makes it easy to run sidecar containers, and automate their deployment across your cluster.

To see the full code for this NGINX sidecar reference, or to try it out yourself, you can check out the open source NGINX reverse proxy reference architecture on GitHub.

– Nathan
 @nathankpeck