Tag Archives: Amazon Elastic Container Service for Kubernetes

Improving and securing your game-binaries distribution at scale

Post Syndicated from Ignacio Riesgo original https://aws.amazon.com/blogs/compute/improving-and-securing-your-game-binaries-distribution-at-scale/

This post is contributed by Yahav Biran | Sr. Solutions Architect, AWS and Scott Selinger | Associate Solutions Architect, AWS 

Continuous integration and continuous deployment (CI/CD) processes enable game publishers to improve games throughout their lifecycle. One of the challenges that game publishers face when employing CI/CD is distributing updated game binaries in a scalable, secure, and cost-effective way.

Often, CI/CD jobs contain only minor changes, yet they cause the CI/CD process to push a full set of game binaries over the internet. This is a suboptimal approach. It inflates the cost of development network resources and customer network resources (inbound and outbound bandwidth), and it lengthens the time it takes for a game update to propagate.

This post proposes a method of optimizing the game integration and deployments. Specifically, this method improves the distribution of updated game binaries to various targets, such as game-server farms. The proposed mechanism also adds to the security model designed to include progressive layers, starting from the Amazon EC2 instance that runs the game server. It also improves security of the game binaries, the game assets, and the monitoring of the game server deployments across several AWS Regions.

Why CI/CD in gaming is hard today

A game server is usually a native application that includes components such as graphics, sound, networking, and physics assets, as well as scripts and media files. Game servers are usually developed with game engines like Unreal, Amazon Lumberyard, and Unity. Game binaries typically take up tens of gigabytes. However, because game developer teams modify only a few tens of kilobytes every day, frequent distribution of a full set of binaries is wasteful.

For a standard global game deployment, distributing game binaries requires compressing the entire binaries set and transferring the compressed version to destinations, then decompressing it upon arrival. You can optimize the process by decoupling the various layers, pushing and deploying them individually.

In both cases, the continuous deployment process might be slow due to the compression and transfer durations. Also, distributing the image binaries incurs unnecessary data transfer costs, since data is duplicated. Other game-binary distribution methods may require the game publisher’s DevOps teams to install and maintain custom caching mechanisms.

This post demonstrates an optimal method for distributing game server updates. The solution uses containerized images stored in Amazon ECR and deployed using Amazon ECS or Amazon EKS to shorten the distribution duration and reduce network usage.

How can containers help?

Dockerized game binaries enable standard layer caching with no custom implementation required from the game publisher. Dockerized game binaries allow game publishers to stage their continuous build process in two ways:

  • To rebuild only the layer that was updated in a particular build process, while reusing the other cached layers.
  • To reassemble both packages into a deployable game server.

The use of ECR with either ECS or EKS takes care of the last mile deployment to the Docker container host.

Larger application binaries mean longer application loading times. To reduce the overall application initialization time, I decouple the deployment of the binaries and media files to allow the application to update faster. For example, updates to the application media files do not require the replication of the engine binaries or engine media files. This is achievable if the application binaries can be deployed in a separate directory structure. For example:

/opt/local/engine
/opt/local/engine-media
/opt/local/app
/opt/local/app-media

Containerized game servers deployment on EKS

The application server can be deployed as a single Kubernetes pod with multiple containers. The engine media (/opt/local/engine-media), the application (/opt/local/app), and the application media (/opt/local/app-media) spawn as Kubernetes initContainers and the engine binary (/opt/local/engine) runs as the main container.

apiVersion: v1
kind: Pod
metadata:
  name: my-game-app-pod
  labels:
    app: my-game-app
spec:
  volumes:
    - name: engine-media-volume
      emptyDir: {}
    - name: app-volume
      emptyDir: {}
    - name: app-media-volume
      emptyDir: {}
  initContainers:
    - name: engine-media
      image: the-engine-media-image
      imagePullPolicy: Always
      command:
        - "sh"
        - "-c"
        - "cp /* /opt/local/engine-media"
      volumeMounts:
        - name: engine-media-volume
          mountPath: /opt/local/engine-media
    - name: app
      image: the-app-image
      imagePullPolicy: Always
      command:
        - "sh"
        - "-c"
        - "cp /* /opt/local/app"
      volumeMounts:
        - name: app-volume
          mountPath: /opt/local/app
    - name: app-media
      image: the-app-media-image
      imagePullPolicy: Always
      command:
        - "sh"
        - "-c"
        - "cp /* /opt/local/app-media"
      volumeMounts:
        - name: app-media-volume
          mountPath: /opt/local/app-media
  containers:
    - name: the-engine
      image: the-engine-image
      imagePullPolicy: Always
      command: ['sh', '-c', '/opt/local/engine/start.sh']
      volumeMounts:
        - name: engine-media-volume
          mountPath: /opt/local/engine-media
        - name: app-volume
          mountPath: /opt/local/app
        - name: app-media-volume
          mountPath: /opt/local/app-media
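
To try the pod spec, you might save the manifest and apply it with kubectl, then watch the init containers copy each layer into place (the manifest file name here is hypothetical):

# Apply the pod manifest and watch the init containers run in sequence
kubectl apply -f game-app-pod.yaml
kubectl get pod my-game-app-pod --watch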

Applying multi-stage game binaries builds

In this post, I use Docker multi-stage builds for containerizing the game asset builds. I use AWS CodeBuild to manage the build and to deploy the updates of game engines like Amazon Lumberyard as ready-to-play dedicated game servers.

Using this method, frequent changes in the game binaries require less than 1% of the data transfer typically required by full image replication to the nodes that run the game-server instances. This results in significant improvements in build and integration time.

I provide a deployment example for Amazon Lumberyard Multiplayer Sample that is deployed to an EKS cluster, but this can also be done using different container orchestration technology and different game engines. I also show that the image being deployed as a game-server instance is always the latest image, which allows centralized control of the code to be scheduled upon distribution.

This example shows an update of only 50 MB of game assets, whereas the full game-server binary is 3.1 GB. With only 1.5% of the content being updated, that speeds up the build process by 90% compared to non-containerized game binaries.

For security with EKS, apply the imagePullPolicy: Always option, a Kubernetes best practice for deploying container images. This option ensures that the latest image is pulled every time the pod is started, so that images are always deployed from a single source, in this case ECR.

Example setup

  • Read through the following sample, a multiplayer game sample, and see how to build and structure multiplayer games to employ the various features of the GridMate networking library.
  • Create an AWS CodeCommit or GitHub repository (multiplayersample-lmbr) that includes the game engine binaries, the game assets (.pak, .cfg and more), AWS CodeBuild specs, and EKS deployment specs.
  • Create a CodeBuild project that points to the CodeCommit repo. The build image uses aws/codebuild/docker:18.09.0, the built-in image maintained by CodeBuild, configured with 3 GB of memory and two vCPUs. The compute allocated for the build can be modified to trade off cost against build time. A CLI sketch for this step follows the list.
  • Create an EKS cluster designated as a staging or an integration environment for the game title. In this case, it’s multiplayersample.
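
For reference, here is a rough sketch of creating the CodeBuild project from the AWS CLI. The project name, repository URL, and service role are placeholders; privileged mode is enabled because the build runs docker, and BUILD_GENERAL1_SMALL corresponds to 3 GB of memory and two vCPUs:

# Create the CodeBuild project pointing at the CodeCommit repo (placeholder values)
aws codebuild create-project \
  --name multiplayersample-build \
  --source type=CODECOMMIT,location=https://git-codecommit.us-west-2.amazonaws.com/v1/repos/multiplayersample-lmbr \
  --artifacts type=NO_ARTIFACTS \
  --environment type=LINUX_CONTAINER,image=aws/codebuild/docker:18.09.0,computeType=BUILD_GENERAL1_SMALL,privilegedMode=true \
  --service-role arn:aws:iam::<account-id>:role/<codebuild-service-role>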

The binaries build Git repository

The Git repository is composed of five core components ordered by their size:

  • The game engine binaries (for example, BinLinux64.Dedicated.tar.gz). This is the compressed version of the game engine artifacts that are not updated regularly, hence they are deployed as a compressed file. The maintenance of this file is usually done by a different team than the developers working on the game title.
  • The game binaries (for example, MultiplayerSample_pc_Paks_Dedicated). This directory is maintained by the game development team and managed as a standard multi-branch repository. The artifacts under this directory get updated on a daily or weekly basis, depending on the game development plan.
  • The build-related specifications (for example, buildspec.yml and Dockerfile). These files specify the build process. For simplicity, I only included the Docker build process to convey the speed of continuous integration. The process can easily be extended to include the game compilation and linking process as well.
  • The Docker artifacts for containerizing the game engine and the game binaries (for example, start.sh and start.py). These scripts usually are maintained by the game DevOps teams and updated outside of the regular game development plan. More details about these scripts can be found in a sample that describes how to deploy a game-server in Amazon EKS.
  • The deployment specifications (for example, eks-spec) specify the Kubernetes game-server deployment specs. This is for reference only, since the CD process usually runs in a separate set of resources like staging EKS clusters, which are owned and maintained by a different team.

The game build process

The build process starts with any Git push event on the Git repository. The build process includes three core phases, denoted by pre_build, build, and post_build in multiplayersample-lmbr/buildspec.yml. A consolidated sketch of the buildspec appears after the following steps.

  1. The pre_build phase unzips the game-engine binaries and logs in to the container registry (Amazon ECR) to prepare.
  2. The build phase executes the docker build command that includes the multi-stage build.
    • The Dockerfile spec file describes the multi-stage image build process. It starts by adding the game-engine binaries to the Linux OS, ubuntu:18.04 in this example:

      FROM ubuntu:18.04
      ADD BinLinux64.Dedicated.tar /

    • It continues by adding the necessary packages to the game server (for example, ec2-metadata, boto3, libc, and Python) and the necessary scripts for controlling the game server runtime in EKS. These packages are only required for the CI/CD process, so they are only added there. This enables a clean decoupling between the necessary packages for development, integration, and deployment, and simplifies the process for both teams.

      RUN apt-get install -y python python-pip
      RUN apt-get install -y net-tools vim
      RUN apt-get install -y libc++-dev
      RUN pip install mcstatus ec2-metadata boto3
      ADD start.sh /start.sh
      ADD start.py /start.py

    • The second step copies the game engine from the previous stage (--from=0) to the next build stage. In this case, you copy the game engine binaries with two COPY Docker directives:

      COPY --from=0 /BinLinux64.Dedicated/* /BinLinux64.Dedicated/
      COPY --from=0 /BinLinux64.Dedicated/qtlibs /BinLinux64.Dedicated/qtlibs/

    • Finally, the game binaries are added as a separate layer on top of the game-engine layers, which concludes the build. Constant daily changes are expected in this layer, which is why it is packaged separately. If your game includes other abstractions, you can break this step into several discrete Docker image layers:

      ADD MultiplayerSample_pc_Paks_Dedicated /BinLinux64.Dedicated/

  3. The post_build phase pushes the game Docker image to the centralized container registry for further deployment to the various regional EKS clusters. In this phase, tag and push the new image to the designated container registry in ECR:

docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
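
Putting the three phases together, a minimal buildspec.yml might look like the following sketch. The exact commands in the sample repository may differ; the archive file name and the ECR login command here are assumptions:

version: 0.2
phases:
  pre_build:
    commands:
      # Unpack the compressed game-engine binaries (assumed file name)
      - gunzip BinLinux64.Dedicated.tar.gz
      # Log in to Amazon ECR
      - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
  build:
    commands:
      # Multi-stage build; unchanged layers come from the cache
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
  post_build:
    commands:
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG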

The game deployment process in EKS

At this point, you’ve pushed the updated image to the designated container registry in ECR (/$IMAGE_REPO_NAME:$IMAGE_TAG). This image is scheduled as a game server in an EKS cluster as a game-server Kubernetes deployment, as described in the sample.

In this example, I use imagePullPolicy: Always.


containers:
…
        image: /$IMAGE_REPO_NAME:$IMAGE_TAG/multiplayersample-build
        imagePullPolicy: Always
        name: multiplayersample
…

By using imagePullPolicy, you ensure that no one can circumvent Amazon ECR security. You can securely make ECR the single source of truth with regards to scheduled binaries. However, every pull transfers the image from ECR to the worker nodes via kubelet, the node agent. Given the size of a whole image combined with the frequency with which it is pulled, that would amount to a significant additional cost to your project.

However, Docker layers allow you to update only the layers that were modified, preventing a whole image update. Also, they enable secure image distribution. In this example, only the layer MultiplayerSample_pc_Paks_Dedicated is updated.
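
If you want to verify that consecutive builds share all but the top layer, you can compare layer digests locally (the tag names here are hypothetical):

# Shared layers show identical sha256 digests; only the game-binaries layer differs
docker inspect --format '{{json .RootFS.Layers}}' $IMAGE_REPO_NAME:build-41
docker inspect --format '{{json .RootFS.Layers}}' $IMAGE_REPO_NAME:build-42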

Proposed CI/CD process

The following diagram shows an example end-to-end architecture of a full-scale game-server deployment using EKS as the orchestration system, ECR as the container registry, and CodeBuild as the build engine.

Game developers merge changes to the Git repository that include both the preconfigured game-engine binaries and the game artifacts. Upon merge events, CodeBuild builds a multistage game-server image that is pushed to a centralized container registry hosted by ECR. At this point, DevOps teams in different Regions continuously schedule the image as a game server, pulling only the updated layer in the game server image. This keeps the entire game-server fleet running the same game binaries set, making for a secure deployment.


Try it out

I published two examples to guide you through the process of building an Amazon EKS cluster and deploying a containerized game server with large binaries.

Conclusion

Adopting CI/CD in game development improves the software development lifecycle by continuously deploying tested, updated game binaries. CI/CD in game development is usually hindered by the cost of distributing large binaries, in particular by cross-regional deployments.

Non-containerized paradigms require deployment of the full set of binaries, which is an expensive and time-consuming task. Containerized game-server binaries with AWS build tools and Amazon EKS-based regional clusters of game servers enable secure and cost-effective distribution of large binary sets to enable increased agility in today’s game development.

In this post, I demonstrated a reduction of more than 90% of the network traffic required by implementing an effective CI/CD system in a large-scale deployment of multiplayer game servers.

Updates to Amazon EKS Version Lifecycle

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/updates-to-amazon-eks-version-lifecycle/

Contributed by Nathan Taber and Michael Hausenblas

At re:Invent 2017 we introduced the Amazon Elastic Container Service for Kubernetes, or Amazon EKS for short. We consider these tenets as valid today as they were at launch:

  • EKS is a platform to run production-grade workloads. This means that security and reliability are our first priority. After that we focus on doing the heavy lifting for you in the control plane, including life cycle-related things like version upgrades.
  • EKS provides a native and upstream Kubernetes experience. This means, with EKS you get vanilla, un-forked Kubernetes. Of course, in keeping with our first tenet, we ensure the Kubernetes versions we run have security-related patches, even for older, supported versions, as quickly as possible. However, in terms of portability there’s no special sauce and no lock-in.
  • If you want to use additional AWS services, the integrations are as seamless as possible.
  • The EKS team in AWS actively contributes to the upstream Kubernetes project, both on the technical level and in the community, from communicating good practices to participating in SIGs and working groups.

The first two tenets are highlighted for a good reason: we aim to go in lock-step with the upstream release cadence as much as possible, including outcomes of SIG PM as well as the LTS Working Group. Given that running a service for production applications is our main focus, we want to make sure that you can rely on the Kubernetes we run for you. This includes, but is not limited to, security considerations around community support for ongoing bug fixes and patches for critical vulnerabilities and exposures (CVEs).

In this post, we want to give you a heads-up on upcoming changes in how Amazon EKS manages the lifecycle for Kubernetes versions, walk you through the process in general, and then have a look at a concrete example, Kubernetes version 1.10. This version happens to be the first version that will be deprecated on Amazon EKS.

But why now?

Glad you asked. It’s really all about security. Past a certain point (usually 1 year), the Kubernetes community stops releasing bug and CVE patches. Additionally, the Kubernetes project does not encourage CVE submission for deprecated versions. This means that vulnerabilities specific to an older version of Kubernetes may not even be reported, leaving users exposed with no notice in the case of a vulnerability. We consider this to be an unacceptable security posture for our customers.

Earlier this year we announced support for Kubernetes 1.12 in EKS. That, together with our commitment to support three Kubernetes versions at any given point in time and the fact that 1.13 will land very soon in EKS means that we have to deprecate 1.10, after which the three supported versions, unsurprisingly, will be 1.11, 1.12, and (you guessed it) 1.13. OK, with that out of the way, let’s have a look at the options you have to move to the latest Kubernetes versions with Amazon EKS and then dive into the update and deprecation process in greater detail:

  • Ideally, you test a new version and move to one of the three supported ones, in time (details below).
  • If you are still on a version we deprecate, you will be upgraded automatically, after some time (details, again, below).
  • If you’re using a deprecated version beyond a certain point and we can’t upgrade the cluster, we may deactivate it.

A quick Kubernetes release cycle refresher

In a nutshell, the Kubernetes versioning and release regime roughly follows a four-releases-per-year pattern, with the cadence varying between 70 and 130 days. It also lays out an expectation in terms of upgrades:

We expect users to stay reasonably up-to-date with the versions of Kubernetes they use in production, but understand that it may take time to upgrade, especially for production-critical components.

The formal API versioning allows for a strict deprecation policy which states, amongst other things, that stable (GA) API support is “12 months or 3 releases (whichever is longer)”.

Now that we’re on the same page about how upstream Kubernetes releases are managed, let’s have a look at how we at AWS implement the process in EKS.

The EKS Process

In line with the Kubernetes community support for Kubernetes versions, Amazon EKS is committed to running at least three production-ready versions of Kubernetes at any given time, with a fourth version in deprecation. A new Kubernetes version is released as generally available by the Kubernetes project every 70 to 130 days (we take the average of 90 days for simplicity). New GA versions will be supported by EKS some time after the GA release (typically at the first patch version release, 1.XX.1, but sometimes later). This means that the total time a version is in production with EKS should be roughly 270 days.

We will announce the deprecation of a given Kubernetes version (n) at least 60 days before the deprecation date and over time, will align the deprecation of a Kubernetes version on EKS to be on or after the date the Kubernetes project stops supporting the version upstream.

For example, we will announce deprecation of version 1.10 while 1.12 is available for EKS and complete the deprecation process after version 1.13 is available for EKS. We will announce the deprecation of 1.11 after 1.13 is available and complete the deprecation after 1.14 is available for EKS.

The following table shows how this will work:

| EKS Version      | Today | Soon | About +90 days | About +180 days | About +270 days |
|------------------|-------|------|----------------|-----------------|-----------------|
| Latest Available | 1.12  | 1.13 | 1.14           | 1.15            | 1.16            |
| Default          | 1.11  | 1.12 | 1.13           | 1.14            | 1.15            |
| Oldest           | 1.10  | 1.11 | 1.12           | 1.13            | 1.14            |
| In Deprecation   |       | 1.10 | 1.11           | 1.12            | 1.13            |

When we announce the deprecation, we will give customers a specific date when new cluster creation will be disabled for the version targeted for deprecation. On this date, EKS clusters running the version targeted for deprecation will begin to be updated to the next EKS-supported version of Kubernetes. This means that if the deprecated version is 1.10, clusters will be automatically updated to version 1.11. If a cluster is automatically updated by EKS, customers will need to update the version of their worker nodes after the update is complete. Kubernetes has compatibility between masters and workers for at least 2 versions, so 1.10 workers will continue to operate when orchestrated by a 1.11 control plane.

Upcoming deprecation of Kubernetes 1.10 in EKS

Amazon EKS will deprecate Kubernetes version 1.10 on July 22, 2019. On this day, you will no longer be able to create new 1.10 clusters and all EKS clusters running Kubernetes version 1.10 will be updated to the latest available platform version of Kubernetes version 1.11.

We recommend that all Amazon EKS customers update their 1.10 clusters to Kubernetes version 1.11 or 1.12 as soon as possible.
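
As a quick sketch, you can check a cluster’s current version and start the managed update from the AWS CLI (the cluster name is a placeholder):

# Check which Kubernetes version the cluster is running
aws eks describe-cluster --name my-cluster --query cluster.version --output text

# Start the managed, in-place update to 1.11
aws eks update-cluster-version --name my-cluster --kubernetes-version 1.11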


Wrapping up

What can you do today to prepare? Well, first off, internalize the timeline and try to align internal processes with it. Our documentation has more information about the EKS Kubernetes version deprecation process and EKS updates. If you have any questions, send us a note on our version deprecation issue in the public containers roadmap on GitHub.

Enabling DNS resolution for Amazon EKS cluster endpoints

Post Syndicated from Anuneet Kumar original https://aws.amazon.com/blogs/compute/enabling-dns-resolution-for-amazon-eks-cluster-endpoints/

This post is contributed by Jeremy Cowan – Sr. Container Specialist Solution Architect, AWS

By default, when you create an Amazon EKS cluster, the Kubernetes cluster endpoint is public. While it is accessible from the internet, access to the Kubernetes cluster endpoint is restricted by AWS Identity and Access Management (IAM) and Kubernetes role-based access control (RBAC) policies.

At some point, you may need to configure the Kubernetes cluster endpoint to be private.  Changing your Kubernetes cluster endpoint access from public to private completely disables public access such that it can no longer be accessed from the internet.

In fact, a cluster that has been configured to only allow private access can only be accessed from the following:

  • The VPC where the worker nodes reside
  • Networks that have been peered with that VPC
  • A network that has been connected to AWS through AWS Direct Connect (DX) or a virtual private network (VPN)

However, the name of the Kubernetes cluster endpoint is only resolvable from the worker node VPC, for the following reasons:

  • The Amazon Route 53 private hosted zone that is created for the endpoint is only associated with the worker node VPC.
  • The private hosted zone is created in a separate AWS managed account and cannot be altered.

For more information, see Working with Private Hosted Zones.

This post explains how to use Route 53 inbound and outbound endpoints to resolve the name of the cluster endpoints when a request originates outside the worker node VPC.

Route 53 inbound and outbound endpoints

Route 53 inbound and outbound endpoints allow you to simplify the configuration of hybrid DNS.  DNS queries for AWS resources are resolved by Route 53 resolvers and DNS queries for on-premises resources are forwarded to an on-premises DNS resolver. However, you can also use these Route 53 endpoints to resolve the names of endpoints that are only resolvable from within a specific VPC, like the EKS cluster endpoint.

The following diagrams show how the solution works:

  • A Route 53 inbound endpoint is created in each worker node VPC and associated with a security group that allows inbound DNS requests from external subnets/CIDR ranges.
  • If the requests for the Kubernetes cluster endpoint originate from a peered VPC, those requests must be routed through a Route 53 outbound endpoint.
  • The outbound endpoint, like the inbound endpoint, is associated with a security group that allows inbound requests that originate from the peered VPC or from other VPCs in the Region.
  • A forwarding rule is created for each Kubernetes cluster endpoint.  This rule routes the request through the outbound endpoint to the IP addresses of the inbound endpoints in the worker node VPC, where it is resolved by Route 53.
  • The results of the DNS query for the Kubernetes cluster endpoint are then returned to the requestor.

If the request originates from an on-premises environment, you forego creating the outbound endpoints. Instead, you create a forwarding rule to forward requests for the Kubernetes cluster endpoint to the IP address of the Route 53 inbound endpoints in the worker node VPC.

Solution overview

For this solution, follow these steps:

  • Create an inbound endpoint in the worker node VPC.
  • Create an outbound endpoint in a peered VPC.
  • Create a forwarding rule for the outbound endpoint that sends requests to the Route 53 resolver for the worker node VPC.
  • Create a security group rule to allow inbound traffic from a peered network.
  • (Optional) Create a forwarding rule in your on-premises DNS for the Kubernetes cluster endpoint.

Prerequisites

EKS requires that you enable DNS hostnames and DNS resolution in each worker node VPC when you change the cluster endpoint access from public to private.  It is also a prerequisite for this solution and for all solutions that use Route 53 private hosted zones.

In addition, you need a route that connects your on-premises network or VPC with the worker node VPC.  In a multi-VPC environment, this can be accomplished by creating a peering connection between two or more VPCs and updating the route table in those VPCs. If you’re connecting from an on-premises environment across a DX or an IPsec VPN, you need a route to the worker node VPC.
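
As a sketch, both VPC attributes can be enabled with the AWS CLI (the VPC ID is a placeholder):

# Enable DNS resolution and DNS hostnames for the worker node VPC
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-support '{"Value":true}'
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames '{"Value":true}'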

Configuring the inbound endpoint

When you provision an EKS cluster, EKS automatically provisions two or more cross-account elastic network interfaces onto two different subnets in your worker node VPC.  These network interfaces are primarily used when the control plane must initiate a connection with your worker nodes, for example, when you use kubectl exec or kubectl proxy. However, they can also be used by the workers to communicate with the Kubernetes API server.

When you change the EKS endpoint access to private, EKS associates a Route 53 private hosted zone with your worker node VPC.  Within this private hosted zone, EKS creates resource records for the cluster endpoint. These records correspond to the IP addresses of the two cross-account elastic network interfaces that were created in your VPC when you provisioned your cluster.

When the IP addresses of these cross-account elastic network interfaces change, for example, when EKS replaces unhealthy control plane nodes, the resource records for the cluster endpoint are automatically updated. This allows your worker nodes to continue communicating with the cluster endpoint when you switch to private access.  If you update the cluster to enable public access and disable private access, your worker nodes revert to using the public Kubernetes cluster endpoint.

By creating a Route 53 inbound endpoint in the worker node VPC, you give external clients a path to the VPC DNS resolver of the worker node VPC.  This endpoint is therefore capable of resolving the cluster endpoint.

Create an inbound endpoint in the worker node VPC

  1. In the Route 53 console, choose Inbound endpoints, Create Inbound endpoint.
  2. For Endpoint Name, enter a value such as <cluster_name>InboundEndpoint.
  3. For VPC in the Region, choose the VPC ID of the worker node VPC.
  4. For Security group for this endpoint, choose a security group that allows clients or applications from other networks to access this endpoint. For an example, see the Route 53 resolver diagram shown earlier in the post.
  5. Under the IP addresses section, choose an Availability Zone that corresponds to a subnet in your VPC.
  6. For IP address, choose Use an IP address that is selected automatically.
  7. Repeat steps 5 and 6 for the second IP address.
  8. Choose Submit.

Or, run the following AWS CLI command:

export DATE=$(date +%s)
export INBOUND_RESOLVER_ID=$(aws route53resolver create-resolver-endpoint --name 
<name> --direction INBOUND --creator-request-id $DATE --security-group-ids <sgs> \
--ip-addresses SubnetId=<subnetId>,Ip=<IP address> SubnetId=<subnetId>,Ip=<IP address> \
| jq -r .ResolverEndpoint.Id)
aws route53resolver list-resolver-endpoint-ip-addresses --resolver-endpoint-id \
$INBOUND_RESOLVER_ID | jq .IpAddresses[].Ip

This outputs the IP addresses assigned to the inbound endpoint.

When you are done creating the inbound endpoint, select the endpoint from the console and choose View details.  This shows you a summary of the configuration for the endpoint.  Record the two IP addresses that were assigned to the inbound endpoint, as you need them later when configuring the forwarding rule.

Connecting from a peered VPC

An outbound endpoint is used to send DNS requests that cannot be resolved “locally” to an external resolver based on a set of rules.

If you are connecting to the EKS cluster from a peered VPC, create an outbound endpoint and forwarding rule in that VPC or expose an outbound endpoint from another VPC. For more information, see Forwarding Outbound DNS Queries to Your Network.

Create an outbound endpoint

  1. In the Route 53 console, choose Outbound endpoints, Create outbound endpoint.
  2. For Endpoint name, enter a value such as <cluster_name>OutboundEndpoint.
  3. For VPC in the Region, select the VPC ID of the VPC where you want to create the outbound endpoint, for example the peered VPC.
  4. For Security group for this endpoint, choose a security group that allows clients and applications from this or other network VPCs to access this endpoint. For an example, see the Route 53 resolver diagram shown earlier in the post.
  5. Under the IP addresses section, choose an Availability Zone that corresponds to a subnet in the peered VPC.
  6. For IP address, choose Use an IP address that is selected automatically.
  7. Repeat steps 5 and 6 for the second IP address.
  8. Choose Submit.

Or, run the following AWS CLI command:

export DATE=$(date +%s)
export OUTBOUND_RESOLVER_ID=$(aws route53resolver create-resolver-endpoint --name <name> \
--direction OUTBOUND --creator-request-id $DATE --security-group-ids <sgs> \
--ip-addresses SubnetId=<subnetId>,Ip=<IP address> SubnetId=<subnetId>,Ip=<IP address> \
| jq -r .ResolverEndpoint.Id)
aws route53resolver list-resolver-endpoint-ip-addresses --resolver-endpoint-id \
$OUTBOUND_RESOLVER_ID | jq .IpAddresses[].Ip

This outputs the IP addresses that get assigned to the outbound endpoint.

Create a forwarding rule for the cluster endpoint

A forwarding rule is used to send DNS requests that cannot be resolved by the local resolver to another DNS resolver.  For this solution to work, create a forwarding rule for each cluster endpoint to resolve through the outbound endpoint. For more information, see Values That You Specify When You Create or Edit Rules.

  1. In the Route 53 console, choose Rules, Create rule.
  2. Give your rule a name, such as <cluster_name>Rule.
  3. For Rule type, choose Forward.
  4. For Domain name, type the name of the cluster endpoint for your EKS cluster.
  5. For VPCs that use this rule, select all of the VPCs to which this rule should apply.  If you have multiple VPCs that must access the cluster endpoint, include them in the list of VPCs.
  6. For Outbound endpoint, select the outbound endpoint to use to send DNS requests to the inbound endpoint of the worker node VPC.
  7. Under the Target IP addresses section, enter the IP addresses of the inbound endpoint that corresponds to the EKS endpoint that you entered in the Domain name field.
  8. Choose Submit.

Or, run the following AWS CLI command:

export DATE=$(date +%s)
aws route53resolver create-resolver-rule --name <name> --rule-type FORWARD \
--creator-request-id $DATE --domain-name <cluster_endpoint> --target-ips \
Ip=<IP of inbound endpoint>,Port=53 --resolver-endpoint-id <Id of outbound endpoint>
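
Note that creating the rule from the CLI does not associate it with any VPCs; the VPC selection from console step 5 is a separate call:

# Associate the forwarding rule with each VPC that must resolve the cluster endpoint
aws route53resolver associate-resolver-rule --resolver-rule-id <rule ID> --vpc-id <VPC ID>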

Accessing the cluster endpoint

After creating the inbound and outbound endpoints and the DNS forwarding rule, you should be able to resolve the name of the cluster endpoints from the peered VPC.

$ dig 9FF86DB0668DC670F27F426024E7CDBD.sk1.us-east-1.eks.amazonaws.com 

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.68.rc1.58.amzn1 <<>> 9FF86DB0668DC670F27F426024E7CDBD.sk1.us-east-1.eks.amazonaws.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7168
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;9FF86DB0668DC670F27F426024E7CDBD.sk1.us-east-1.eks.amazonaws.com. IN A
;; ANSWER SECTION:
9FF86DB0668DC670F27F426024E7CDBD.sk1.us-east-1.eks.amazonaws.com. 60 IN A 192.168.109.77
9FF86DB0668DC670F27F426024E7CDBD.sk1.us-east-1.eks.amazonaws.com. 60 IN A 192.168.74.42
;; Query time: 12 msec
;; SERVER: 172.16.0.2#53(172.16.0.2)
;; WHEN: Mon Apr 8 22:39:05 2019
;; MSG SIZE rcvd: 114

Before you can access the cluster endpoint, you must add the IP address range of the peered VPCs to the EKS control plane security group. For more information, see Tutorial: Creating a VPC with Public and Private Subnets for Your Amazon EKS Cluster.

Add a rule to the EKS cluster control plane security group

  1. In the EC2 console, choose Security Groups.
  2. Find the security group associated with the EKS cluster control plane.  If you used eksctl to provision your cluster, the security group is named as follows: eksctl-<cluster_name>-cluster/ControlPlaneSecurityGroup.
  3. Add a rule that allows port 443 inbound from the CIDR range of the peered VPC.
  4. Choose Save.
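
Alternatively, the same rule can be added with the AWS CLI (the security group ID and CIDR are placeholders):

# Allow HTTPS (443) from the peered VPC CIDR to the control plane security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 10.1.0.0/16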

Run kubectl

With the proper security group rule in place, you should now be able to issue kubectl commands from a machine in the peered VPC against the cluster endpoint.

$ kubectl get nodes
NAME                             STATUS    ROLES     AGE       VERSION
ip-192-168-18-187.ec2.internal   Ready     <none>    22d       v1.11.5
ip-192-168-61-233.ec2.internal   Ready     <none>    22d       v1.11.5

Connecting from an on-premises environment

To manage your EKS cluster from your on-premises environment, configure a forwarding rule in your on-premises DNS to forward DNS queries to the inbound endpoint of the worker node VPCs. I’ve provided brief descriptions for how to do this for BIND, dnsmasq, and Windows DNS below.

Create a forwarding zone in BIND for the cluster endpoint

Add the following to the BIND configuration file:

zone "<cluster endpoint FQDN>" {
    type forward;
    forwarders { <inbound endpoint IP #1>; <inbound endpoint IP #2>; };
};

Create a forwarding zone in dnsmasq for the cluster endpoint

If you’re using dnsmasq, add the --server=/<cluster endpoint FQDN>/<inbound endpoint IP> flag to the startup options.

Create a forwarding zone in Windows DNS for the cluster endpoint

If you’re using Windows DNS, create a conditional forwarder.  Use the cluster endpoint FQDN for the DNS domain and the IPs of the inbound endpoints for the IP addresses of the servers to which to forward the requests.
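
In PowerShell, the conditional forwarder might be created as follows (substitute your cluster endpoint FQDN and the inbound endpoint IPs recorded earlier):

# Create a conditional forwarder that sends queries for the cluster endpoint
# to the Route 53 inbound endpoints
Add-DnsServerConditionalForwarderZone -Name "<cluster endpoint FQDN>" -MasterServers <inbound endpoint IP #1>, <inbound endpoint IP #2>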

Add a security group rule to the cluster control plane

Follow the steps in “Add a rule to the EKS cluster control plane security group” earlier in this post. This time, use the CIDR of your on-premises network instead of the peered VPC.

Conclusion

When you configure the EKS cluster endpoint to be private only, its name can only be resolved from the worker node VPC. To manage the cluster from another VPC or your on-premises network, you can use the solution outlined in this post to create an inbound resolver for the worker node VPC.

This inbound endpoint is a feature that allows your DNS resolvers to easily resolve domain names for AWS resources. That includes the private hosted zone that gets associated with your VPC when you make the EKS cluster endpoint private. For more information, see Resolving DNS Queries Between VPCs and Your Network.  As always, I welcome your feedback about this solution.

Running your game servers at scale for up to 90% lower compute cost

Post Syndicated from Roshni Pary original https://aws.amazon.com/blogs/compute/running-your-game-servers-at-scale-for-up-to-90-lower-compute-cost/

This post is contributed by Yahav Biran, Chad Schmutzer, and Jeremy Cowan, Solutions Architects at AWS

Many successful video games, such as Fortnite: Battle Royale, Warframe, and Apex Legends, use a free-to-play model, which offers players access to a portion of the game without paying. Such games are no longer low quality; players expect premium-like quality. The business model is constrained on cost, and Amazon EC2 Spot Instances offer a viable low-cost compute option. Casual multiplayer games naturally fit the Spot offering. With the orchestration of Amazon EKS containers and the mechanisms available to minimize player impact and optimize cost when running multiplayer game-server workloads, both casual and hardcore multiplayer games fit the Spot Instance offering.

Spot Instances offer spare compute capacity available in the AWS Cloud at steep discounts compared to On-Demand Instances. Spot Instances enable you to optimize your costs and scale your application’s throughput up to 10 times for the same budget. Spot Instances are best suited for fault-tolerant workloads. Multiplayer game-servers are no exception: a game-server state is updated using real-time player inputs, which makes the server state transient. Game-server workloads can be disposable and take advantage of Spot Instances to save up to 90% on compute cost. In this blog, we share how to architect your game-server workloads to handle interruptions and effectively use Spot Instances.

Characteristics of game-server workloads

Simply put, multiplayer game servers spend most of their life updating current character position and state (mostly animation). The rest of the time is spent on image updates that result from combat actions, moves, and other game-related events. More specifically, game servers’ CPUs are busy doing network I/O operations by accepting client positions, calculating the new game state, and multi-casting the game state back to the clients. That makes a game server workload a good fit for general-purpose instance types for casual multiplayer games and, preferably, compute-optimized instance types for the hardcore multiplayer games.

AWS provides a wide variety of both compute-optimized (C5 and C4) and general-purpose (M5) instance types with Amazon EC2 Spot Instances. Because capacities fluctuate independently for each instance type in an Availability Zone, you can often get more compute capacity for the same price when using a wide range of instance types. For more information on Spot Instance best practices, see Getting Started with Amazon EC2 Spot Instances.

One solution that customers use for running dedicated game servers is Amazon GameLift. This solution deploys a fleet of Spot Instances in an AWS Region, and Amazon GameLift FleetIQ places new sessions on game servers based on player latencies, instance prices, and Spot Instance interruption rates so that you don’t need to worry about Spot Instance interruptions. For more information, see Reduce Cost by up to 90% with Amazon GameLift FleetIQ and Spot Instances on the AWS Game Tech Blog.

In other cases, you can use game-server deployment patterns like containers-based orchestration (such as Kubernetes, Swarm, and Amazon ECS) for deploying multiplayer game servers. Those systems manage a large number of game-servers deployed as Docker containers across several Regions. The rest of this blog focuses on this containerized game-server solution. Containers fit the game-server workload because they’re lightweight, start quickly, and allow for greater utilization of the underlying instance.

Why use Amazon EC2 Spot Instances?

A Spot Instance is a natural choice for running a disposable game-server workload because of its built-in two-minute interruption notice. The two-minute termination notification enables the game server to handle the interruption gracefully. We demonstrate two examples of notification handling, through Instance Metadata and Amazon CloudWatch. For more information, see the “Interruption handling” and “What if I want my game-server to be redundant?” segments later in this blog.

Spot Instances also offer a variety of EC2 instances types that fit game-server workloads, such as general-purpose and compute-optimized (C4 and C5). Finally, Spot Instances provide low termination rates. The Spot Instance Advisor can help you choose a good starting point for determining which instance types have lower historical interruption rates.

Interruption handling

Avoiding player impact is key when using Spot Instances. Here is a strategy to avoid player impact that we apply in the proposed reference architecture and code examples available at Spotable Game Server on GitHub. Specifically, for Amazon EKS, node drainage requires draining the node via the kubectl drain command. This makes the node unschedulable and evicts the pods currently running on the node with a graceful termination period (terminationGracePeriodSeconds) that might impact the player experience. As a result, pods continue to run while a signal is sent to the game to end it gracefully.

Node drainage

Node drainage requires an agent pod that runs as a DaemonSet on every Spot Instance host to pull potential spot interruption from Amazon CloudWatch or instance metadata. We’re going to use the Instance Metadata notification. The following describes how termination events are handled with node drainage:

  1. Launch the game-server pod with a default of 120 seconds (terminationGracePeriodSeconds). As an example, see this deploy YAML file on GitHub.
  2. Provision a worker node pool with a mixed instances policy of On-Demand and Spot Instances. It uses the Spot Instance allocation strategy with the lowest price. For example, see this AWS CloudFormation template on GitHub.
  3. Use the Amazon EKS bootstrap tool (/etc/eks/bootstrap.sh in the recommended AMI) to label each node with its instance lifecycle, either OnDemand or Spot. For example:
    • OnDemand: "--kubelet-extra-args --node-labels=lifecycle=ondemand,title=minecraft,region=uswest2"
    • Spot: "--kubelet-extra-args --node-labels=lifecycle=spot,title=minecraft,region=uswest2"
  4. A daemon set deployed on every node pulls the termination status from the instance metadata endpoint. When a termination notification arrives, the `kubectl drain node` command is executed, and a SIGTERM signal is sent to the game-server pod. You can see these commands in the batch file on GitHub; a minimal sketch of the watcher loop follows this list.
  5. The game server keeps running for the next 120 seconds to allow the game to notify the players about the incoming termination.
  6. No new game-server is scheduled on the node to be terminated because it’s marked as unschedulable.
  7. A notification to an external system such as a matchmaking system is sent to update the current inventory of available game servers.
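
The following is a minimal sketch of such a watcher loop, assuming the DaemonSet pod can reach the instance metadata service and has a kubeconfig with permission to drain nodes; the actual agent in the sample repository is more complete:

#!/bin/bash
# The spot/instance-action endpoint returns 404 until an interruption is
# scheduled, then a JSON document describing the action and time.
NODE_NAME=$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)
while true; do
  if curl -sf http://169.254.169.254/latest/meta-data/spot/instance-action > /dev/null; then
    # Cordon the node and evict pods, honoring the 120-second grace period
    kubectl drain "$NODE_NAME" --ignore-daemonsets --delete-local-data --grace-period=120
    # A notification to an external system (such as matchmaking) would go here
    break
  fi
  sleep 5
done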

Optimization strategies for Kubernetes specifications

This section describes a few recommended strategies for Kubernetes specifications that enable optimal game server placements on the provisioned worker nodes.

  • Use single Spot Instance Auto Scaling groups as worker nodes. To accommodate the use of multiple Auto Scaling groups, we use Kubernetes nodeSelector to control the game-server scheduling on any of the nodes in any of the Spot Instance–based Auto Scaling groups.
    nodeSelector:
      lifecycle: spot
      title: your game title

  • The lifecycle label is populated upon node creation through the AWS CloudFormation template in the following section:
    BootstrapArgumentsForSpotFleet:
      Description: Sets Node Labels to set lifecycle as Ec2Spot
      Default: "--kubelet-extra-args --node-labels=lifecycle=spot,title=minecraft,region=uswest2"
      Type: String

  • You might have a case where the incoming player actions are served by UDP and masking the interruption from the player is required. Here, the game-server allocator (a Kubernetes scheduler for us) schedules more than one game server as target upstream servers behind a UDP load balancer that multicasts any packet received to the set of game servers. After the scheduler terminates the game server upon node termination, the failover occurs seamlessly. For more information, see “What if I want my game-server to be redundant?” later in this blog.

Reference architecture

The following architecture describes an instance mix of On-Demand and Spot Instances in an Amazon EKS cluster of multiplayer game servers. Within a single VPC, control plane node pools (Master Host and Storage Host) are highly available and thus run On-Demand Instances. The game-server hosts/nodes use a mix of Spot and On-Demand Instances. The control plane’s API server is accessible via an Amazon Elastic Load Balancing Application Load Balancer with a preconfigured allowed list.

What if I want my game server to be redundant?

A game server is a sessionful workload, but it traditionally runs as a single dedicated game server instance with no redundancy. For game servers that use TCP as the transport network layer, AWS offers Network Load Balancers as an option for distributing player traffic across multiple game servers’ targets. Currently, game servers that use UDP don’t have similar load balancer solutions that add redundancy to maintain a highly available game server.

This section proposes a solution for the case where game servers deployed as containerized Amazon EKS pods use UDP as the network transport layer and are required to be highly available. We’re using the UDP load balancer because of the Spot Instances, but the choice isn’t limited to when you’re using Spot Instances.

The following diagram shows a reference architecture for the implementation of a UDP load balancer based on Amazon EKS. It requires a setup of an Amazon EKS cluster as suggested above and a set of components that simulate an architecture that supports multiplayer game services. For example, this includes a game-server inventory that captures the running game servers, their status, and allocation placement. The Amazon EKS cluster is on the left, and the proposed UDP load-balancer system is on the right. A new game server is reported to an Amazon SQS queue that persists in an Amazon DynamoDB table. When a player’s assignment is required, the match-making service queries an API endpoint for an optimal available game server through the game-server inventory that uses the DynamoDB tables.

The solution includes the following main components:

  • The game server (see mockup-udp-server at GitHub). This is a simple UDP socket server that accepts a delta of a game state from connected players and multicasts the updated state based on pseudo computation back to the players. It’s a single threaded server whose goal is to prove the viability of UDP-based load balancing in dedicated game servers. The model presented here isn’t limited to this implementation. It’s deployed as a single-container Kubernetes pod that uses hostNetwork: true for network optimization.
  • The load balancer (udp-lb). This is a containerized NGINX server loaded with the stream module. The load-balancer upstream set is configured upon initialization based on the dedicated game-server state that is stored in the DynamoDB table game-server-status-by-endpoint. Available load balancer instances are also stored in a DynamoDB table, lb-status-by-endpoint, to be used by core game services such as a matchmaking service. A minimal configuration sketch follows this list.
  • An Amazon SQS queue that captures the initialization and termination of game servers and load balancers instances deployed in the Kubernetes cluster.
  • DynamoDB tables that persist the state of the cluster with regards to the game servers and load balancer inventory.
  • An API operation based on AWS Lambda (game-server-inventory-api-lambda) that serves the game servers and load balancers for an updated list of resources available. The operation supports /get-available-gs needed for the load balancer to set its upstream target game servers. It also supports /set-gs-busy/{endpoint} for labeling already claimed game servers from the available game servers inventory.
  • A Lambda function (game-server-status-poller-lambda) that the Amazon SQS queue triggers and that populates the DynamoDB tables.
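
For illustration, the NGINX stream configuration that udp-lb generates at startup might look roughly like the following; the port and upstream addresses are placeholders that would be filled in from the game-server inventory:

stream {
    # Upstream set built from the /get-available-gs response at initialization
    upstream game_servers {
        server 10.0.1.21:7777;
        server 10.0.2.34:7777;
    }
    server {
        listen 7777 udp;
        proxy_pass game_servers;
    }
}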

Scheduling mechanism

Our goal in this example is to reduce the chance that two game servers that serve the same load-balancer game endpoint are interrupted at the same time. Therefore, we need to prevent the scheduling of the same game servers (mockup-UDP-server) on the same host. This example uses advanced scheduling in Kubernetes where the pod affinity/anti-affinity policy is being applied.

We define two soft labels, mockup-grp1 and mockup-grp2, in the podAntiAffinity section as follows:

      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - mockup-grp1
              topologyKey: "kubernetes.io/hostname"

The requiredDuringSchedulingIgnoredDuringExecution tells the scheduler that the subsequent rule must be met upon pod scheduling. The rule says that pods that carry the value of key: “app” mockup-grp1 will not be scheduled on the same node as pods with key: “app” mockup-grp2 due to topologyKey: “kubernetes.io/hostname”.

When a load balancer pod (udp-lb) is scheduled, it queries the game-server-inventory-api endpoint for two game-server pods that run on different nodes. If this request isn’t fulfilled, the load balancer pod enters a crash loop until two available game servers are ready.

Try it out

We published two examples that guide you on how to build an Amazon EKS cluster that uses Spot Instances. The first example, Spotable Game Server, creates the cluster, deploys Spot Instances, Dockerizes the game server, and deploys it. The second example, Game Server Glutamate, enhances the game-server workload and enables redundancy as a mechanism for handling Spot Instance interruptions.

Conclusion

Multiplayer game servers have relatively short-lived processes that last from a few minutes to a few hours. The current average observed life span of Spot Instances in US and EU Regions ranges from a few hours to a few days, which makes Spot Instances a good fit for game servers. Amazon GameLift FleetIQ offers native and seamless support for Spot Instances, and Amazon EKS offers mechanisms to significantly minimize the probability of interrupting the player experience. This makes Spot Instances an attractive option for not only casual multiplayer game servers but also hardcore game servers. Game studios that use Spot Instances for multiplayer game servers can save up to 90% of the compute cost, thus benefiting them as well as delighting their players.

Making Cluster Updates Easy with Amazon EKS

Post Syndicated from Brandon Chavis original https://aws.amazon.com/blogs/compute/making-cluster-updates-easy-with-amazon-eks/

Kubernetes is rapidly evolving, with frequent feature releases, functionality updates, and bug fixes. Additionally, AWS periodically changes the way it configures Amazon Elastic Container Service for Kubernetes (Amazon EKS) to improve performance, support bug fixes, and enable new functionality. Previously, moving to a new Kubernetes version required you to re-create your cluster and migrate your applications. This is a time-consuming process that can result in application downtime.

Today, I’m excited to announce that EKS now performs managed, in-place cluster upgrades for both Kubernetes and EKS platform versions. This simplifies cluster operations and lets you quickly take advantage of the latest Kubernetes features, as well as the updates to EKS configuration and security patches, without any downtime. EKS also now supports Kubernetes version 1.11.5 for all new EKS clusters.

Updates for Kubernetes and EKS

There are two types of updates that you can apply to your EKS cluster, Kubernetes version updates and EKS platform version updates. Today, EKS supports upgrades between Kubernetes minor versions 1.10 and 1.11.

As new Kubernetes versions are released and validated for use with EKS, we will support three stable Kubernetes versions as part of the update process at any given time.

EKS platform versions

The EKS platform version contains Kubernetes patches and changes to the API server configuration. Platform versions are separate from but associated with Kubernetes minor versions.

When a new Kubernetes version is made available for EKS, its initial control plane configuration is released as the “eks.1” platform version. AWS releases new platform versions as needed to enable Kubernetes patches. AWS also releases new versions when there are EKS API server configuration changes that could affect cluster behavior.

Using this versioning scheme makes it possible to independently update the configuration of different Kubernetes versions. For example, AWS might need to release a patch for Kubernetes version 1.10 that is incompatible with Kubernetes version 1.11.

Currently, platform version updates are automatic. AWS plans to provide manual control over platform version updates through the UpdateClusterVersion API operation in the future.

Using the update API operations

There are three new EKS API operations to enable cluster updates:

  • UpdateClusterVersion
  • ListUpdates
  • DescribeUpdates

The UpdateClusterVersion operation can be used through the EKS CLI to start a cluster update between Kubernetes minor versions:

aws eks update-cluster-version --name Your-EKS-Cluster --kubernetes-version 1.11

You only need to pass in a cluster name and the desired Kubernetes version. You do not need to pick a specific patch version for Kubernetes. We pick patch versions that are stable and well-tested. This CLI command returns an “update” API object with several important pieces of information:

{
    "update": {
        "updateId": "UUID",
        "updateStatus": "PENDING",
        "updateType": "VERSION-UPDATE",
        "createdAt": "Timestamp"
    }
}

This update object lets you track the status of your requested modification to your cluster. It can show you whether there was an error due to a misconfiguration on your cluster, and whether the update is in progress, has completed, or has failed.

You can also list and describe the status of the update independently, using the following operations:

aws eks list-updates --name Your-EKS-Cluster

This returns the in-flight updates for your cluster:

{
    "updates": [
        "UUID-1",
        "UUID-2"
    ],
    "nextToken": null
}

Finally, you can also describe a particular update to see details about the update’s status:

aws eks describe-update --name Your-EKS-Cluster --update-id UUID

{
    "update": {
        "updateId": "UUID",
        "updateStatus": "FAILED",
        "updateType": "VERSION-UPDATE",
        "createdAt": "Timestamp",
        "error": {
            "errorCode": "DependentResourceNotFound",
            "errorMessage": "The Role used for creating the cluster is deleted.",
            "resources": ["aws:iam:arn:role"]
        }
    }
}

Considerations when updating

New Kubernetes versions introduce significant changes. I highly recommend that you test the behavior of your application against a new Kubernetes version before performing the update on a production cluster.

Generally, I recommend integrating EKS into your existing CI workflow to test how your application behaves on a new version before updating your production clusters.
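For example, one way to stage such a test is to create a short-lived cluster on the target version and point your test suite at it before touching production. The role ARN, subnet IDs, and security group below are placeholders for your own values:

aws eks create-cluster --name test-cluster-1-11 --kubernetes-version 1.11 \
    --role-arn arn:aws:iam::111122223333:role/eksServiceRole \
    --resources-vpc-config subnetIds=subnet-a1b2c3d4,subnet-e5f6a7b8,securityGroupIds=sg-0a1b2c3d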

Worker node updates

Today, EKS does not update your Kubernetes worker nodes when you update the EKS control plane. You are responsible for updating EKS worker nodes. You can find an overview of this process in Worker Node Updates.

The EKS team releases a set of EKS-optimized AMIs for worker nodes that correspond with each version of Kubernetes supported by EKS. You can find these AMIs listed in the documentation, and you can find the build configuration in a version-specific branch of the Amazon EKS AMI GitHub repository.

Getting started

You can start using Kubernetes version 1.11 today for all new EKS clusters. Use cluster updates to move to version 1.11 for all existing EKS clusters. You can learn more about the update process and APIs in our documentation.

AWS Cloud Map: Easily create and maintain custom maps of your applications

Post Syndicated from Abby Fuller original https://aws.amazon.com/blogs/aws/aws-cloud-map-easily-create-and-maintain-custom-maps-of-your-applications/

Companies are increasingly building their applications as microservices (many separate services that each do a single job). Microservices often allow companies to iterate and deploy more quickly. Many of these microservice-based modern applications are built using various types of cloud resources and deployed on dynamically changing infrastructure. Previously, you had to use configuration files to manage the location of your application resources. However, dependencies in a microservices-based application can quickly become too complex to easily manage through configuration files. Additionally, many applications are built using containers that scale dynamically, reacting to changes in traffic load. That increases your application's responsiveness, but poses a new class of problem: your application components now need to discover and connect to upstream services at runtime. This problem of connectivity in dynamically changing infrastructures and microservices is commonly addressed by service discovery.

Introducing AWS Cloud Map

 

AWS Cloud Map keeps track of all your application components, their locations, attributes, and health status. Now your applications can simply query AWS Cloud Map using the AWS SDK, API, or even DNS to discover the locations of their dependencies. That allows your applications to scale dynamically and connect to upstream services directly, increasing their responsiveness.

When you register your web services and cloud resources in AWS Cloud Map, you can describe them using custom attributes, such as deployment stage and version. Your applications can then make discovery calls specifying the required deployment stage and version. AWS Cloud Map returns the locations of resources that match the supplied parameters. It simplifies your deployments and reduces the operational complexity for your applications.

Integrated health checking for IP-based resources, registered with AWS Cloud Map, automatically stops routing traffic to unhealthy endpoints. Additionally, you have APIs to describe the health status of your services, so that you can learn about potential issues with your infrastructure. That increases the resilience of your applications.

AWS Cloud Map in Action
Getting started with AWS Cloud Map is easy. You can use the AWS console or CLI to create a namespace, such as myapp.com. For this example, I'll use the CLI. Let's create a namespace:

aws servicediscovery create-public-dns-namespace --name myapp.com

At this point, you'll need to decide whether you want your applications to discover resources only via the AWS SDK and API calls, or if you need optional discovery via DNS. When you enable DNS discovery for a namespace, you'll need to provide IP addresses for all the resources that you register. If you plan to register other cloud resources, such as DynamoDB tables by ARN or the URLs of the APIs deployed on Amazon API Gateway, you need to select API discovery mode.
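If you decide you only need discovery through the AWS SDK and API calls, a sketch of creating an HTTP namespace instead looks like this:

aws servicediscovery create-http-namespace --name myapp.com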

Once your namespace is created, it's time to create services. A service represents your application components, such as users, auth, or payment, and can comprise many dynamically changing resources. You can specify a friendly name for your service, then select the DNS discovery and health checking options. You can create a service like this:

aws servicediscovery create-service --name frontend --namespace-id %namespace_id%

After you create a service, you can register service instances with custom attributes:

aws servicediscovery register-instance --service-id %service_id% --instance-id %id%
--attributes AWS_INSTANCE_IPV4=54.20.10.1,stage=beta,version=1.0,active=yes

aws servicediscovery register-instance --service-id %service_id% --instance-id %id%
--attributes AWS_INSTANCE_IPV4=54.20.10.2,stage=beta,version=2.0,active=no

Now, your applications can make API calls to discover the service instances, optionally providing query parameters to filter the results:

aws servicediscovery discover-instances --namespace-name myapp.com --service-name frontend --query-parameters version=1.0,active=yes
-->
{
    "Instances": [
        {
            "InstanceId": "1",
            "NamespaceName": "myapp.com",
            "ServiceName": "frontend",
            "HealthStatus": "HEALTHY",
            "Attributes": {
                "version": "1.0",
                "active": "yes",
                "stage": "beta",
                "AWS_INSTANCE_IPV4": "54.20.10.1"
            }
        }
    ]
}

And that’s it! Amazon Elastic Container Service (ECS) and AWS Fargate are tightly integrated with AWS Cloud Map. When you create your service and enable service discovery, all the task instances are automatically registered in AWS Cloud Map on scale up, and deregistered on scale down. ECS also ensures that only healthy task instances are returned on discovery calls by always publishing up-to-date health information to AWS Cloud Map.
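To give a flavor of that integration, here is a sketch of creating an ECS service with a Cloud Map service registry attached; the cluster name, task definition, and registry ARN are placeholder values:

aws ecs create-service --cluster my-cluster --service-name frontend \
    --task-definition frontend-task:1 --desired-count 3 \
    --service-registries registryArn=arn:aws:servicediscovery:us-west-2:111122223333:service/srv-example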

For Amazon Elastic Container Service for Kubernetes (EKS), you can automatically publish the external IPs of the services running in EKS in AWS Cloud Map. To do this, we’ve released an update to an open source project, ExternalDNS, to make Kubernetes resources discoverable via AWS Cloud Map. You can find out more details about Kubernetes External DNS here.

 

Now Generally Available
You can start building your applications with AWS Cloud Map today, and enjoy its integration with Amazon ECS and EKS, a rich and secure API query interface, ubiquitous DNS name resolution, and integrated health checking support. Want to try it out? Head to https://console.aws.amazon.com/cloudmap/home. To test out the integration with ECS, head to https://console.aws.amazon.com/ecs/home and enable Service Discovery to get started.

How to rotate a WordPress MySQL database secret using AWS Secrets Manager in Amazon EKS

Post Syndicated from Paavan Mistry original https://aws.amazon.com/blogs/security/how-to-rotate-a-wordpress-mysql-database-secret-using-aws-secrets-manager-in-amazon-eks/

AWS Secrets Manager recently announced a feature update to rotate credentials for all Amazon RDS database types. This allows you to automatically rotate credentials for all types of databases hosted on Amazon RDS. In this post, I show you how to rotate database secrets for a non-RDS database using AWS Secrets Manager. I use a containerized WordPress application with a MySQL database to demonstrate the secret rotation.

Enabling regular rotation of database secrets helps secure application databases, protects customer data, and helps meet compliance requirements. You’ll use Amazon Elastic Container Service for Kubernetes (Amazon EKS) to help deploy, manage, and scale the WordPress application, and Secrets Manager to perform secret rotation on a containerized MySQL database.

Prerequisites

You’ll need an Amazon EKS or a Kubernetes cluster running and accessible for this post. Getting Started with Amazon EKS provides instructions on setting up the cluster and node environment. I recommend that you have a basic understanding of Kubernetes concepts, but Kubernetes reference document links are provided throughout this walk-through. For this post, I use the placeholder EKSClusterName to denote the existing EKS cluster. Remember to replace this with the name of your EKS cluster.

You’ll also need AWS Command Line Interface (AWS CLI) installed and configured on your machine. For this blog, I assume that the default AWS CLI region is set to Oregon (us-west-2) and that you have access to the AWS services described in this post. If you use other regions, you should check the availability of AWS services in those regions.

Architecture overview

 

Figure 1: Architecture and data flow diagram within Amazon EKS nodes

The architecture diagram shows the overall deployment architecture with two data flows, a user data flow and a Secrets Manager data flow within the EKS three-node cluster VPC (virtual private cloud).

To access the WordPress site, user data flows through an internet gateway and an external load balancer as part of the WordPress frontend Kubernetes deployment. The AWS Secrets Manager data flow uses the recently announced Secrets Manager VPC Endpoint and an AWS Lambda function within the VPC that rotates the MySQL database secret through an internal load balancer. MySQL database and the internal load balancer are provisioned as part of the WordPress MySQL Kubernetes deployment. The internal load balancer allows the database service to be exposed internally for Secrets Manager to perform the rotation within a VPC. It removes the need for the database to be exposed to the Internet for secret rotation, which is not a good security practice.

Solution overview

The blog post consists of the following steps:

  1. Host WordPress and MySQL services on Amazon EKS.
  2. Store the database secret in AWS Secrets Manager.
  3. Set up the rotation of the database secret using Secrets Manager VPC Endpoint.
  4. Rotate the database secret and update the WordPress frontend deployment.

Note: This post helps you implement MySQL secret rotation for non-RDS databases. It should be used as a reference guide on database secret rotation. For guidance on using WordPress on AWS, please refer to the WordPress: Best Practices on AWS whitepaper.

Step 1: Host WordPress and MySQL services on Amazon EKS

To deploy WordPress on an EKS cluster, you’ll use three YAML templates.

  1. First, create a StorageClass in Amazon EKS that uses Amazon Elastic Block Store (Amazon EBS) for persistent volumes, using the command below if the volumes don’t exist. Then, create a Kubernetes namespace. The Kubernetes namespace creates a separate environment within your Kubernetes cluster to create objects specific to this walk-through. This helps with object management and logical separation.
    
    kubectl create -f https://raw.githubusercontent.com/paavan98pm/eks-secret-rotation/master/templates/gp2-storage-class.yaml
    
    kubectl create namespace wp
    

  2. Next, use Kubernetes secrets to store your MySQL database password in your Amazon EKS cluster. The password is generated by the AWS Secrets Manager get-random-password API using the command below. This allows a random password to be created and stored in the Kubernetes secret object through the Secrets Manager API, without the need to create it manually. The get-random-password API lets you enforce password length and type restrictions based on your organizational security policy.
    
    kubectl create secret generic mysql-pass --from-literal=password=$(aws secretsmanager get-random-password --password-length 20 --no-include-space | jq -r .RandomPassword) --namespace=wp
    

You’ll use the Kubernetes secret mysql-pass to create your MySQL and WordPress deployments using the YAML manifests in the next step.

Deploying MySQL and WordPress in Amazon EKS

Now, you’ll run the MySQL and WordPress templates provided by the Kubernetes community to deploy your backend MySQL services, deployments, and persistent volume claims.

  1. To deploy an internal load balancer that AWS Secrets Manager will use to perform its secret rotation, you’ll use the Kubernetes service annotation service.beta.kubernetes.io/aws-load-balancer-internal within the MySQL service YAML. For WordPress deployment, you’ll add three replicas for availability across the three availability zones.
    
    kubectl create -f https://raw.githubusercontent.com/paavan98pm/eks-secret-rotation/master/templates/mysql-deployment.yaml --namespace=wp
    
    kubectl create -f https://raw.githubusercontent.com/paavan98pm/eks-secret-rotation/master/templates/wordpress-deployment.yaml --namespace=wp
    

    Run the commands above with the Kubernetes YAML manifests to create MySQL and WordPress services, deployments, and persistent volume claims. As shown in the architecture diagram, these services also provision an external and an internal load balancer. The external load balancer is accessible over the internet for your WordPress frontend, while you’ll use the internal load balancer to rotate database secrets. It takes a few minutes for the load balancers to be provisioned.

  2. Once the load balancers are provisioned, you should be able to access your WordPress site in your browser by running the following command. The browser will open to show the WordPress setup prompt in Figure 2.
    
    open http://$(kubectl get svc -l app=wordpress --namespace=wp -o=jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')
    

     

    Figure 2: WordPress setup page for the Amazon EKS deployment

  3. From here, follow the WordPress instructions to complete installation and set up credentials to access your WordPress administration page. This completes the hosting and setup of your WordPress blog on Amazon EKS.

Step 2: Store the database secret in AWS Secrets Manager

Next, you’re going to store your database secret in AWS Secrets Manager. AWS Secrets Manager is a fully managed AWS service that enables you to rotate, manage, and retrieve secrets such as database credentials and application programming interface keys (API keys) throughout their lifecycle. Follow the steps below to use Secrets Manager to store the MySQL database secret and the rotation configuration details.
 

Figure 3: Store a new secret in AWS Secrets Manager

From the Secrets Manager console, select Store a new secret to open the page shown in Figure 3 above. Follow the instructions below to store the MySQL database secret parameters.

  1. Select Credentials for other database and provide the username and password you created in the previous step. The username value will be root, and you should paste the password value from the output of the command below. This command copies the MySQL password stored in the Kubernetes secret object, allowing you to store it in Secrets Manager.

    Password:

    
    kubectl get secret mysql-pass -o=jsonpath='{.data.password}' --namespace=wp | base64 --decode | pbcopy
    

  2. Secrets Manager encrypts secrets by default using the service-specific encryption key for your AWS account. Since you're storing the secret for a MySQL database, choose the MySQL tile and provide the server details. For Server address, you can paste the output of the command below. The database name and port values are mysql and 3306, respectively.

    Server address:

    
    kubectl get svc -l app=wordpress-mysql --namespace=wp -o=jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}' | pbcopy
    

  3. Select Next and give the secret a name and a description that enables you to reference and manage the secret easily. I use the placeholder yourSecretName to reference your secret throughout the rest of my post; be sure to swap this placeholder with your own secret name.
  4. On the next screen, accept the default setting: Disable automatic rotation.
  5. Finally, select Store to store your MySQL secret in AWS Secrets Manager.

Step 3: Set up MySQL secret rotation using the Secrets Manager VPC endpoint

Next, you'll create a Lambda function within the Amazon EKS node cluster VPC to rotate this secret. Within the node cluster VPC, you're also going to configure the recently announced AWS Secrets Manager support for Amazon Virtual Private Cloud (VPC) endpoints powered by AWS PrivateLink.

Note: If you have a database hosted on Amazon RDS, Secrets Manager natively supports rotating secrets. However, if you’re using a container-based MySQL database, you must create and configure the Lambda rotation function for AWS Secrets Manager, and then provide the Amazon Resource Name (ARN) of the completed function to the secret.

Creating a Lambda rotation function

The following commands apply a WordPress MySQL rotation template to a new Lambda function. CloudFormation uses an AWS Serverless Application Repository template to automate most of the steps for you. The values you need to replace for your environment are denoted in angle brackets.

  1. The command below will return a ChangeSetId. It uses the Secrets Manager MySQL single user template from the AWS Serverless Application Repository and creates a CloudFormation Change Set by providing the Secrets Manager endpoint and Lambda function name as parameters. You need to change the Secrets Manager region within the endpoint parameter and provide a name for the Lambda function that you’re creating.
    
    aws serverlessrepo create-cloud-formation-change-set --application-id arn:aws:serverlessrepo:us-east-1:297356227824:applications/SecretsManagerRDSMySQLRotationSingleUser --stack-name MyLambdaCreationStack --parameter-overrides '[{"Name":"endpoint","Value":"https://secretsmanager.<REGION>.amazonaws.com"},{"Name":"functionName","Value":"<EKSMySQLRotationFunction>"}]' | jq -r '.ChangeSetId'
    

  2. The next command runs the change set that you just created to create a Lambda function. The change-set-name parameter comes from the ChangeSetId output of the previous command. Copy the ChangeSetId value into the instruction below.
    
    aws cloudformation execute-change-set --change-set-name <ChangeSetId>
    

  3. The following command grants Secrets Manager permission to call the Lambda function on your behalf. You should provide the Lambda function name that you set earlier.
    
    aws lambda add-permission --function-name <EKSMySQLRotationFunction> --principal secretsmanager.amazonaws.com --action lambda:InvokeFunction --statement-id SecretsManagerAccess
    

Configuring the Lambda rotation function to use Secrets Manager VPC Endpoint

Instead of connecting your VPC to the internet, you can connect directly to Secrets Manager through a private endpoint that you configure within your VPC. When you use a VPC service endpoint, communication between your VPC and Secrets Manager occurs entirely within the AWS network and requires no public internet access.

  1. To enable the AWS Secrets Manager VPC endpoint, first store the node instance security group in a NODE_INSTANCE_SG environment variable. Using the VPC ID from the EKS cluster, the command below filters the security groups attached to the nodes and stores them in that variable, which is used as an argument in the next step to create the Secrets Manager VPC endpoint.
    
    export NODE_INSTANCE_SG=$(aws ec2 describe-security-groups --filters Name=vpc-id,Values=$(aws eks describe-cluster --name <EKSClusterName> | jq -r '.cluster.resourcesVpcConfig.vpcId') Name=tag:aws:cloudformation:logical-id,Values=NodeSecurityGroup | jq -r '.SecurityGroups[].GroupId')
    

  2. Next, create a Secrets Manager VPC endpoint attached to the EKS node cluster subnets and the related security group. This command retrieves the VPC ID and Subnet IDs from the EKS cluster to create a Secrets Manager VPC endpoint.
    
    aws ec2 create-vpc-endpoint --vpc-id $(aws eks describe-cluster --name <EKSClusterName> | jq -r '.cluster.resourcesVpcConfig.vpcId') --vpc-endpoint-type Interface --service-name com.amazonaws.<region>.secretsmanager --subnet-ids $(aws eks describe-cluster --name <EKSClusterName> | jq -r '.cluster.resourcesVpcConfig.subnetIds | join(" ")') --security-group-id $NODE_INSTANCE_SG --private-dns-enabled
    

  3. Next, update the Lambda function to attach it to the EKS node cluster subnets and the related security group. This command updates the Lambda function’s configuration by retrieving its ARN and EKS cluster Subnet IDs and Security Group ID.
    
    aws lambda update-function-configuration --function-name $(aws lambda list-functions | jq -r '.Functions[] | select(.FunctionName == "<EKSMySQLRotationFunction>") | .FunctionArn') --vpc-config SubnetIds=$(aws eks describe-cluster --name <EKSClusterName> | jq -r '.cluster.resourcesVpcConfig.subnetIds | join(",")'),SecurityGroupIds=$NODE_INSTANCE_SG
    

Step 4: Rotate the database secret and update your WordPress frontend deployment

Now that you’ve configured the Lambda rotation function, use the rotate-secret command to schedule a rotation of this secret. The command below uses the previously stored secret name with the Lambda function ARN to rotate your secret automatically every 30 days. You can adjust the rotation frequency value, if you want. The minimum rotation frequency is 1 day.


aws secretsmanager rotate-secret --secret-id <yourSecretName> --rotation-lambda-arn $(aws lambda list-functions | jq -r '.Functions[] | select(.FunctionName == "<EKSMySQLRotationFunction>") | .FunctionArn') --rotation-rules AutomaticallyAfterDays=30

The figure below shows the Secrets Manager data flow for secret rotation using the VPC endpoint.
 

Figure 4: MySQL database secret rotation steps

Based on the frequency of the rotation, Secrets Manager will follow the steps below to perform automatic rotations:

  1. Secrets Manager will invoke <EKSMySQLRotationFunction> within the EKS node VPC using the Secrets Manager VPC endpoint.
  2. Using the credentials provided for MySQL database, <EKSMySQLRotationFunction> will rotate the database secret and update the secret value in Secrets Manager.

Once the Lambda function has executed, you can test the updated database secret value that’s stored in AWS Secrets Manager by using the command below.


aws secretsmanager get-secret-value --secret-id <yourSecretName>

Next, update the Kubernetes secret (mysql-pass) with the rotated secret stored in AWS Secrets Manager, then update the WordPress frontend deployment with the rotated MySQL password environment variable using the commands below. In production, you should automate these steps with the Kubernetes and AWS APIs so that they run each time a rotation completes.

The command below retrieves the rotated secret from Secrets Manager using the GetSecretValue API call and passes the updated value to the Kubernetes secret. The next command patches the WordPress deployment for the frontend WordPress containers to use the updated Kubernetes secret. It also adds a rotation date annotation for the deployment.


kubectl create secret generic mysql-pass --namespace=wp --from-literal=password=$(aws secretsmanager get-secret-value --secret-id <yourSecretName> | jq --raw-output '.SecretString' | jq -r .password) -o yaml --dry-run | kubectl replace -f -

kubectl patch deploy wordpress -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"rotation_date\":\"`date +'%s'`\"}}}}}" --namespace=wp

You should now be able to access the updated WordPress site with database secret rotation enabled.


open http://$(kubectl get svc -l app=wordpress --namespace=wp -o=jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')

Clean-up

To delete the EKS environment used in this walk-through, the secret in Secrets Manager, the accompanying Lambda function, and the VPC endpoint, run the following commands. You only need to follow these steps if you intend to remove everything created in this walk-through.


kubectl delete namespace wp

aws secretsmanager delete-secret --secret-id <yourSecretName> --force-delete-without-recovery

aws lambda delete-function --function-name <EKSMySQLRotationFunction>

aws ec2 delete-vpc-endpoints --vpc-endpoint-ids $(aws ec2 describe-vpc-endpoints | jq -r '.VpcEndpoints[] | select(.ServiceName == "com.amazonaws.<REGION>.secretsmanager") | .VpcEndpointId')

Pricing

Next, I review the pricing and estimated cost of this example. AWS Secrets Manager offers a 30-day trial period that starts when you store your first secret. Storage of each secret costs $0.40 per secret per month. For secrets that are stored for less than a month, the price is prorated based on the number of hours. There is an additional cost of $0.05 per 10,000 API calls. You can learn more by visiting the AWS Secrets Manager pricing service details page.

You pay $0.20 per hour for each Amazon EKS cluster that you create. You can use a single Amazon EKS cluster to run multiple applications by taking advantage of Kubernetes namespaces and IAM security policies. You pay for AWS resources (for example, EC2 instances or EBS volumes) that you create to run your Kubernetes worker nodes. You only pay for what you use, as you use it; there are no minimum fees and no upfront commitments. See detailed pricing information on the Amazon EC2 pricing page.

Assuming you allocate two hours to follow the blog instructions, the cost will be approximately $1, as the quick arithmetic after this list shows.

  1. Amazon EKS – 2 hours x ($0.20 per EKS cluster + 3 nodes x $0.096 per m5.large instance)
  2. AWS Secrets Manager – 2 hours x ($0.40 per secret per month / 30 days / 24 hours + $0.05 per 10,000 API calls)
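Working through the numbers: the EKS portion comes to 2 x ($0.20 + 3 x $0.096) ≈ $0.98, and the Secrets Manager portion to 2 x ($0.40 / 30 / 24) ≈ $0.001, plus a fraction of a cent for API calls, for a total just under $1.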

Summary

In this post, I showed you how to rotate WordPress database credentials in Amazon EKS using AWS Secrets Manager.

For more details on secrets management within Amazon EKS, check out the GitHub workshop for Kubernetes, particularly the section on ConfigMaps and Secrets, to understand the different secret management alternatives available. To get started using Kubernetes on AWS, open the Amazon EKS console. To learn more, read the EKS documentation. To get started managing secrets, open the Secrets Manager console. To learn more, read the Secrets Manager documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the EKS forum or Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

Author

Paavan Mistry

Paavan is a Security Specialist Solutions Architect at AWS where he enjoys solving customers’ cloud security, risk, and compliance challenges. Outside of work, he enjoys reading about leadership, politics, law, and human rights.

Compute Abstractions on AWS: A Visual Story

Post Syndicated from Massimo Re Ferre original https://aws.amazon.com/blogs/architecture/compute-abstractions-on-aws-a-visual-story/

When I joined AWS last year, I wanted to find a way to explain, in the easiest way possible, all the options it offers to users from a compute perspective. There are many ways to peel this onion, but I want to share a “visual story” that I have created.

I define the compute domain as “anything that has CPU and Memory capacity that allows you to run an arbitrary piece of code written in a specific programming language.” Your mileage may vary in how you define it, but this is broad enough that it should cover a lot of different interpretations.

A key part of my story is around the introduction of different levels of compute abstractions this industry has witnessed in the last 20 or so years.

Separation of duties

The start of my story is a line. In a cloud environment, this line defines the perimeter between the consumer role and the provider role. In the cloud, there are things that AWS will do and things that the consumer will do. The perimeter of these responsibilities varies depending on the services you opt to use. If you want to understand more about this concept, read the AWS Shared Responsibility Model documentation.

The different abstraction levels

The reason why the line above is oblique is because it needs to intercept different compute abstraction levels. If you think about what happened in the last 20 years of IT, we have seen a surge of different compute abstractions that changed the way people consume CPU and Memory resources. It all started with physical (x86) servers back in the 80s, and then we have seen the industry adding abstraction layers over the years (for example, hypervisors, containers, functions).

The higher you go in the abstraction levels, the more the cloud provider can add value and can offload the consumer from non-strategic activities. A lot of these activities tend to be “undifferentiated heavy lifting.” We define this as something that AWS customers have to do but that don’t necessarily differentiate them from their competitors (because those activities are table-stakes in that particular industry).

What we found is that supporting millions of customers on AWS requires a certain degree of flexibility in the services we offer because there are many different patterns, use cases, and requirements to satisfy. Giving our customers choices is something AWS always strives for.

A couple of final notes before we dig deeper. The way this story builds up through the blog post is aligned to the progression of the launch dates of the various services, with a few noted exceptions. Also, the services mentioned are all generally available and production-grade. For full transparency, the integration among some of them may still be work-in-progress, which I’ll call out explicitly as we go.

The instance (or virtual machine) abstraction

This is the very first abstraction we introduced on AWS back in 2006. Amazon Elastic Compute Cloud (Amazon EC2) is the service that allows AWS customers to launch instances in the cloud. When customers intercept us at this level, they retain responsibility of the guest operating system and above (middleware, applications, etc.) and their lifecycle. AWS has the responsibility for managing the hardware and the hypervisor including their lifecycle.

At the very same level of the stack there is also Amazon Lightsail, which “is the easiest way to get started with AWS for developers, small businesses, students, and other users who need a simple virtual private server (VPS) solution. Lightsail provides developers compute, storage, and networking capacity and capabilities to deploy and manage websites and web applications in the cloud.”

And this is how these two services appear in our story:

The container abstraction

With the rise of microservices, a new abstraction took the industry by storm in the last few years: containers. Containers are not a new technology, but the rise of Docker a few years ago democratized access. You can think of a container as a self-contained environment with soft boundaries that includes both your own application as well as the software dependencies to run it. Whereas an instance (or VM) virtualizes a piece of hardware so that you can run dedicated operating systems, a container technology virtualizes an operating system so that you can run separated applications with different (and often incompatible) software dependencies.

And now the tricky part. Modern containers-based solutions are usually implemented in two main logical pieces:

  • A containers control plane that is responsible for exposing the API and interfaces to define, deploy, and lifecycle containers. This is also sometimes referred to as the container orchestration layer.
  • A containers data plane that is responsible for providing capacity (as in CPU/Memory/Network/Storage) so that those containers can actually run and connect to a network. From a practical perspective this is typically a Linux host or less often a Windows host where the containers get started and wired to the network.

Arguably, in a specific compute abstraction discussion, the data plane is key, but it is as important to understand what’s happening for the control plane piece.

In 2014, Amazon launched a production-grade containers control plane called Amazon Elastic Container Service (ECS), which “is a highly scalable, high performance container management service that supports Docker … Amazon ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure.”

In 2017, Amazon also announced the intention to release a new service called Amazon Elastic Container Service for Kubernetes (EKS) based on Kubernetes, a successful open source containers control plane technology. Amazon EKS was made generally available in early June 2018.

Just like for ECS, the aim for this service is to free AWS customers from having to manage a containers control plane. In the past, AWS customers would spin up EC2 instances and deploy/manage their own Kubernetes masters (masters is the name of the Kubernetes hosts running the control plane) on top of an EC2 abstraction. However, we believe many AWS customers will leave the burden of managing this layer to AWS by consuming either ECS or EKS, depending on their use cases. A comparison between ECS and EKS is beyond the scope of this blog post.

You may have noticed that what we have discussed so far is about the container control plane. How about the containers data plane? This is typically a fleet of EC2 instances managed by the customer. In this particular setup, the containers control plane is managed by AWS while the containers data plane is managed by the customer. One could argue that, with ECS and EKS, we have raised the abstraction level for the control plane, but we have not yet really raised the abstraction level for the data plane as the data plane is still comprised of regular EC2 instances that the customer has responsibility for.

There is more on that later on but, for now, this is how the containers control plane and the containers data plane services appear:

The function abstraction

At re:Invent 2014, AWS introduced another abstraction layer: AWS Lambda. Lambda is an execution environment that allows an AWS customer to run a single function. So instead of having to manage and run a full-blown OS instance to run your code, or having to track all software dependencies in a user-built container to run your code, Lambda allows you to upload your code and let AWS figure out how to run it at scale.

What makes Lambda so special is its event-driven model. Not only can you invoke Lambda directly (for example, via the Amazon API Gateway), but you can trigger a Lambda function upon an event in another AWS service (for example, an upload to Amazon S3 or a change in an Amazon DynamoDB table).

The key point about Lambda is that you don’t have to manage the infrastructure underneath the function you are running. No need to track the status of the physical hosts, no need to track the capacity of the fleet, no need to patch the OS where the function will be running. In a nutshell, no need to spend time and money on the undifferentiated heavy lifting.

And this is how the Lambda service appears:

The bare metal abstraction

Also known as the “no abstraction.”

As recently as re:Invent 2017, we announced (the preview of) the Amazon EC2 bare metal instances. We made this service generally available to the public in May 2018.

This announcement is part of Amazon’s strategy to provide choice to our customers. In this case, we are giving customers direct access to hardware. To quote from Jeff Barr’s post:

“…. (AWS customers) wanted access to the physical resources for applications that take advantage of low-level hardware features such as performance counters and Intel® VT that are not always available or fully supported in virtualized environments, and also for applications intended to run directly on the hardware or licensed and supported for use in non-virtualized environments.”

This is how the bare metal Amazon EC2 i3.metal instance appears:

As a side note, and also as alluded to by Jeff, i3.metal is the foundational EC2 instance type on top of which VMware created their own VMware Cloud on AWS service. We are now offering the ability to any AWS user to provision bare metal instances. This doesn’t necessarily mean you can load your hypervisor of choice out of the box, but you can certainly do things you wouldn’t be able to do with a traditional EC2 instance (note: this was just a Saturday afternoon hack).

More seriously, a question I get often asked is whether users could install ESXi on i3.metal on their own. Today this cannot be done, but I’d be interested in hearing your use case for this.

The full container abstraction (for lack of a better term)

Now that we covered all the abstractions, it is time to go back and see if there are other optimizations we can provide for AWS customers. When we discussed the container abstraction, we called out that while there are two different fully managed containers control planes (ECS and EKS), there wasn’t a managed option for the data plane.

Some customers were (and still are) happy about being in full control of said instances. Others have been very vocal that they wanted to get out of the (undifferentiated heavy-lifting) business of managing the lifecycle of that piece of infrastructure.

Enter AWS Fargate, a production-grade service that provides compute capacity to AWS containers control planes. Practically speaking, Fargate is making the containers data plane fall into the “Provider space” responsibility. This means the compute unit exposed to the user is the container abstraction, while AWS will manage transparently the data plane abstractions underneath.

This is how the Fargate service appears:

Now ECS has two “launch types”: one called “EC2” (where your tasks get deployed on a customer-managed fleet of EC2 instances), and the other one called “Fargate” (where your tasks get deployed on an AWS-managed fleet of EC2 instances).

For EKS, the strategy will be very similar, but as of this writing it was not yet available. If you’re interested in some of the exploration being done to make this happen, this is a good read.

Conclusions

We covered the spectrum of abstraction levels available on AWS and how AWS customers can intercept them depending on their use cases and where they sit on their cloud maturity journey. Customers with a “lift & shift” approach may be more inclined to consume services on the left-hand side of the slide, whereas customers with a more mature cloud-native approach may be more interested in consuming services on the right-hand side of the slide.

In general, customers tend to use higher-level services to get out of the business of managing non-differentiating activities. For example, I recently talked to a customer interested in using Fargate. The trigger there was the fact that Fargate is ISO, PCI, SOC and HIPAA compliant, which was a huge time and money saver for them because it’s easier to point to an AWS document during an audit than having to architect and document for compliance the configuration of a DIY containers data plane.

As a recap, here’s our visual story with all the abstractions available:

I hope you found it useful. Any feedback is greatly appreciated.

About the author

Massimo is a Principal Solutions Architect at AWS. For about 25 years, he specialized on the x86 ecosystem starting with operating systems and virtualization technologies, and lately he has been head down learning about cloud and how application architectures are evolving in that space. Massimo has a blog at www.it20.info and his Twitter handle is @mreferre.

Run your Kubernetes Workloads on Amazon EC2 Spot Instances with Amazon EKS

Post Syndicated from Roshni Pary original https://aws.amazon.com/blogs/compute/run-your-kubernetes-workloads-on-amazon-ec2-spot-instances-with-amazon-eks/

Contributed by Madhuri Peri, Sr. EC2 Spot Specialist SA, and Shawn OConnor, AWS Enterprise Solutions Architect

Many organizations today are using containers to package source code and dependencies into lightweight, immutable artifacts that can be deployed reliably to any environment.

Kubernetes (K8s) is an open-source framework for automated scheduling and management of containerized workloads. In addition to master nodes, a K8s cluster is made up of worker nodes where containers are scheduled and run.

Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that removes the need to manage the installation, scaling, or administration of master nodes and the etcd distributed key-value store. It provides a highly available and secure K8s control plane.

This post demonstrates how to use Spot Instances as K8s worker nodes, and shows the areas of provisioning, automatic scaling, and handling interruptions (termination) of K8s worker nodes across your cluster.

What this post does not cover

This post focuses primarily on EC2 instance scaling. This post also assumes a default interruption mode of terminate for EC2 instances, though there are other interruption types, stop and hibernate. For stateless K8s sessions, I recommend choosing the interruption mode of terminate.

Spot Instances

Amazon EC2 Spot Instances are spare EC2 capacity that offer discounts of 70-90% over On-Demand prices. The Spot price is determined by long-term trends in supply and demand and the amount of On-Demand capacity on a particular instance size, family, Availability Zone, and AWS Region.
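You can inspect recent Spot prices for the instance types used later in this post with the following command; the Region and instance types are illustrative:

aws ec2 describe-spot-price-history --instance-types m4.large t2.medium \
    --product-descriptions "Linux/UNIX" \
    --start-time "$(date -u +%Y-%m-%dT%H:%M:%S)" --region us-west-2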

If the available On-Demand capacity of a particular instance type is depleted, the Spot Instance is sent an interruption notice two minutes ahead so that it can gracefully wrap things up. I recommend a diversified fleet of instances, with multiple instance types, created by Spot Fleets or EC2 Fleets.

You can use Spot Instances for various fault-tolerant and flexible applications. In a workload that uses container orchestration and management platforms like EKS or Amazon Elastic Container Service (Amazon ECS), the schedulers have built-in mechanisms to identify any pods or containers on these interrupted EC2 instances. The interrupted pods or containers are then replaced on other EC2 instances in the cluster.

Solution architecture

There are three goals to accomplish with this solution:

  1.  The cluster must scale automatically to match the demands of an application.
  2. Optimize for cost by using Spot Instances.
  3. The cluster must be resilient to Spot Instance interruptions.

These goals are accomplished with the following components:

  • Cluster Autoscaler: scales EC2 instances in or out. Code: open source. Deployment: K8s pod DaemonSet on On-Demand Instances.
  • Auto Scaling group: provisions Spot or On-Demand Instances. Code: AWS. Deployment: via CloudFormation.
  • Spot Instance interrupt handler: sets K8s nodes to drain state when the Spot Instance is interrupted. Code: open source. Deployment: K8s pod DaemonSet on all K8s nodes with the label lifecycle=Ec2Spot.

Here’s a diagram of the solution architecture.

There are a few important things to note in this architecture:

  • Cluster Autoscaler is being used to control all scaling activities, with changes to the MinSize and DesiredCapacity parameters of the Auto Scaling group. This separation of duties ensures that there are no race conditions.
  • The Auto Scaling groups are used purely to replace any lost instances automatically (for example, terminations or interruptions) and maintain the desired number of instances. There are no scaling policies attached to the groups.
  • Auto Scaling, at the time of this post, supports a single instance type. As noted by Jeff Barr’s post EC2 Fleet – Manage Thousands of On-Demand and Spot Instances with One Request, in H2 2018, Auto Scaling groups will support mixed instance types. At that point, multiple groups will not be required, and can collapse into a single group specifying all instance types.

Here’s a further breakdown on the components.

Cluster Autoscaler

Automatic scaling in K8s comes in two forms:

  • Horizontal Pod Autoscaler scales the pods in a deployment or replica set. It is implemented as a K8s API resource and a controller. The controller manager queries the resource utilization against the metrics specified in each HorizontalPodAutoscaler definition. It obtains the metrics from either the resource metrics API (for per-pod resource metrics), or the custom metrics API (for all other metrics).
  • Cluster Autoscaler scales the worker nodes available for pods to be placed. Cluster Autoscaler is the focus for this post.

Cluster Autoscaler is the default K8s component that can be used to perform pod scaling as well as scaling nodes in a cluster. It automatically increases the size of an Auto Scaling group so that pods have a place to run. And it attempts to remove idle nodes, that is, nodes with no running pods.

When a pod cannot be scheduled due to lack of available resources, Cluster Autoscaler determines that the cluster must scale up. Expander interfaces allow you to apply different pod placement strategies. Currently, the following strategies are supported:

  • Random – Randomly select an available node group.
  • Most Pods – Selects the node group that can schedule the largest number of pods. This can be used to balance load across groups of nodes.
  • Least Waste – Commonly referred to as ‘bin packing.’ It selects the node group with the least idle CPU or memory after placement. This helps reduce the total node footprint, and it is the strategy used in this post.

Although Cluster Autoscaler is the de facto standard for automatic scaling in K8s, it is not part of the main release. Deploy it like any other pod, in the kube-system namespace alongside the other management pods. By default, those management pods prevent the cluster from scaling down; override this behavior by passing in the --skip-nodes-with-system-pods=false flag.
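For reference, the relevant flags in the Cluster Autoscaler container command look roughly like the following excerpt. The bracketed node group names are the same placeholders you substitute during deployment later in this post, and the min:max bounds are examples:

command:
  - ./cluster-autoscaler
  - --expander=least-waste
  - --skip-nodes-with-system-pods=false
  - --nodes=0:5:[Spot-NodeGroup1-Name]
  - --nodes=0:5:[Spot-NodeGroup2-Name]
  - --nodes=1:3:[OD-NodeGroup-Name]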

But how do you reliably control scale-down operations so that you do not remove the pods that you need? This is accomplished using a pod disruption budget (PDB). A PDB limits the number of replicated pods that can be down at a given time. Create a PDB to ensure that you always have at least one Cluster Autoscaler pod running.
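A minimal PDB for this purpose looks like the following sketch, assuming the Cluster Autoscaler pods carry an app=cluster-autoscaler label:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: cluster-autoscaler-pdb
  namespace: kube-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: cluster-autoscaler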

In summary, Cluster Autoscaler does not remove nodes under the following scenarios:

  • Pods with a restrictive PDB.
  • Pods running in the kube-system namespace that are not run on the node by default, or that do not have a PDB.
  • Pods not backed by a controller object (not created by a deployment, replica set, job, stateful-set, and so on).
  • Pods running with local storage.
  • Pods running that cannot be moved elsewhere due to various constraints (lack of resources, non-matching node selectors or affinity, matching anti-affinity, and so on).

Auto Scaling Group

With Spot Instances, each instance type in each Availability Zone is a pool with its own Spot price based on the available capacity. A recommended best practice when working with Spot Instances is to use a diversified fleet of instances with multiple instance types, as created by Spot Fleet or EC2 Fleet. These APIs aim to fulfill the specified TargetCapacity across the instance types to launch the number of Spot Instances and optionally, On-Demand Instances.

Unfortunately, Cluster Autoscaler does not support Spot Fleets at this time. You need a different strategy to provide diversification. Cluster Autoscaler for AWS provides integration with Auto Scaling groups. It enables users to choose from four different options of deployment:

  • One Auto Scaling group
  • Multiple Auto Scaling groups
  • Auto-Discovery
  • Master Node setup

For this post, you use the Multi-ASG deployment option. For Cluster Autoscaler and other cluster administration and management pods that run on EKS worker nodes, create a small Auto Scaling group using On-Demand Instances. This ensures that the health of the cluster is not impacted by Spot interruptions.

In K8s, label selectors are used to control where pods are placed. Use the K8s node label selector to place the appropriate pods on Spot or On-Demand Instances.
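For example, a pod that should run on Spot capacity can declare a nodeSelector matching the label applied by the CloudFormation template in this post:

nodeSelector:
  lifecycle: Ec2Spot

Management pods that must not be interrupted, such as Cluster Autoscaler itself, would instead select the label value applied to the On-Demand node group.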

Interrupt handler

The last component to consider handles how the cluster responds to the interruption of a Spot Instance. The workflow can be summarized as:

  • Identify that a Spot Instance is being reclaimed.
  • Use the 2-minute notification window to gracefully prepare the node for termination.
  • Taint the node and cordon it off to prevent new pods from being placed.
  • Drain connections on the running pods.
  • To maintain desired capacity, replace the pods on remaining nodes.

Spot interruptions are reported in two ways: through an item in the EC2 instance metadata, and through an Amazon CloudWatch Events event.

For this post, you use a K8s DaemonSet, which means running one pod per node. The pod periodically polls the EC2 metadata service for a Spot termination notice. If a termination notice is received (HTTP status 200), the handler gracefully drains the node so that pods can stop and restart on other nodes before the 2-minute grace period expires. This approach is based on an existing project at the kube-spot-termination-notice-handler GitHub repo.
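The core of that poll can be sketched in a few lines of shell; here, NODE_NAME is assumed to be injected into the pod through the Kubernetes downward API:

# Poll the instance metadata service; HTTP 200 means a termination notice is pending
while true; do
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" \
    http://169.254.169.254/latest/meta-data/spot/termination-time)
  if [ "$STATUS" = "200" ]; then
    # Cordon and drain this node so pods are rescheduled on the remaining nodes
    kubectl drain "$NODE_NAME" --ignore-daemonsets --force --delete-local-data
    break
  fi
  sleep 5
done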

Walkthrough

Here’s the suggested workflow for this solution:

  1. Provision the worker nodes with EC2 instances using CloudFormation templates.
  2. Deploy the K8s Cluster Autoscaler pods as a DaemonSet, with a PDB.
  3. Deploy the Spot Instance interrupt handler pods as a DaemonSet.
  4. Deploy the sample application.

Prerequisites

You should have the following resources or configurations before starting this walkthrough:

  • An EKS cluster master endpoint
  • An EKS service role ARN
  • Subnet IDs and the control plane security group values
  • EKS master cluster certificates
  • Configuration of kubectl against the master EKS endpoint

For more information, see Amazon EKS – Now Generally Available and Deploy a Kubernetes Application with Amazon Elastic Container Service for Kubernetes.

When you describe the EKS cluster, you get a response like the following sample output:

    "cluster": {
        "name": " DemoSpotClusterScale",
        "arn": "arn:aws:eks:us-west-2: 0123456789012:cluster/ DemoSpotClusterScale",
        "createdAt": 1528317531.751,
        "version": "1.10",
        "endpoint": "https://B960845ED5E21A3439ABB5E12F09CE88.sk1.us-west-2.eks.amazonaws.com",
        "roleArn": "arn:aws:iam::0123456789012:role/eksServiceRoleGA",
        "resourcesVpcConfig": {
            "subnetIds": [
                "subnet-3326464a",
                "subnet-c2b93b89",
                "subnet-13225b49"
            ],
            "securityGroupIds": [
                "sg-7fd0b70e"
            ],
            "vpcId": "vpc-c7c8c4be"
        },
        "status": "ACTIVE",
        "certificateAuthority": {
            "data": "<Your ca data here>"
        }
    }
}

I use the cluster name DemoSpotClusterScale throughout this post. Replace that with your cluster name in the following commands.

Get started

git clone https://github.com/awslabs/ec2-spot-labs.git

cd ec2-spot-labs/ec2-spot-eks-solution

Provision the worker nodes

Add worker nodes to your cluster so that you can deploy your applications. Worker nodes can be either Spot or On-Demand Instances. In this example, use Spot Instances for worker nodes.

You can use this customized AWS CloudFormation template to create the Auto Scaling groups described earlier. This template also labels the node with a lifecycle key value indicating whether it is an On-Demand or Spot Instance node.

The template deploys Auto Scaling groups dedicated to the following instance types:

  • Spot Instances, m4.large, across three Availability Zones.
  • Spot Instances, t2.medium, across three Availability Zones.
  • On-Demand Instances, across three Availability Zones.

Make sure that you apply the aws-auth-cm.yaml file with the appropriate NodeInstanceRole value, as provisioned by the CloudFormation template. Find this parameter on the Resources tab.

kubectl apply -f aws-auth-cm.yaml

If the kubectl get nodes command worked as documented, then you are ready to proceed to the next section.

Deploying Cluster Autoscaler and PDB

  1. Download the manifest file cluster-autoscaler-ds.yaml. There are six K8s resources that enable the cluster-autoscaler add-on to work in the EKS environment:
    • Service account
    • Cluster role
    • Role
    • Cluster role binding
    • Role binding
    • Two Auto Scaling groups created by the CloudFormation template for Spot and On-Demand Instances

    You also see the cluster-autoscaler command with configured parameters.

  2. Edit the cluster-autoscaler-ds.yaml file to replace the [OD-NodeGroup-Name], [Spot-NodeGroup1-Name], and [Spot-NodeGroup2-Name] placeholders in lines 141-143 with the corresponding Auto Scaling group names created by your worker node CloudFormation template. Then deploy the cluster-autoscaler-ds.yaml manifest:
    $ kubectl create -f cluster-autoscaler/cluster-autoscaler-ds.yaml

  3. Monitor the deployment:
    $ kubectl logs cluster-autoscaler-<podgeneratedID> --namespace=kube-system

  4. Download and deploy the Cluster Autoscaler PDB:
    $ kubectl create -f cluster-autoscaler/cluster-autoscaler-pdb.yaml

Deploy the Spot Instance interrupt handler

Each K8s EC2 node being launched must have the lifecycle=Ec2Spot value for --node-labels, as in the following example. This line is an excerpt from the CloudFormation template:

"sed -i s,MAX_PODS,", !Join [ "", [ "'", { "Fn::FindInMap": [ MaxPodsPerNode, { Ref: SpotNode2InstanceType }, MaxPods ] }, " --node-labels ", "lifecycle=Ec2Spot", "'" ] ], ",g /etc/systemd/system/kubelet.service", "\n",

The Docker image contains the instance metadata poll script, as shown in entrypoint.sh. Publish this image to your repository; I used an Amazon ECR repository. A sample image is available on Docker Hub.

Deploy the Spot interrupt handler pod using the spec below. This sets up the DaemonSet only on the instances that have a K8s label of lifecycle=Ec2Spot.

kubectl apply -f spot-termination-handler/deploy-k8-pod/spot-interrupt-handler.yaml

When the Spot Instance is interrupted, this pod catches the interruption and vacates the pods.

Deploy the sample application and test out scaling up & down

Deploy a sample application with three replicas. Create a new manifest file named greeter-sample.yaml, or use the copy at sample/greeter-sample.yaml in the repository you cloned earlier.

You are using node affinity to prefer deployment on Spot Instances. If the Ec2Spot label is unavailable, the manifest allows the application to run elsewhere.
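The manifest looks like the following sketch; the container image here is a stand-in, so prefer the version from the repository:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeter-sample
spec:
  replicas: 3
  selector:
    matchLabels:
      app: greeter-sample
  template:
    metadata:
      labels:
        app: greeter-sample
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: lifecycle
                operator: In
                values:
                - Ec2Spot
      containers:
      - name: greeter-sample
        image: nginx:latest   # stand-in image; the repository manifest specifies the real one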

$ kubectl create -f sample/greeter-sample.yaml

Scale up, and watch Cluster Autoscaler manage the Auto Scaling groups. Verify that Cluster Autoscaler is working by scaling up the sample service beyond the current limits of the cluster.

$ kubectl scale --replicas=50 deployment/greeter-sample

Check the AWS Management Console to confirm that the Auto Scaling groups are scaling up to meet demand. This may take a few minutes. You can also follow along with the pod deployment from the command line. You should see the pods transition from pending to running as nodes are scaled up.

$ kubectl get pods -o wide --watch

Scale down, and watch Cluster Autoscaler manage the Auto Scaling groups:

$ kubectl scale --replicas=1 deployment/greeter-sample

Check the K8s logs to watch the terminations occur:

$ kubectl logs cluster-autoscaler-<podgeneratedID> --namespace=kube-system

Conclusion

In this post, I showed you how to use Spot Instances with K8s workloads, by provisioning, scaling, and managing terminations effectively in EKS clusters to leverage both cost and scale optimizations. Happy coding!

Running GPU-Accelerated Kubernetes Workloads on P3 and P2 EC2 Instances with Amazon EKS

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/running-gpu-accelerated-kubernetes-workloads-on-p3-and-p2-ec2-instances-with-amazon-eks/

This post contributed by Scott Malkie, AWS Solutions Architect

Amazon EC2 P3 and P2 instances, featuring NVIDIA GPUs, power some of the most computationally advanced workloads today, including machine learning (ML), high performance computing (HPC), financial analytics, and video transcoding. Now Amazon Elastic Container Service for Kubernetes (Amazon EKS) supports P3 and P2 instances, making it easy to deploy, manage, and scale GPU-based containerized applications.

This blog post walks through how to start up GPU-powered worker nodes and connect them to an existing Amazon EKS cluster. Then it demonstrates an example application to show how containers can take advantage of all that GPU power!

Prerequisites

You need an existing Amazon EKS cluster, kubectl, and the aws-iam-authenticator set up according to Getting Started with Amazon EKS.

Two steps are required to enable GPU workloads. First, join Amazon EC2 P3 or P2 GPU compute instances as worker nodes to the Kubernetes cluster. Second, configure pods to enable container-level access to the node’s GPUs.

Spinning up Amazon EC2 GPU instances and joining them to an existing Amazon EKS Cluster

To start the worker nodes, use the standard AWS CloudFormation template for Amazon EKS worker nodes, specifying the AMI ID of the new Amazon EKS-optimized AMI for GPU workloads. This AMI is available on AWS Marketplace.

Subscribe to the AMI and then launch it using the AWS CloudFormation template. The template takes care of networking, configuring kubelets, and placing your worker nodes into an Auto Scaling group, as shown in the following image.

This template creates an Auto Scaling group with up to two p3.8xlarge Amazon EC2 GPU instances. Powered by up to eight NVIDIA Tesla V100 GPUs, these instances deliver up to 1 petaflop of mixed-precision performance per instance to significantly accelerate ML and HPC applications. Amazon EC2 P3 instances have been proven to reduce ML training times from days to hours and to reduce time-to-results for HPC.

After the AWS CloudFormation template completes, the Outputs view contains the NodeInstanceRole parameter, as shown in the following image.

NodeInstanceRole needs to be passed in to the AWS Authenticator ConfigMap, as documented in the AWS EKS Getting Started Guide. To do so, edit the ConfigMap template and run the command kubectl apply -f aws-auth-cm.yaml in your terminal to apply the ConfigMap. You can then run kubectl get nodes --watch to watch the two Amazon EC2 GPU instances join the cluster, as shown in the following image.
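
For reference, the aws-auth ConfigMap generally takes the following shape, with the NodeInstanceRole ARN from the stack outputs substituted in (a sketch based on the template in the Getting Started Guide, not a verbatim copy):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of the NodeInstanceRole>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes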

Configuring Kubernetes pods to access GPU resources

First, use the following command to apply the NVIDIA Kubernetes device plugin as a daemon set on the cluster.

kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.10/nvidia-device-plugin.yml

This command produces the following output:

Once the daemon set is running on the GPU-powered worker nodes, use the following command to verify that each node has allocatable GPUs.

kubectl get nodes \
"-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"

The following output shows that each node has four GPUs available:

Next, modify any Kubernetes pod manifests, such as the following one, to take advantage of these GPUs. In general, adding a resources configuration (resources: limits:) with an nvidia.com/gpu entry to a pod manifest gives its containers access to the requested number of GPUs. A pod can request up to as many GPUs as are available on the node that it's running on.

apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  containers:
  - name: container-name
    ...
    resources:
      limits:
        nvidia.com/gpu: 4

As a more specific example, the following sample manifest displays the results of the nvidia-smi binary, which shows diagnostic information about all GPUs visible to the container.

apiVersion: v1
kind: Pod
metadata:
  name: nvidia-smi
spec:
  restartPolicy: OnFailure
  containers:
  - name: nvidia-smi
    image: nvidia/cuda:latest
    args:
    - "nvidia-smi"
    resources:
      limits:
        nvidia.com/gpu: 4

Download this manifest as nvidia-smi-pod.yaml and launch it with kubectl apply -f nvidia-smi-pod.yaml.

To confirm successful nvidia-smi execution, use the following command to examine the log.

kubectl logs nvidia-smi

The above commands produce the following output:

Existing limitations

  • GPUs cannot be overprovisioned – containers and pods cannot share GPUs
  • The maximum number of GPUs that you can schedule to a pod is capped by the number of GPUs available to that pod’s node
  • Depending on your account, you might have Amazon EC2 service limits on how many and which type of Amazon EC2 GPU compute instances you can launch simultaneously

For more information about GPU support in Kubernetes, see the Kubernetes documentation. For more information about using Amazon EKS, see the Amazon EKS documentation. Guidance on setting up and running Amazon EKS can be found in the AWS Workshop for Kubernetes on GitHub.

Please leave any comments about this post and share what you’re working on. I can’t wait to see what you build with GPU-powered workloads on Amazon EKS!

Amazon EKS – Now Generally Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-eks-now-generally-available/

We announced Amazon Elastic Container Service for Kubernetes and invited customers to take a look at a preview during re:Invent 2017. Today I am pleased to be able to let you know that Amazon EKS is available for use in production form. It has been certified as Kubernetes conformant, and is ready to run your existing Kubernetes workloads.

Based on the most recent data from the Cloud Native Computing Foundation, we know that AWS is the leading environment for Kubernetes, with 57% of all companies who run Kubernetes choosing to do so on AWS. Customers tell us that Kubernetes is core to their IT strategy, and they are already running hundreds of millions of containers on AWS every week. Amazon EKS simplifies the process of building, securing, operating, and maintaining Kubernetes clusters, and brings the benefits of container-based computing to organizations that want to focus on building applications instead of setting up a Kubernetes cluster from scratch.

AWS Inside
Amazon EKS takes advantage of the fact that it is running in the AWS Cloud, making great use of many AWS services and features, while ensuring that everything you already know about Kubernetes remains applicable and helpful. Here’s an overview:

Multi-AZ – The Kubernetes control plane (the API server and the etcd database) runs in high-availability fashion across three AWS Availability Zones. Master nodes are monitored and replaced if they fail, and are also patched and updated automatically.

IAM Integration – Amazon EKS uses the Heptio Authenticator for authentication. You can make use of IAM roles and avoid the pain that comes with managing yet another set of credentials.

Load Balancer Support – You can route traffic to your worker nodes using the AWS Network Load Balancer, the AWS Application Load Balancer, or the original (classic) Elastic Load Balancer.

EBS – Kubernetes PersistentVolumes (used for cluster storage) are implemented as Amazon Elastic Block Store (EBS) volumes.

Route 53 – The External DNS project allows services in Kubernetes clusters to be accessed via Route 53 DNS records. This simplifies service discovery and supports load balancing.

Auto Scaling – Your clusters can make use of Auto Scaling, growing and shrinking in response to changes in load.

Container Interface – The Container Network Interface for Kubernetes uses Elastic Network Interfaces to provide static IP addresses for Kubernetes Pods.

For a more detailed look at these features, read about Amazon Elastic Container Service for Kubernetes.

Amazon EKS is built around a shared-responsibility model; the control plane nodes are managed by AWS and you run the worker nodes. This gives you high availability and simplifies the process of moving existing workloads to EKS. Here’s a very high-level overview:


Creating an Amazon EKS Cluster
To create a cluster, I provision the control plane, provision and connect the worker cluster, and launch my containers. In the example below I will create a new VPC for my worker cluster, but I can also use an existing one, as long as the desired subnets are tagged with the name of my Kubernetes cluster.

Following the directions in the Amazon EKS Getting Started Guide, I begin by creating an IAM role. Kubernetes assumes this role and uses it to create AWS resources such as Elastic Load Balancers. Once created, this role can be used for all of my clusters. I simply create a CloudFormation stack using the template referred to in the Getting Started Guide:

I acknowledge that the stack will create a role, and click Create to proceed:

The role is created in seconds, and the ARN is shown in the stack's Outputs tab (I'll need it later):

Next, I create a VPC (Virtual Private Cloud) using the sample template from the Getting Started Guide, with the following parameters:

The template creates a VPC that has two subnets, along with all of the necessary route tables, gateways, and security groups:

As is the case with the ARN, I will need the ID of the security group later.

Next, I download kubectl and set it up to use the Heptio Authenticator. The authenticator allows kubectl to make use of IAM authentication when it accesses my Kubernetes clusters. Instructions for downloading and setup are in the Getting Started Guide and I follow them as directed.

To wrap up the setup process, I ensure that I am running the latest version of the AWS Command Line Interface (CLI); if I were running an older version, the eks command would not be available:

With my IAM role, my VPC, and my tooling all in place, I am ready to create my first Amazon EKS cluster!

I log in to the EKS Console using an IAM user that has administrative privileges (root credentials cannot be used due to the way that the Heptio Authenticator works) and click Create cluster:

I enter a name for my cluster (which must match the one that I entered when I created the VPC, because Kubernetes relies on tagging of subnets), along with the subnet IDs and the security group ID, both for the VPC, and click Create:

My control plane cluster starts out in CREATING status, and transitions to ACTIVE in 10 minutes or less:

Now I need to configure kubectl so that it can access my cluster. Before I can do this, I need to use the CLI to retrieve the certificate authority data:

$ aws eks describe-cluster --region us-west-2 --cluster-name jeff1 --query cluster.certificateAuthority.data

This command returns a long string of data that I’ll need in a minute.
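
If you would rather not copy the string by hand, you can capture it into a shell variable instead (an optional convenience, not a step from the guide):

$ CERT_DATA=$(aws eks describe-cluster --region us-west-2 --cluster-name jeff1 \
    --query cluster.certificateAuthority.data --output text)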

I also retrieve the cluster endpoint from the console:

I make sure that I am in my home directory, create sub-directory .kube, and create file config-jeff1 in it. Then I open config-jeff1 in my editor, copy the templated config file from the Getting Started Guide and finalize the cluster endpoint, certificate, and cluster name. My file looks like this:

apiVersion: v1
clusters:
- cluster:
    server: https://FDA1964D96C9EEF2B76684C103F31C67.sk1.us-west-2.eks.amazonaws.com
    certificate-authority-data: "...."
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"

Before I test kubectl, I need to ensure that my CLI is configured to use the same IAM user that I used when I logged in to the console to create the cluster:
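
A quick way to confirm which identity the CLI is using (an optional sanity check):

$ aws sts get-caller-identity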

And now I can run a quick test to verify that everything is working as expected:
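
A typical smoke test is to list the cluster's services, which should return the default kubernetes service without an authentication error:

$ kubectl get svc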

At this point I have set up my master VPC and my Kubernetes control plane. I’m ready to create some worker nodes (EC2 instances). Once again, this is done using a CloudFormation template:

The stack is created in a couple of minutes and sets up IAM roles, security groups, and auto scaling:

Now I need to set up a ConfigMap so that the worker nodes know how to join the cluster. I download the map, add the ARN of the NodeInstanceRole from the stack, and apply the configuration:
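
Assuming the downloaded file is named aws-auth-cm.yaml, as in the Getting Started Guide, the apply step is:

$ kubectl apply -f aws-auth-cm.yaml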

Then I check and see that my nodes are ready:
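
Watching for the nodes to reach Ready status:

$ kubectl get nodes --watch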

Running the Guest Book Sample
My Kubernetes cluster is all set and I can use the Guest Book application to test it out. I create the Kubernetes replication controllers and services:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.0/examples/guestbook-go/redis-master-controller.json
replicationcontroller "redis-master" created
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.0/examples/guestbook-go/redis-master-service.json
service "redis-master" created
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.0/examples/guestbook-go/redis-slave-controller.json
replicationcontroller "redis-slave" created
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.0/examples/guestbook-go/redis-slave-service.json
service "redis-slave" created
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.0/examples/guestbook-go/guestbook-controller.json
replicationcontroller "guestbook" created
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.0/examples/guestbook-go/guestbook-service.json
service "guestbook" created

I list the running services and capture the external IP address & port:
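
The lookup can be done like this (the external address and port are assigned when the service is created, so your values will differ):

$ kubectl get services -o wide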

and visit the address in my web browser:

Things to Know
We make upstream contributions to the Kubernetes repo and to projects such as the CNI Plugin, the Heptio AWS Authenticator, and Virtual Kubelet. We are currently looking for Systems Development Engineers, DevOps Engineers, Product Managers, and Solution Architects with Kubernetes experience; check out the full list of open positions to learn more.

Amazon EKS is available today in the US East (N. Virginia) and US West (Oregon) Regions and will be expanding to others very soon. We have a detailed roadmap and plan to crank out plenty of additional features this year.

You pay $0.20 per hour for the EKS Control Plane, and usual EC2, EBS, and Load Balancing prices for resources that run in your account. See the EKS Pricing page for more information.

Jeff;