All posts by Martin Beeby

AWS ECS Cluster Auto Scaling is Now Generally Available

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/aws-ecs-cluster-auto-scaling-is-now-generally-available/

Today, we have launched AWS ECS Cluster Auto Scaling. This new capability improves your cluster scaling experience by increasing the speed and reliability of cluster scale-out, giving you control over the amount of spare capacity maintained in your cluster, and automatically managing instance termination on cluster scale-in.

To enable ECS Cluster Auto Scaling, you will need to create a new ECS resource type called a Capacity Provider. A Capacity Provider can be associated with an EC2 Auto Scaling Group (ASG). When you associate an ECS Capacity Provider with an ASG and add the Capacity Provider to an ECS cluster, the cluster can now scale your ASG automatically by using two new features of ECS:

  1. Managed scaling, with an automatically-created scaling policy on your ASG, and a new scaling metric (Capacity Provider Reservation) that the scaling policy uses; and
  2. Managed instance termination protection, which enables container-aware termination of instances in the ASG when scale-in happens.

These new features give customers greater control over when and how Amazon ECS clusters scale in and scale out.

Capacity Provider Reservation
The new metric, called capacity provider reservation, measures the total percentage of cluster resources needed by all ECS workloads in the cluster, including existing workloads, new workloads, and changes in workload size. This metric enables the scaling policy to scale out more quickly and reliably than it could when using CPU or memory reservation metrics. Customers can also use this metric to reserve spare capacity in their clusters. Reserving spare capacity allows customers to run more containers immediately when needed, without waiting for new instances to start.
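
To put numbers on it (my illustration, based on how the metric is defined in the ECS cluster auto scaling deep-dive material rather than this announcement): CapacityProviderReservation = M / N × 100, where N is the number of instances currently running and M is the number the cluster needs. If 2 instances are running but the workloads need 3, the metric reads 150, above the target of 100, so the policy scales out. Setting the provider's targetCapacity below 100, say to 80, makes the policy hold roughly 20% of the cluster free as spare capacity.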

Managed Instance Termination Protection
With instance termination protection, ECS controls which instances the scaling policy is allowed to terminate on scale-in, to minimize disruptions of running containers. These improvements help customers achieve lower operational costs and higher availability of their container workloads running on ECS.

How This Helps Customers
Customers running scalable container workloads on ECS often use metric-based scaling policies to automatically scale their ECS clusters. These scaling policies use generic metrics such as average cluster CPU and memory reservation percentages to determine when the policy should add or remove cluster instances.

Clusters running a single workload, or workloads that scale out slowly, often work well with such policies. However, customers running multiple workloads in the same cluster, or workloads that scale out rapidly, are more likely to experience problems with cluster scaling. Ideally, increases in workload size that cannot be accommodated by the current cluster should trigger the policy to scale the cluster out to a larger size.

Because the existing metrics are not container-specific and account only for resources already in use, this may happen slowly or be unreliable. Furthermore, because the scaling policy does not know where containers are running in the cluster, it can unnecessarily terminate containers when scaling in. These issues can reduce the availability of container workloads. Mitigations such as over-provisioning, custom tooling, or manual intervention often impose high operational costs.

Enough Talk, Let’s Scale
To understand these new features more clearly, I think it’s helpful to work through an example.

Amazon ECS Cluster Auto Scaling can be set up and configured using the AWS Management Console, AWS CLI, or Amazon ECS API. I’m going to open up my terminal and create a cluster.

Firstly, I create two files. The first file is called demo-launchconfig.json and defines the instance configuration for the Amazon Elastic Compute Cloud (EC2) instances that will make up my auto scaling group.

{
    "LaunchConfigurationName": "demo-launchconfig",
    "ImageId": "ami-01f07b3fa86406c96",
    "SecurityGroups": [
        "sg-0fa5be8c3749f3aa0"
    ],
    "InstanceType": "t2.micro",
    "BlockDeviceMappings": [
        {
            "DeviceName": "/dev/xvdcz",
            "Ebs": {
                "VolumeSize": 22,
                "VolumeType": "gp2",
                "DeleteOnTermination": true,
                "Encrypted": true
                }
        }
    ],
    "InstanceMonitoring": {
        "Enabled": false
    },
    "IamInstanceProfile": "arn:aws:iam::365489315573:role/ecsInstanceRole",
    "AssociatePublicIpAddress": true
}

The second file is demo-userdata.txt, and it contains the user data that will be added to each EC2 instance. The ECS_CLUSTER name included in the file must be the same as the name of the cluster we are going to create. In my case, the name is demo-news-blog-scale.

#!/bin/bash
echo ECS_CLUSTER=demo-news-blog-scale >> /etc/ecs/ecs.config

Using the create-launch-configuration command, I pass the two files I created as inputs. This creates the launch configuration that I will use in my auto scaling group.

aws autoscaling create-launch-configuration --cli-input-json file://demo-launchconfig.json --user-data file://demo-userdata.txt

Next, I create a file called demo-asgconfig.json and define my requirements.

{
    "LaunchConfigurationName": "demo-launchconfig",
    "MinSize": 0,
    "MaxSize": 100,
    "DesiredCapacity": 0,
    "DefaultCooldown": 300,
    "AvailabilityZones": [
        "ap-southeast-1c"
    ],
    "HealthCheckType": "EC2",
    "HealthCheckGracePeriod": 300,
    "VPCZoneIdentifier": "subnet-abcd1234",
    "TerminationPolicies": [
        "DEFAULT"
    ],
    "NewInstancesProtectedFromScaleIn": true,
    "ServiceLinkedRoleARN": "arn:aws:iam::111122223333:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
}

I then use the create-auto-scaling-group command to create an auto scaling group called demo-asg using the above file as an input.

aws autoscaling create-auto-scaling-group --auto-scaling-group-name demo-asg --cli-input-json file://demo-asgconfig.json
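
As a quick aside (my addition, not part of the original walkthrough): the capacity provider created in the next step needs this auto scaling group's ARN, which can be retrieved with:

aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names demo-asg --query "AutoScalingGroups[0].AutoScalingGroupARN"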

I am now ready to create a capacity provider. I create a file called demo-capacityprovider.json. Importantly, I set the managedTerminationProtection property to ENABLED.

{
    "name": "demo-capacityprovider",
    "autoScalingGroupProvider": {
        "autoScalingGroupArn": "arn:aws:autoscaling:ap-southeast-1:365489315573:autoScalingGroup:e9c2f0c4-9a4c-428e-b81e-b22411a52954:autoScalingGroupName/demo-asg",
        "managedScaling": {
            "status": "ENABLED",
            "targetCapacity": 100,
            "minimumScalingStepSize": 1,
            "maximumScalingStepSize": 100
        },
        "managedTerminationProtection": "ENABLED"
    }
}

I then use the new create-capacity-provider command to create a provider using the file as an input.

aws ecs create-capacity-provider --cli-input-json file://demo-capacityprovider.json

Now that all the components have been created, I can finally create a cluster. I add the capacity provider and set demo-capacityprovider as the default capacity provider for the cluster.

aws ecs create-cluster --cluster-name demo-news-blog-scale --capacity-providers demo-capacityprovider --default-capacity-provider-strategy capacityProvider=demo-capacityprovider,weight=1

I now need to wait until the cluster has moved into the active state. I use the following command to get details about the cluster.

aws ecs describe-clusters --clusters demo-news-blog-scale --include ATTACHMENTS
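
If you just want the status rather than the full JSON, a --query filter (my variant of the same command) does the trick:

aws ecs describe-clusters --clusters demo-news-blog-scale --query "clusters[0].status"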

Now that my cluster is set up, I can register some tasks. Firstly, I will need to create a task definition. Below is a file I have created called demo-sleep-taskdef.json. All this definition does is define a container that sleeps forever.

{
    "family": "demo-sleep-taskdef",
    "containerDefinitions": [
        {
            "name": "sleep",
            "image": "amazonlinux:2",
            "memory": 20,
            "essential": true,
            "command": [
                "sh",
                "-c",
                "sleep infinity"
            ]
        }
    ],
    "requiresCompatibilities": [
        "EC2"
    ]
}

I then register the task definition using the register-task-definition command.

aws ecs register-task-definition --cli-input-json file://demo-sleep-taskdef.json

Finally, I can create my tasks. In this case, I have created 5 tasks based on the demo-sleep-taskdef:1 definition that I just registered.

aws ecs run-task --cluster demo-news-blog-scale --count 5 --task-definition demo-sleep-taskdef:1

Because no instances are yet available to run the tasks, the tasks go into a provisioning state, meaning they are waiting for capacity to become available. The capacity provider I configured will now scale out the auto scaling group so that instances start up and join the cluster – at which point the tasks get placed on the instances. This gives a true “scale from zero” capability, which did not previously exist.
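
To watch this happen from the terminal (my addition), you can poll the tasks' lastStatus; it moves from PROVISIONING through PENDING to RUNNING as instances join the cluster:

aws ecs describe-tasks --cluster demo-news-blog-scale \
    --tasks $(aws ecs list-tasks --cluster demo-news-blog-scale --query "taskArns[]" --output text) \
    --query "tasks[].lastStatus"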

Things To Know
AWS ECS Cluster Auto Scaling is now available in all regions where Amazon ECS and AWS Auto Scaling are available – check the region table for the latest list.

Happy Scaling!

— Martin


AWS Fargate Spot now Generally Available

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/aws-fargate-spot-now-generally-available/

Today at AWS re:Invent 2019 we announced AWS Fargate Spot. Fargate Spot is a new capability on AWS Fargate that can run interruption-tolerant Amazon ECS tasks at up to a 70% discount off the Fargate price.

If you are familiar with EC2 Spot Instances, the concept is the same. We use spare capacity in the AWS cloud to run your tasks. When the capacity for Fargate Spot is available, you will be able to launch tasks based on your specified request. When AWS needs the capacity back, tasks running on Fargate Spot will be interrupted with two minutes of notification. If the capacity for Fargate Spot stops being available, Fargate will scale down tasks running on Fargate Spot while maintaining any regular tasks you are running.

As your tasks could be interrupted, you should not run tasks on Fargate Spot that cannot tolerate interruptions. However, for your fault-tolerant workloads, this feature enables you to optimize your costs.
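
As I understand it, the interruption is delivered like any other ECS task stop: the container receives a SIGTERM, then a SIGKILL once the stop timeout expires. A minimal entrypoint sketch that uses the notice to drain cleanly (drain_work and process_next_item are hypothetical placeholders for your own logic):

#!/bin/sh
# Trap SIGTERM so the interruption notice can be used for cleanup.
# drain_work is hypothetical; replace it with your own drain/checkpoint logic.
trap 'echo "Interruption received, draining..."; drain_work; exit 0' TERM

while true; do
    process_next_item   # hypothetical unit of interruption-tolerant work
done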

The service is an obvious fit for parallelizable workloads like image rendering, Monte Carlo simulations, and genomic processing. However, customers can also use Fargate Spot for tasks that run as a part of ECS services such as websites and APIs which require high availability.

When configuring your Service Auto Scaling policy, you can specify the minimum number of regular tasks that should run at all times and then add tasks running on Fargate Spot to improve service performance in a cost-efficient way. When capacity for Fargate Spot is available, the scheduler will launch tasks to meet your request. If capacity for Fargate Spot stops being available, Fargate Spot will scale down, while maintaining the minimum number of regular tasks to ensure the application’s availability.
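
Although I use the console below, the same mix can be expressed when creating a service from the CLI. A sketch (the cluster name, service name, task definition, and network settings are placeholders of mine): a base of two tasks always on regular Fargate, with the remainder split 1:3 between FARGATE and FARGATE_SPOT:

aws ecs create-service --cluster demo-fargate-cluster --service-name demo-service \
    --task-definition demo-taskdef --desired-count 10 \
    --capacity-provider-strategy capacityProvider=FARGATE,base=2,weight=1 capacityProvider=FARGATE_SPOT,weight=3 \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-abcd1234],securityGroups=[sg-abcd1234],assignPublicIp=ENABLED}"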

So let us take a look at how we can get started using AWS Fargate Spot.

First, I create a new Fargate cluster inside the ECS console: I choose Networking only, and I follow the wizard to complete the process.

Once my cluster is created, I need to add a capacity provider. By default, my cluster has two capacity providers: FARGATE and FARGATE_SPOT.


To use the FARGATE_SPOT capacity provider, I update my cluster: I press the Update Cluster button, select FARGATE_SPOT as the default capacity provider, and click Update.

I then run a task in the cluster in the usual way. I select my task definition and enter that I want 10 tasks. Then, after configuring VPC and security groups, I click Run Task.

Now the 10 tasks run, but rather than using regular Fargate infrastructure, they use Fargate Spot. If I peek inside one of the tasks, I can verify that the task is indeed using the FARGATE_SPOT capacity provider.

So that’s how you get started with Fargate Spot; you can try it yourself right now.

A few weeks ago, we saw the release of Compute Savings Plans (of which Fargate is a part) and now with Fargate Spot, customers can save a great deal of money and run many different types of applications; there has never been a better time to be using Fargate.

AWS Fargate Spot is available in all regions where AWS Fargate is available, so you can try it yourself today.

— Martin

AWS Transit Gateway Adds Multicast and Inter-regional Peering

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/aws-transit-gateway-adds-multicast-and-inter-regional-peering/

AWS Transit Gateway is a service that enables customers to connect thousands of Amazon Virtual Private Clouds (VPCs) and their on-premises networks using a single gateway. Customers have been enjoying the reduction in operational costs and the overall simplicity that this service brings. Today, things got even better with the release of two new features: AWS Transit Gateway Inter-region peering and AWS Transit Gateway Multicast.

Inter-region peering
As customers expand workloads on AWS, they need to scale their networks across multiple accounts and VPCs. Customers can connect pairs of VPCs using peering or use PrivateLink to expose private service endpoints from one VPC to another, but managing this at scale is complicated. AWS Transit Gateway Inter-region peering addresses this and makes it easy to create secure and private global networks across multiple AWS regions. Using Inter-region peering, customers can create centralised routing policies between the different networks in their organisation, simplifying management and reducing costs.

All the traffic that flows through Inter-region peering is anonymized and encrypted and is carried by the AWS backbone, ensuring it always takes the optimal path between regions in the most secure way.

Multicast
AWS Transit Gateway Multicast makes it easy for customers to build multicast applications in the cloud and distribute data across thousands of connected Virtual Private Cloud networks.

Multicast delivers a single stream of data to many users simultaneously. It is a preferred protocol for streaming multimedia content and subscription data, such as news articles and stock quotes, to a group of subscribers.

AWS is the first cloud provider to offer a native multicast solution which will enable customers to migrate their applications to the cloud and take advantage of the elasticity and scalability that AWS provides.

With this release, we are introducing multicast domains in Transit Gateways. Similar to routing domains, multicast domains allow you to segment your multicast network into different domains and make the Transit Gateway act as multiple multicast routers.

Available Now
These two new features are ready and waiting for you to try today. Inter-region peering is available in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and EU (Frankfurt) and Multicast is available in US East (N. Virginia).

— Martin

Amazon EKS on AWS Fargate Now Generally Available

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/amazon-eks-on-aws-fargate-now-generally-available/

Starting today, you can start using Amazon Elastic Kubernetes Service to run Kubernetes pods on AWS Fargate. EKS and Fargate make it straightforward to run Kubernetes-based applications on AWS by removing the need to provision and manage infrastructure for pods.

With AWS Fargate, customers don’t need to be experts in Kubernetes operations to run a cost-optimized and highly-available cluster. Fargate eliminates the need for customers to create or manage EC2 instances for their Amazon EKS clusters.

Customers no longer have to worry about patching, scaling, or securing a cluster of EC2 instances to run Kubernetes applications in the cloud. Using Fargate, customers define and pay for resources at the pod level. This makes it easy to right-size resource utilization for each application and allows customers to clearly see the cost of each pod.

I’m now going to use the rest of this blog to explore this new feature further and deploy a simple Kubernetes-based application using Amazon EKS on Fargate.

Let’s Build a Cluster
The simplest way to get a cluster set up is to use eksctl, the official CLI tool for EKS. The command below creates a cluster called demo-newsblog with no worker nodes.

eksctl create cluster --name demo-newsblog --region eu-west-1 --fargate

This single command did quite a lot under the hood. Not only did it create a cluster for me, but, amongst other things, it also created a Fargate profile.

A Fargate profile lets me specify which Kubernetes pods I want to run on Fargate and which subnets my pods run in, and provides the IAM execution role used by the Kubernetes agent to download container images to the pod and perform other actions on my behalf.

Understanding Fargate profiles is key to understanding how this feature works. So I am going to delete the Fargate profile that was automatically created for me and recreate it manually.

To create a Fargate profile, I head over to the Amazon Elastic Kubernetes Service console and choose the cluster demo-newsblog. On the details page, under Fargate profiles, I choose Add Fargate profile.

I then need to configure my new Fargate profile. For the name, I enter demo-default.

In the Pod execution role, only IAM roles with the eks-fargate-pods.amazonaws.com service principal are shown. The eksctl tool creates an IAM role called AmazonEKSFargatePodExecutionRole; the documentation shows how this role can be created from scratch.

In the Subnets section, by default, all subnets in my cluster’s VPC are selected. However, only private subnets are supported for Fargate pods, so I deselect the two public subnets.

When I click next, I am taken to the Pod selectors screen. Here it asks me to enter a namespace. I add default, meaning that I want any pods created in the default Kubernetes namespace to run on Fargate. It’s important to understand that I don’t have to modify my Kubernetes app to get the pods running on Fargate; I just need a Fargate profile. If a pod in my Kubernetes app matches the namespace defined in my profile, that pod will run on Fargate.

There is also a Match labels feature here, which I am not using. This allows you to specify the labels of the pods that you want to select, so you can get even more specific with which pods run on this profile.

Finally, I click Next and then Create. It takes a minute for the profile to create and become active.

In this demo, I also want everything to run on Fargate, including the CoreDNS pods that are part of Kubernetes. To get them running on Fargate, I will add a second Fargate profile for everything in the kube-system namespace. This time, to add a bit of variety to the demo, I will use the command line to create my profile.

Technically, I do not need to create a second profile for this. I could have added an additional namespace to the first profile, but this way, I get to explore an alternative way of creating a profile.
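
One wrinkle worth flagging (this comes from the Amazon EKS documentation rather than the original walkthrough): CoreDNS pods that were created before the profile existed carry an eks.amazonaws.com/compute-type: ec2 annotation, so they will not move to Fargate until the annotation is removed and the pods are recycled, roughly:

kubectl patch deployment coredns -n kube-system --type json \
    -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'

kubectl delete pods -n kube-system -l k8s-app=kube-dns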

First, I create the file below and save it as demo-kube-system-profile.json.

{
    "fargateProfileName": "demo-kube-system",
    "clusterName": "demo-news-blog",
    "podExecutionRoleArn": "arn:aws:iam::xxx:role/AmazonEKSFargatePodExecutionRole",
    "subnets": [
        "subnet-0968a124a4e4b0afe",
        "subnet-0723bbe802a360eb9"
    ],
    "selectors": [
        {
            "namespace": "kube-system"
        }
    ]
}

I then navigate to the folder that contains the file above and run the create-fargate-profile command in my terminal.

aws eks create-fargate-profile --cli-input-json file://demo-kube-system-profile.json
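
To confirm the profile has finished creating before deploying anything to it (my addition), I can poll its status:

aws eks describe-fargate-profile --cluster-name demo-newsblog \
    --fargate-profile-name demo-kube-system --query "fargateProfile.status"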

I am now ready to deploy a container to my cluster. To keep things simple, I deploy a single instance of nginx using the following kubectl command.

kubectl create deployment demo-app --image=nginx

I then check to see the state of my pods by running the get pods command.

kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
demo-app-6dbfc49497-67dxk   0/1     Pending   0          13s

If I run get nodes, I have three nodes (two for CoreDNS and one for nginx). These nodes represent the compute resources that have been instantiated for me to run my pods.

kubectl get nodes
NAME                                                   STATUS   ROLES    AGE     VERSION
fargate-ip-192-168-218-51.eu-west-1.compute.internal   Ready    <none>   4m45s   v1.14.8-eks
fargate-ip-192-168-221-91.eu-west-1.compute.internal   Ready    <none>   2m20s   v1.14.8-eks
fargate-ip-192-168-243-74.eu-west-1.compute.internal   Ready    <none>   4m40s   v1.14.8-eks

After a short time, I rerun the get pods command, and my demo-app now has a status of Running, meaning my container has been successfully deployed onto Fargate.

kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
demo-app-6dbfc49497-67dxk   1/1     Running   0          3m52s
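
To double-check that the pod is actually serving traffic (not part of the original walkthrough), a quick port-forward works:

kubectl port-forward deployment/demo-app 8080:80
# in a second terminal:
curl -s http://localhost:8080 | head -n 4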

Pricing and Limitations
With AWS Fargate, you pay only for the amount of vCPU and memory resources that your pod needs to run. This includes the resources the pod requests, plus a small amount of memory needed to run Kubernetes components alongside the pod. Pods running on Fargate follow the existing pricing model: vCPU and memory resources are calculated from the time your pod’s container images are pulled until the pod terminates, rounded up to the nearest second, and a minimum charge of 1 minute applies. Additionally, you pay the standard cost for each EKS cluster you run: $0.20 per hour.
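
As a rough worked example (rates vary by region, so I will keep it symbolic): a pod that requests 0.25 vCPU and 0.5 GB of memory and runs for 10 minutes is billed 0.25 × (10/60) × the per-vCPU-hour rate, plus 0.5 × (10/60) × the per-GB-hour rate, on top of the cluster’s $0.20 per hour.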

There are currently a few limitations that you should be aware of:

  • There is a maximum of 4 vCPU and 30 GB of memory per pod.
  • Currently there is no support for stateful workloads that require persistent volumes or file systems.
  • You cannot run DaemonSets, privileged pods, or pods that use HostNetwork or HostPort.
  • The only load balancer you can use is an Application Load Balancer.

Get Started Today
If you want to explore Amazon EKS on AWS Fargate yourself, you can try it now by heading on over to the EKS console in the following regions: US East (N. Virginia), US East (Ohio), Europe (Ireland), and Asia Pacific (Tokyo).

— Martin

New AWS Program to Help Future-proof Your End-of-Support Windows Server Applications

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/new-program-to-future-proof-windows-server-applications/

If you have been in business a while, you might find yourself in a situation where you have legacy Windows Server applications, critical to your business, that you can’t move to a newer, supported version of Windows Server. Customers give us many reasons why they can’t move these legacy applications: perhaps the application has dependencies on a specific version of Windows Server, or the customer has no expertise with the application; it may even be the case that the installation media or source code has been lost.

On January 14, 2020, support for Windows Server 2008 and 2008 R2 will end. Having an application that can run only on an unsupported version of Windows Server is problematic as you will no longer get free security patch updates, leaving you vulnerable to security and compliance risks. It is also difficult to move an application like this to the cloud without significant refactoring.

If you have legacy applications that run only on unsupported versions of Windows Server, then it’s often tempting to spend money on extended support. However, this is simply delaying the inevitable and customers tell us that they want a long-term solution which future-proofs their legacy applications.

We have a long-term solution
To help you with this, today, we are launching the AWS End-of-Support Migration Program (EMP) for Windows Server. This new program combines technology with expert guidance, to migrate your legacy applications running on outdated versions of Windows Server to newer, supported versions on AWS.

If you are facing Windows Server 2008 end-of-support, this program offers a unique solution and path forward that solves the issue for the long-term as opposed to just delaying the decision for another day. It is important to note, you don’t need to make a single code change in the legacy application and you also do not require the original installation media or source code.

In a moment, I will demonstrate how the technology portion of this program works. However, you should know that you will need to engage an AWS Partner or AWS Professional Services to actually conduct the migration; the product page lists the network of partners who you can speak to about pricing and your specific requirements.

So let us look at how this works. I’ll run you through the steps that a partner might take when they migrate your application where the installation media is available.

On Windows Server 2016, I attempt to run the setup file that would install Microsoft SQL Server 2000. I am prompted by Windows that this application can’t run on this version of Windows Server. In this case, I also cannot run the application in compatibility mode.

I then move over to an old Windows Server 2003 machine that is running locally in my office. I run a tool that is a key piece of technology used in the AWS EMP for Windows Server; we use this tool to migrate the application, decoupling it from the underlying operating system. First, I have to select a folder to place my completed application package after the tool completes.

Next, I start a capture which takes a snapshot of my machine. This is used later to understand what has been changed by the installation process.

The tool then prompts me to install the application that I want to migrate. The tool is listening and capturing all of the changes that take place on the machine.

I run the application and go through the installation process, setting up the application as I would normally do.

The tool identifies all of the shortcuts created by the application installer, and I am asked to run the application using one of these entry points and to run through a typical workflow inside the application. The whole time, the tool is watching what process and system-level APIs are called, so it creates a picture of the application’s dependencies.

Once the capture is complete, I am presented with all of the files changed by the monitored application. I then need to work through these and manually confirm that the folders are indeed part of the installation process.

I go through the same process for registry keys. I manually verify that the registry keys are indeed related to the installation.

Finally, I give the package a name. The package includes all the application files, run times, components, deployment tools, as well as an engine that redirects the API calls from your application to files within the package. This resolves the dependencies and decouples the application from the underlying OS.

Once the package is complete, there are a few manual configuration adjustments that will need to be made. I won’t show these, but I mention it as it highlights why it’s required to have an AWS Partner or Professional Services conduct this process – some of the migration process requires in-depth knowledge and experience.

I then move over to a Windows Server 2016, and run the packaged application. Below you can see that my application is now running on a server it was previously incompatible with.

The AWS EMP for Windows Server supports even your most complex applications, including the ones with tight dependencies on older versions of the operating system, registries, libraries, and other files.

If you want to get started with future-proofing your legacy Windows Server workloads on AWS, head over to the AWS EMP for Windows Server page, where we list partners that can help you on your migration journey. You can also contact us directly about this program by completing this form.

— Martin


AWS Cloud Development Kit (CDK) – Java and .NET are Now Generally Available

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/aws-cloud-development-kit-cdk-java-and-net-are-now-generally-available/

Today, we are happy to announce that Java and .NET support inside the AWS Cloud Development Kit (CDK) is now generally available. The AWS CDK is an open-source software development framework to model and provision your cloud application resources through AWS CloudFormation. The AWS CDK also offers support for TypeScript and Python.

With the AWS CDK, you can design, compose, and share your own custom resources that incorporate your unique requirements. For example, you can use the AWS CDK to model a VPC, with its associated routing and security configurations. You could then wrap that code into a construct and then share it with the rest of your organization. In this way, you can start to build up libraries of these constructs that you can use to standardize the way your organization creates AWS resources.

I like that by using the AWS CDK, you can build your application, including the infrastructure, in your favorite IDE, using the same programming language that you use for your application code. As you code your AWS CDK model in either .NET or Java, you get productivity benefits like code completion and inline documentation, which make it faster to model your infrastructure.

How the AWS CDK Works
Everything in the AWS CDK is a construct. You can think of constructs as cloud components that can represent architectures of any complexity: a single resource, such as an Amazon Simple Storage Service (S3) bucket or an Amazon Simple Notification Service (SNS) topic, a static website, or even a complex, multi-stack application that spans multiple AWS accounts and regions. You compose constructs together into stacks, which you can deploy into an AWS environment, and into apps, collections of one or more stacks.

The AWS CDK includes the AWS Construct Library, which contains constructs representing AWS resources.

How to use the AWS CDK
I’m going to use the AWS CDK to build a simple queue, but rather than handcraft a CloudFormation template in YAML or JSON, the AWS CDK allows me to use a familiar programming language to generate and deploy AWS CloudFormation templates.

To get started, I need to install the AWS CDK command-line interface using NPM. Once this download completes, I can code my infrastructure in TypeScript, Python, JavaScript, Java, or .NET.

npm i -g aws-cdk

On my local machine, I create a new folder and navigate into it.

mkdir cdk-newsblog-dotnet && cd cdk-newsblog-dotnet

Now that I have installed the CLI, I can execute commands such as cdk init, passing a language switch. In this instance, I am creating the sample app in .NET, using the csharp language switch.

cdk init sample-app --language csharp

If I wanted to use Java rather than .NET, I would change the --language switch to java.

cdk init sample-app --language java

Since I am in the terminal, I type code . which is a shortcut to open the current folder in VS Code. You could, of course, use any editor, such as Visual Studio or JetBrains Rider. As you can see below, the init command has created a basic .NET AWS CDK project.

If I look into Program.cs, the Main method creates an App and then a CDKDotnetStack. This stack, CDKDotnetStack, is defined in the CDKDotnetStack.cs file. This is where the meat of the project resides and where all the AWS resources are defined.

Inside the CDKDotnetStack.cs file, some code creates an Amazon Simple Queue Service (SQS) queue, then a topic, and finally adds an Amazon Simple Notification Service (SNS) subscription to the topic.

Now that I have written the code, the next step is to deploy it. When I do, the AWS CDK will compile and execute this project, converting my .NET code into an AWS CloudFormation template.

If I were to just deploy this now, I wouldn’t actually see the CloudFormation template, so the AWS CDK provides a command, cdk synth, that takes my application, compiles it, executes it, and then outputs a CloudFormation template.
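
That command, run from the project folder, is simply:

cdk synth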

This is just standard CloudFormation, if you look through it, you will find the following items:

  • AWS::SQS::Queue – The queue I added.
  • AWS::SQS::QueuePolicy – A queue policy that allows my topic to send messages to my queue. I didn’t actually define this in code, but the AWS CDK is smart enough to know I need one, and so creates one.
  • AWS::SNS::Topic – The topic I created.
  • AWS::SNS::Subscription – The subscription between the queue and the topic.
  • AWS::CDK::Metadata – This section is specific to the AWS CDK and is automatically added by the toolkit to every stack. It is used by the AWS CDK team for analytics and to allow us to identify versions if there are any issues.

Before I deploy this project to my AWS account, I will use cdk bootstrap. The bootstrap command will create an Amazon Simple Storage Service (S3) bucket for me, which will be used by the AWS CDK to store any assets that might be required during deployment. In this example, I am not using any assets, so technically, I could skip this step. However, it is good practice to bootstrap your environment from the start, so you don’t get deployment errors later if you choose to use assets.

I’m now ready to deploy my project, and to do that I issue the following command:
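
cdk deploy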

This command first creates the AWS CloudFormation template and then deploys it into my account. Since my project makes a security change, I am asked if I wish to deploy these changes. I select yes, a CloudFormation change set is created, and my resources start building.

Once complete, I can go over to the CloudFormation console and see that all the resources are now part of an AWS CloudFormation stack.

That’s it, my resources have been successfully built in the cloud, all using .NET.

With the addition of Java and .NET, the AWS CDK now supports 5 programming languages in total, giving you more options in how you build your AWS resources. Why not install the AWS CDK today and give it a try in whichever language is your favorite?

— Martin


Improving Containers by Listening to Customers

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/improving-containers/

At AWS, we build our product roadmap based upon feedback from our customers. The following three new features have all come about because customers have asked us to solve specific issues they face when building and operating sophisticated container-based applications.

Managed Node Groups for Amazon Elastic Kubernetes Service
Our customers have told us that they want to focus on building innovative solutions for their customers, and focus less on the heavy lifting of managing Kubernetes infrastructure.

Amazon Elastic Kubernetes Service already provides you with a standard, highly-available Kubernetes cluster control plane, and now, AWS can also manage the nodes (Amazon Elastic Compute Cloud (EC2) instances) for your Kubernetes cluster. Amazon Elastic Kubernetes Service makes it easy to apply bug fixes and security patches to nodes, and updates them to the latest Kubernetes versions along with the cluster.

The Amazon Elastic Kubernetes Service console and API give you a single place to understand the state of your cluster; you no longer have to jump around different services to see all of the resources that make up your cluster.

You can provision managed nodes today when you create a new Amazon EKS cluster. There is no additional cost to use Amazon EKS managed node groups; you only pay for the Amazon EKS cluster and the AWS resources they provision. To find out more, check out this blog: Extending the EKS API: Managed Node Groups.

Managing your container Logs with AWS FireLens
Customers building container-based applications told us that they wanted more flexibility when it came to logging; however, they didn’t wish to install, configure, or troubleshoot logging agents.

AWS FireLens gives you this flexibility: you can now forward container logs to storage and analytics tools by configuring your task definition in Amazon ECS or AWS Fargate.

This means that developers have their containers send logs to stdout, and then FireLens picks up these logs and forwards them to the destination that has been configured.

FireLens works with the open-source projects Fluent Bit and Fluentd, which means that you can send logs to any destination supported by either of those projects.

There are a lot of configuration options with FireLens, and you can choose to filter logs and even have logs sent to multiple destinations. For more information, you can take a look at the demo I wrote earlier in the week: Announcing Firelens – A New Way to Manage Container Logs.

If you would like a deeper understanding of how the technology works and was built, Wesley Pettit goes into even further depth on the Containers Blog in his article: Under the hood: FireLens for Amazon ECS Tasks.

Amazon Elastic Container Registry EventBridge Support
Customers using Amazon Elastic Container Registry have told us they want to be able to start a build process when new container images are pushed to Elastic Container Registry.

We have therefore added Amazon Elastic Container Registry EventBridge support.

Using events that Elastic Container Registry now publishes to EventBridge, you can trigger actions such as starting a pipeline or posting a message to a destination like Amazon Chime or Slack when your image is successfully pushed.

To learn more about this new feature, check out the following blog post, where I give a more detailed explanation and demo: EventBridge support in Amazon Elastic Container Registry.

More to come
These three new releases add to other great releases we have already had this year, such as Savings Plans, Amazon EKS Windows Containers support, and Native Container Image Scanning in Amazon ECR.

We are still listening, and we need your feedback. If you have a feature request or a pain point with your container applications, please let us know by creating or commenting on issues in our public containers roadmap. Sometime in the future, I might one day write about a new feature that was inspired by you.

Martin


EventBridge Support in Amazon Elastic Container Registry

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/eventbridge-support-in-amazon-elastic-container-registry/

Many of our customers require a secure and private place to store their container images, and that’s why they use our fully managed container registry, Amazon Elastic Container Registry. We recently added support for Amazon EventBridge so that you can trigger actions when images are pushed or deleted. These actions can trigger a continuous integration, continuous deployment pipeline when an image is pushed, or post a message to your DevOps team’s Slack channel when an image has been deleted.

This new capability can even enable complicated workflows, for example, customers can use the image push event on a base image to trigger a rebuild of images built on top of that base. In this scenario, a base image might be rebuilt weekly to pick up the latest security patches. A push event from the base image repository can trigger other builds, so that all derivative images are patched, too.

To show you how to go about using this new capability, I thought I’d open up the console and work through an example of how all the pieces fit together.

In the Amazon EventBridge console, I create a new rule, and I enter a unique name and description.

Next, I scroll down to Define pattern and begin to customise the type of event pattern that I want to use. I leave the default Event pattern radio button selected and choose to use a Pre-defined pattern by service. Since Elastic Container Registry is an AWS service, I select AWS as the Service Provider.

In the Service Name section, you can select one of the many different AWS services as the event source. I am going to choose the newest addition to this list, Elastic Container Registry (ECR). Lastly, in this section, I select ECR Image Action as the Event type, which covers both DELETE and PUSH action types.

Next, I’m asked to configure which event bus I want to use. For this example, I select the AWS default event bus that comes with every AWS account.

Now that I have identified where my events are coming from, I now need to say where I want them to go. We call these targets, and there are plenty of options here. For example, I could send the event to a Lambda Function, a Kinesis stream, or any one of the wide variety of AWS targets.

To keep things simple, I’m going to invoke an Amazon Simple Notification Service (SNS) topic. This topic is called ImageAction, and I have subscribed to it so that I receive an email when new messages arrive.
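
The same rule can also be created without the console; a CLI sketch (the rule name and target details are placeholders of mine):

aws events put-rule --name demo-ecr-image-action \
    --event-pattern '{"source":["aws.ecr"],"detail-type":["ECR Image Action"]}'

aws events put-targets --rule demo-ecr-image-action \
    --targets Id=sns-target,Arn=arn:aws:sns:us-east-1:111122223333:ImageAction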

Back over on my laptop, I push a new version of my container to my repository in Elastic Container Registry.
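
For anyone following along, the push itself looks something like this (the account ID, region, and repository name are placeholders):

aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com
docker tag demo-app:latest 111122223333.dkr.ecr.us-east-1.amazonaws.com/demo-app:latest
docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/demo-app:latest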

If I go over to the Elastic Container Registry console, I can see that my Docker image was successfully pushed. I’m now going to select the image and click the Delete button, which will delete my new image.

This sends both a PUSH and a DELETE event through to my SNS topic, which in turn delivers two emails to me as a subscriber to that topic.


If I open up Outlook, sure enough, I have two (admittedly not pretty) emails that have both the respective action-type of PUSH and DELETE.

So there you have it, you can now wire up events in Elastic Container Registry and enable exciting and wonderful things to happen as a result. Amazon EventBridge support in Amazon Elastic Container Registry is available in all public AWS Regions and GovCloud (US). Try it now in the Amazon EventBridge console.

Happy Eventing!

Martin

Announcing Firelens – A New Way to Manage Container Logs

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/announcing-firelens-a-new-way-to-manage-container-logs/

Today, the fantastic team that builds our container services at AWS have launched an excellent new tool called AWS FireLens that will make dealing with logs a whole lot easier.

Using FireLens, customers can direct container logs to storage and analytics tools without modifying deployment scripts, manually installing extra software or writing additional code. With a few configuration updates on Amazon ECS or AWS Fargate, you select the destination and optionally define filters to instruct FireLens to send container logs to where they are needed.

FireLens works with either Fluent Bit or Fluentd, which means that you can send logs to any destination supported by either of those open-source projects. We maintain a web page where you can see a list of AWS Partner Network products that have been reviewed by AWS Solution Architects. You can send log data or events to any of these products using FireLens.

I find the simplest way to understand FireLens is to use it, so in the rest of this blog post, I’m going to demonstrate using FireLens with a container in Amazon ECS, forwarding the container logs on to Amazon CloudWatch.

First, I need to configure a task definition. I got an example definition from the Amazon ECS FireLens Examples on GitHub.

I replaced the AWS Identity and Access Management (IAM) roles with my own taskRoleArn and executionRoleArn IAM roles, and I also added port mappings so that I could access the NGINX container from a browser.

{
	"family": "firelens-example-cloudwatch",
	"taskRoleArn": "arn:aws:iam::365489000573:role/ecsInstanceRole",
	"executionRoleArn": "arn:aws:iam::365489300073:role/ecsTaskExecutionRole",
	"containerDefinitions": [
		{
			"essential": true,
			"image": "906394416424.dkr.ecr.us-east-1.amazonaws.com/aws-for-fluent-bit:latest",
			"name": "log_router",
			"firelensConfiguration": {
				"type": "fluentbit"
			},
			"logConfiguration": {
				"logDriver": "awslogs",
				"options": {
					"awslogs-group": "firelens-container",
					"awslogs-region": "us-west-2",
					"awslogs-create-group": "true",
					"awslogs-stream-prefix": "firelens"
				}
			},
			"memoryReservation": 50
		 },
		 {
			 "essential": true,
			 "image": "nginx",
			 "name": "app",
			 "portMappings": [
				{
				  "containerPort": 80,
				  "hostPort": 80
				}
			  ],
			 "logConfiguration": {
				 "logDriver":"awsfirelens",
				 "options": {
					"Name": "cloudwatch",
					"region": "us-west-2",
					"log_group_name": "firelens-fluent-bit",
					"auto_create_group": "true",
					"log_stream_prefix": "from-fluent-bit"
				}
			},
			"memoryReservation": 100
		}
	]
}

I saved the task definition to a local folder and then used the AWS Command Line Interface (CLI) to register the task definition.

aws ecs register-task-definition --cli-input-json file://cloudwatch_task_definition.json

I already have an ECS cluster set up, but if you don’t, you can learn how to do that from the ECS documentation. The command below creates a service on my ECS cluster using my newly registered task definition.

aws ecs create-service --cluster demo-cluster --service-name demo-service --task-definition firelens-example-cloudwatch --desired-count 1 --launch-type "EC2"
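
To check that the task has started rather than sitting in PENDING (my addition), the service's deployment counts can be queried:

aws ecs describe-services --cluster demo-cluster --services demo-service \
    --query "services[0].deployments[0].[desiredCount,runningCount]"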

After logging into the Amazon ECS console and drilling down into my service and its tasks, I find the container definition, which exposes an External Link. This IP address is exposed because I asked for port 80 of the container to be mapped to port 80 of the host inside the task definition.

If I go to that IP address in a browser, the NGINX container that I used as my app serves its default page. The NGINX container logs any requests that it receives to stdout, and so FireLens will now forward these logs on to CloudWatch. I added a little message to the URL so that when I take a look at the logs, I should be able to quickly identify this request from all the others.
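
The request I made was along these lines (the IP address is a placeholder; any message that will be easy to spot in the logs does the job):

curl "http://203.0.113.10/?IT%20WORKS"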

I then navigated over to the Amazon CloudWatch console and drilled down into the firelens-fluent-bit log group. If you remember, this is the log group name that I set up in the original task definition. Below, you will notice I have several logs in my log stream, and the last one is the request that I just made in the browser. If you look closely at the log, you will find that “IT WORKS” is passed in as part of the GET request.

So there we have it, I successfully set up FireLens and had it forward my container logs on to CloudWatch. I could, of course, have chosen a different destination, for example, a third-party provider like Datadog or an AWS destination like Amazon Kinesis Data Firehose.

If you want to try FireLens, it is available today in all regions that support Amazon ECS and AWS Fargate.

Happy Logging!

New Automation Features In AWS Systems Manager

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/new-automation-features-in-aws-systems-manager/

Today we are announcing additional automation features inside of AWS Systems Manager. If you haven’t used Systems Manager yet, it’s a service that provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources.

With this new release, it just got even more powerful. We have added additional capabilities to AWS Systems Manager that enable you to build, run, and share automations with others on your team or inside your organisation — making managing your infrastructure more repeatable and less error-prone.

Inside the AWS Systems Manager console, on the navigation menu, there is an item called Automation. If I click this menu item, I will see the Execute automation button.

When I click on this, I am asked which document I want to run. AWS provides a library of documents that I could choose from; however, today I am going to build my own, so I will click on the Create document button.

This takes me to a new screen that allows me to create a document (sometimes referred to as an automation playbook) that, amongst other things, executes Python or PowerShell scripts.

The console gives me two options for editing a document: a YAML editor or the “Builder” tool that provides a guided, step-by-step user interface with the ability to include documentation for each workflow step.

So, let’s take a look by building and running a simple automation. When I create a document using the Builder tool, the first thing required is a document name.

Next, I need to provide a description. As you can see below, I’m able to use Markdown to format the description. The description is an excellent opportunity to describe what your document does; this is valuable since most users will want to share these documents with others on their team and build a library of documents to solve everyday problems.

Optionally, I am asked to provide parameters for my document. These parameters can be used in all of the scripts that you will create later. In my example, I have created three parameters: imageId, tagValue, and instanceType. When I come to execute this document, I will have the opportunity to provide values for these parameters that will override any defaults that I set.

When someone executes my document, the scripts that are executed will interact with AWS services. A document runs with the user’s permissions for most of its actions, along with the option of providing an Assume Role. However, for documents with the Run a Script action, the role is required when the script calls any AWS API.

You can set the Assume role globally in the Builder tool; however, I like to add a parameter called assumeRole to my document; this gives anyone executing it the ability to provide a different one.

You then wire this parameter up to the global Assume role by using the {{assumeRole}} syntax in the Assume role property textbox. (I have called my parameter assumeRole, but you could call it what you like; just make sure that the name you give the parameter is what you put in the double parentheses syntax, e.g. {{yourParamName}}.)

Once my document is set up, I then need to create the first step. Your document can contain one or more steps, and you can create sophisticated workflows with branching, for example based on a parameter or the failure of a step. In this example, though, I am going to create three steps that execute one after another. Again, you need to give the step a name and a description, and this description can also include Markdown. You need to select an Action type; for this example I will choose Run a script.

With the ‘Run a script’ action type, I get to run a script in Python or PowerShell without requiring any infrastructure to run it. It’s important to realise that this script will not be running on one of your EC2 instances; the scripts run in a managed compute environment. You can configure an Amazon CloudWatch log group on the preferences page to send outputs to a CloudWatch log group of your choice.

In this demo, I write some Python that creates an EC2 instance. You will notice that this script is using the AWS SDK for Python. I create an instance based upon an image_id, tag_value, and instance_type that are passed in as parameters to the script.

To pass parameters into the script, in the Additional Inputs section, I select InputPayload as the input type. I then use a particular YAML format in the Input Value text box to wire up the global parameters to the parameters that I am going to use in the script. You will notice that, again, I have used the double parentheses syntax to reference the global parameters, e.g. {{imageId}}; the mapping I used is shown below.
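
For this first step, the Input Value wiring is just a small YAML map; mine looked roughly like this (my reconstruction; the key names match the parameters the script reads from its events object, as described above):

image_id: '{{imageId}}'
tag_value: '{{tagValue}}'
instance_type: '{{instanceType}}'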

In the Outputs section, I also wire up an output parameter that can be used by subsequent steps.

Next, I will add a second step to my document. This time I will poll the instance to see if its status has switched to ok. The exciting thing about this code is that the InstanceId is passed into the script from a previous step; this is an example of how execution steps can be chained together to use the outputs of earlier steps.

def poll_instance(events, context):
    # Poll the instance passed in from the previous step until its
    # status check reports 'ok'.
    import boto3
    import time

    ec2 = boto3.client('ec2')

    instance_id = events['InstanceId']

    print('[INFO] Waiting for instance to enter Status: Ok', instance_id)

    instance_status = "null"

    while True:
        res = ec2.describe_instance_status(InstanceIds=[instance_id])

        if len(res['InstanceStatuses']) == 0:
            # Status checks are not reported immediately after launch
            print("Instance Status Info is not available yet")
            time.sleep(5)
            continue

        instance_status = res['InstanceStatuses'][0]['InstanceStatus']['Status']

        print('[INFO] Polling to get status of the instance', instance_status)

        if instance_status == 'ok':
            break

        time.sleep(10)

    return {'Status': instance_status, 'InstanceId': instance_id}

To pass the parameters into the second step, notice that I use the double parentheses syntax to reference the output of a previous step. The value in the Input value textbox, {{launchEc2Instance.payload}}, is the name of the step, launchEc2Instance, followed by the name of the output parameter, payload.

Lastly, I will add a final step. This step will run a PowerShell script and use the AWS Tools for PowerShell. I’ve added this step purely to show that you can use PowerShell as an alternative to Python.

You will note on the first line that I have to Install the AWSPowerShell.NetCore module and use the -Force switch before I can start interacting with AWS services.

All this step does is take the InstanceId output from the LaunchEc2Instance step and use it to return the InstanceType of the EC2 instance.

It’s important to note that I have to pass the parameters from LaunchEc2Instance step to this step by configuring the Additional inputs in the same way I did earlier.

Now that our document is created, we can execute it. I go to the Actions & Change section of the menu and select Automation. From this screen, I click on the Execute automation button and then choose the document I want to execute. Since this is a document I created, I can find it on the Owned by me tab.

If I click the LaunchInstance document that I created earlier, I get a document details screen that shows me the description I added. This nicely formatted description allows me to generate documentation for my document and enable others to understand what it is trying to achieve.

When I click Next, I am asked to provide any Input parameters for my document. I add the imageId and ARN for the role that I want to use when executing this automation. It’s important to remember that this role will need to have permissions to call any of the services that are requested by the scripts. In my example, that means it needs to be able to create EC2 instances.

Once the document executes, I am taken to a screen that shows the steps of the document and gives me details about how long each step took and respective success or failure of each step. I can also drill down into each step and examine the logs. As you can see, all three steps of my document completed successfully, and if I go to the Amazon Elastic Compute Cloud (EC2) console, I will now have an EC2 instance that I created with tag LaunchedBySsmAutomation.

These new features can be found today in all regions inside the AWS Systems Manager console so you can start using them straight away.

Happy Automating!

— Martin

Amazon EKS Windows Container Support now Generally Available

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/amazon-eks-windows-container-support-now-generally-available/

In March of this year, we announced a preview of Windows Container support on Amazon Elastic Kubernetes Service and invited customers to experiment and provide us with feedback. Today, after months of refining the product based on that feedback, I am delighted to announce that Windows Container support is now generally available.

Many development teams build and support applications designed to run on Windows Server, and with this announcement, they can now deploy them on Kubernetes alongside Linux applications. This ability will provide more consistency in system logging, performance monitoring, and code deployment pipelines.

Amazon Elastic Kubernetes Service simplifies the process of building, securing, operating, and maintaining Kubernetes clusters, and allows organizations to focus on building applications instead of operating Kubernetes. We are proud to be the first Cloud provider to have General Availability of Windows Containers on Kubernetes and look forward to customers unlocking the business benefits of Kubernetes for both their Windows and Linux workloads.

To show you how this feature works, I will need an Amazon Elastic Kubernetes Service cluster. I am going to create a new one, but this will work with any cluster running Kubernetes version 1.14 or above. Once the cluster has been configured, I will add some new Windows nodes and deploy a Windows application. Finally, I will test the application to ensure it is running as expected.

The simplest way to get a cluster set up is to use eksctl, the official CLI tool for EKS. The command below creates a cluster called demo-windows-cluster and adds two Linux nodes to it. Currently, at least one Linux node is required to support Windows node and pod networking; however, I have selected two for high availability, and we would recommend that you do the same.

eksctl create cluster \
--name demo-windows-cluster \
--version 1.14 \
--nodegroup-name standard-workers \
--node-type t3.medium \
--nodes 2 \
--nodes-min 1 \
--nodes-max 3 \
--node-ami auto

Starting with eksctl version 0.7, a new utility called install-vpc-controllers has been added. This utility installs the required VPC Resource Controller and VPC Admission Webhook into the cluster. These components run on Linux nodes and are responsible for enabling networking for incoming pods on Windows nodes. To use the tool, we run the following command.

eksctl utils install-vpc-controllers --name demo-windows-cluster --approve

If you don’t want to use eksctl, we also provide guides in the documentation on how you can run PowerShell or Bash scripts to achieve the same outcome.

Next, I will need to add some Windows nodes to our cluster. If you used eksctl to create the cluster, the command below will work. If you are working with an existing cluster, check out the documentation for instructions on how to create a Windows node group and connect it to your cluster.

eksctl create nodegroup \
--region us-west-2 \
--cluster demo-windows-cluster \
--version 1.14 \
--name windows-ng \
--node-type t3.medium \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--node-ami-family WindowsServer2019FullContainer \
--node-ami ami-0f85de0441a8dcf46

The most up-to-date Windows AMI ID for your region can be found by querying the AWS SSM Parameter Store. Instructions to do this can be found in the Amazon EKS documentation.
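As a sketch, the query looks something like this; the parameter name varies by Windows Server variant and Kubernetes version, so confirm the exact name in the EKS documentation before using it.

aws ssm get-parameter \
  --name /aws/service/ami-windows-latest/Windows_Server-2019-English-Full-EKS_Optimized-1.14/image_id \
  --region us-west-2 \
  --query "Parameter.Value" \
  --output text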

Now that I have the nodes up and running, I can deploy a sample application. I am using a YAML file from the AWS containers roadmap GitHub repository. This file configures an app consisting of a single container that runs IIS, which in turn hosts a basic HTML page.

kubectl apply -f https://raw.githubusercontent.com/aws/containers-roadmap/master/preview-programs/eks-windows-preview/windows-server-IIS.yaml

These are Windows containers, which are often a little larger than Linux containers and therefore take a little longer to download and start up. I monitored the progress of the deployment by running the following command.

kubectl get pods -o wide --watch

I waited for around 5 minutes for the pod to transition to the Running state. I then executed the following command, which connects to the pod and initializes a PowerShell session inside the container. Here, windows-server-iis-66bf9745b-xsbsx is the name of my pod; if you are following along, your pod’s name will be different.

kubectl exec -it windows-server-iis-66bf9745b-xsbsx powershell

Once you are connected to the PowerShell session, you can execute PowerShell commands as if you were using a terminal inside the container. Therefore, if we run the command below, we should get some information back about the news blog.

Invoke-WebRequest -Uri https://aws.amazon.com/blogs/aws/ -UseBasicParsing

To exit the PowerShell session, I type exit and it returns me to my terminal. From there, I can inspect the service that the sample application deployed by typing the following command:

kubectl get svc windows-server-iis-service

This gives me the following output that describes the service:

NAME                         TYPE           CLUSTER-IP      EXTERNAL-IP                          PORT(S)        AGE
windows-server-iis-service   LoadBalancer   xx.xx.xxx.xxx   unique.us-west-2.elb.amazonaws.com   80:32750/TCP   54s

The External IP should be the address of a load balancer. If I type this URL into a browser and append /default.html, it loads an HTML page created by the sample application deployment. This page is served by IIS from one of the Windows containers I deployed.

A Website saying Hello EKS
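You can also fetch the page from a terminal. Substituting your own load balancer hostname for the placeholder, a quick check with curl would be:

curl http://unique.us-west-2.elb.amazonaws.com/default.html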

So there we have it, Windows Containers running on Amazon Elastic Kubernetes Service. For more details, please check out the documentation. Amazon EKS Windows Container Support is available in all the same regions as Amazon EKS, and pricing details can be found over here.

We have a long roadmap for Amazon Elastic Kubernetes Service, but we are eager to get your feedback and will use it to drive our prioritization process. Please take a look at this new feature and let us know what you think!

Now Open – AWS Middle East (Bahrain) 

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/now-open-aws-middle-east-bahrain/

The first AWS Region in the Middle East is now open and it is available for you to use today. The official name is Middle East (Bahrain) and the API name is me-south-1.

With today’s launch, the AWS Cloud now spans 69 Availability Zones within 22 geographic regions around the world.

The Middle East (Bahrain) Region consists of three Availability Zones (AZs). Having three Availability Zones enables Middle East organizations to meet business continuity and disaster recovery requirements and also build highly available, fault-tolerant, and scalable applications.
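A quick way to confirm the new region and list its Availability Zones is with the AWS CLI; a minimal sketch:

aws ec2 describe-availability-zones \
  --region me-south-1 \
  --query "AvailabilityZones[].ZoneName" \
  --output text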

Instances and Services
Applications running in this 3-AZ region can use C5, C5d, D2, I3, M5, M5d, R5, R5d, and T3 instances, and can make use of a long list of AWS services including Amazon API Gateway, Application Auto Scaling, AWS Certificate Manager (ACM), AWS Artifact, AWS CloudFormation, Amazon CloudFront, AWS CloudTrail, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, AWS CodeDeploy, AWS Config, AWS Config Rules, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, EC2 Auto Scaling, EC2 Dedicated Hosts, Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2), Elastic Container Registry, Amazon ECS, Elastic Load Balancing (Classic, Network, and Application), Amazon EMR, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon Glacier, AWS Identity and Access Management (IAM), Amazon Kinesis Data Streams, AWS Key Management Service (KMS), AWS Lambda, AWS License Manager, AWS Marketplace, Amazon Neptune, AWS Organizations, AWS Personal Health Dashboard, AWS Resource Groups, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Aurora, Amazon Route 53 (including Private DNS for VPCs), AWS Shield, AWS Server Migration Service, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), AWS Step Functions, AWS Support API, AWS Systems Manager, AWS Transit Gateway, AWS Trusted Advisor, Amazon Virtual Private Cloud, and VM Import/Export.

Edge Locations and Direct Connect
Amazon CloudFront edge locations are already operational in two cities in the Middle East (Dubai and Fujairah, in the United Arab Emirates).

These cities also host existing AWS Direct Connect locations (Equinix DX1 in Dubai, and the Etisalat Smart Hub datacenter in Fujairah). In addition, we are now opening a new AWS Direct Connect location, AWS Bahrain DC53, in Manama, Bahrain. This new location is the first to be operated by AWS and features 1 Gbps and 10 Gbps connections. See the AWS Direct Connect website for details on our locations, associated home regions, logical redundancy, and pricing.

AWS Customers in the Middle East
Our customers in the Middle East are already using AWS to do incredible things, for example:

Emirates NBD is a leading banking group in the Middle East that is utilizing AWS’s artificial intelligence and machine learning services, data analytics, and natural language processing technologies to better engage with customers and simplify banking.

Al Tayer Group, one of the largest privately held family-run conglomerates in the Middle East and North Africa (MENA), with interests in automobile sales and services, luxury and lifestyle retail, and hospitality, relies on several AWS managed database services such as Amazon Aurora, and was able to cut costs by 40% while increasing performance fourfold compared with proprietary database technologies.

Bahrain’s National Bureau of Revenue (NBR) needed a solution that would enable a seamless VAT rollout, without delays and with the highest levels of security. By launching SAP S/4HANA on AWS, the NBR was able to go to market in just under two months while lowering costs by 40%.

The most well-known and fastest growing businesses in the Middle East are also choosing AWS to launch and grow their businesses. Careem, the leading technology platform for the region, started working with AWS in 2012 to help it scale fast and wide across 14 countries in under seven years. Today, Careem hosts some 33 million customers and one million drivers on its platform, and has expanded its services to offer on-demand deliveries as well as its original core business, ride hailing.

And that Makes 22
The AWS Cloud now has 22 regions with 69 Availability Zones, and we’re not close to being done. We are currently working on nine more Availability Zones and three more AWS Regions, in Cape Town, Jakarta, and Milan.

Investing in the Future
To support the growth in cloud adoption across the region, AWS has made significant investments in education, training, and certification programs to help those interested in the latest cloud computing technologies, best practices, and architectures advance their technical skills, and to further support Middle East organizations in their digital transformation.

AWS Activate – This global program provides startups with credits, training, and support so that they can build their businesses on AWS.

AWS Educate – This global program teaches students about cloud computing. It provides AWS credits to educators and students, along with discounts on training, access to curated content, personalized learning pathways, and collaboration tools.

AWS Academy – This global program is designed to bridge the gap between academia and industry by giving students the knowledge that they need in order to qualify for jobs that require cloud skills. The program is built around hands-on experience, and includes an AWS-authored curriculum, access to AWS certification, and accreditation for educators.

Training and Certification – This global program helps developers to build cloud skills using digital or classroom training and to validate those skills by earning an industry-recognized credential. It includes learning paths for Cloud Practitioners, Architects, Developers, and Operations.

The Middle East (Bahrain) Region is now open and you can start creating your AWS resources in it today!

— Martin

OpsCenter – A New Feature to Streamline IT Operations

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/opscenter-a-new-feature-to-streamline-it-operations/

The AWS teams are always listening to customers and trying to understand how they can improve services to make customers more productive. A new feature in AWS Systems Manager called OpsCenter exemplifies this approach by enabling customers to aggregate issues, events, and alerts across services, so customers can go to one place to view, investigate, and remediate issues, reducing the need to navigate across multiple AWS services.

Issues, events, and alerts appear as operations items (OpsItems) in this new console and provide contextual information, historical guidance, and quick solution steps. The feature aims to improve mean time to resolution, making engineers more productive by ensuring key investigation data is available in one place.

Engineers working on an OpsItem get access to information such as:

  • Event, resource, and account details
  • Past OpsItems with similar characteristics
  • Related AWS Config changes and relationships
  • AWS CloudTrail logs
  • Amazon CloudWatch alarms
  • AWS CloudFormation Stack information
  • Other quick-links to access logs and metrics
  • List of runbooks and recommended runbooks
  • Additional information passed to OpsCenter through AWS services

This information helps engineers to investigate and remediate operational issues faster. Engineers can use OpsCenter to view and address problems using the Systems Manager console or via the Systems Manager OpsCenter APIs.
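For example, creating an OpsItem programmatically is a single call. Here is a minimal AWS CLI sketch; the title, description, source, and severity values are purely illustrative.

aws ssm create-ops-item \
  --title "EC2 instance launch failed" \
  --description "Auto Scaling could not launch new instances" \
  --source "ec2" \
  --severity "2"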

I’ll spend the rest of this blog exploring the capabilities of this new feature. To get started, I open the Systems Manager console, make sure that I am in the region of interest, and click OpsCenter inside the Operations Management menu, which is on the far left of the screen.

After arriving at the OpsCenter screen for the first time and clicking on “Getting Started”, I am prompted with a configure sources screen. This screen sets up the system with some example CloudWatch rules that will create OpsItems when specific rules trigger. For example, one of the CloudWatch rules will alert if an AutoScaling EC2 instance is stopped or terminated. On this screen, you need to configure and add the ARN of an IAM role that has permission to create OpsItems; this role is used by the CloudWatch rules to create the OpsItems. You can, of course, create your own OpsItems through the API or by creating custom CloudWatch rules.
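If you would rather script such a rule yourself, a rough sketch with the AWS CLI might look like the following. The account ID and role name are placeholders, and the OpsItem target ARN format shown here is an assumption, so verify it against the Systems Manager documentation before relying on it.

# Fire whenever an EC2 instance is stopped or terminated
aws events put-rule \
  --name EC2StoppedOrTerminated \
  --event-pattern '{"source":["aws.ec2"],"detail-type":["EC2 Instance State-change Notification"],"detail":{"state":["stopped","terminated"]}}'

# Send matching events to OpsCenter as new OpsItems (hypothetical ARN and role values)
aws events put-targets \
  --rule EC2StoppedOrTerminated \
  --targets 'Id=1,Arn=arn:aws:ssm:us-east-1:123456789012:opsitem,RoleArn=arn:aws:iam::123456789012:role/CloudWatchOpsItemRole'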

Now that the system has set up some CloudWatch rules for me, I thought I would test it out by triggering an alert. In the EC2 console, I will intentionally deregister (delete) the Amazon Machine Image that is associated with my AutoScaling group. I will then increase the Desired Capacity of my AutoScaling group from 2 to 4. The AutoScaling group will later try to create new instances; however, it will fail because I have deleted the AMI.
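If you want to reproduce this from a terminal rather than the console, the equivalent CLI calls are sketched below; the AMI ID is a placeholder, and BookingsAppASG is the name of my AutoScaling group.

# Deregister (delete) the AMI that the AutoScaling group launches instances from
aws ec2 deregister-image --image-id ami-0abcd1234efgh5678

# Raise the desired capacity so the group tries, and fails, to launch new instances
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name BookingsAppASG \
  --desired-capacity 4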

As I expected, this triggered the CloudWatch rule to create an OpsItem in the OpsCenter console. There is now one open item in the OpsItem status summary dashboard. I click on it to get more detail on the open OpsItems.

This gives me a list of all the open OpsItems, and I can see that I have one with the title “Auto Scaling EC2 instance launch failed”, which was created by the CloudWatch rule because I deleted the AMI associated with the AutoScaling group. Clicking on that OpsItem takes me to a more detailed view.

From this overview screen, I can start to explore the item. Looking around, I can find out more information about this OpsItem and see that it is collecting data from numerous services and presenting it in one place.

Further down the screen, I can see Similar OpsItems and can explore them. In a real situation, this might give me contextual information about how similar problems were solved in the past, ensuring that operations teams learn from their previous collective experience. I can also manually add a relationship between OpsItems if they are connected. Importantly, the Operational data section gives me information about the cause. The status message is particularly useful, since it calls out the issue: the AMI does not exist.

On the related resource details screen, I can find out more information about this OpsItem. For example, I can see tag information about the resources alongside relevant CloudWatch alarms. I can explore details from AWS Config as well as drill into CloudTrail logs. I can even see whether the resources are associated with any CloudFormation stacks.

Earlier on, I created a CloudWatch alarm that alerts when the number of instances in my AutoScaling group falls below the desired instance threshold (4 instances). As you can see, I don’t need to go into the CloudWatch console to view this; I can see right from this screen that I have an alarm state for Booking App Instance Count Low.

The Runbooks section is fascinating; what it offers me is automated ways to resolve this issue. There are several built-in runbooks; however, I have a custom one which, luckily enough, automates the fix for this exact problem. It will create a new AMI based upon one of the healthy instances in my AutoScaling group and then update the configuration to use that new AMI when it creates instances. To run this automation, I select the runbook and press Execute.

It asks me to provide some parameters for the automation job. I paste in the AutoScaling group name (BookingsAppASG) as the only required parameter and press Execute.
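The same runbook could be started through the API as well. Here is a sketch with the AWS CLI, assuming a hypothetical document name of RebuildAsgAmi and a parameter named AutoScalingGroupName:

aws ssm start-automation-execution \
  --document-name "RebuildAsgAmi" \
  --parameters "AutoScalingGroupName=BookingsAppASG"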

After a minute or so, a green success indicator appears in the Latest Status column of the runbook, and I am now able to view the logs and even save the output to the operational data on the OpsItem so that other engineers can clearly see what I have done.


Back in the OpsCenter OpsItem related resource details screen, I can now see that my CloudWatch alarm is green and in an OK state, signifying that my AutoScaling group currently has four instances running and that I am safe to resolve the OpsItem.

This feature is available now in all public AWS regions, so why not open up the console and start exploring all the ways that it can save you and your team valuable time.