All posts by Martin Yip

Optimizing Network Intensive Workloads on Amazon EC2 A1 Instances

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/optimizing-network-intensive-workloads-on-amazon-ec2-a1-instances/

This post courtesy of Ali Saidi, AWS, Principal Engineer

At re:Invent 2018, AWS announced the Amazon EC2 A1 instance. The A1 instances are powered by our internally developed Arm-based AWS Graviton processors and are up to 45% less expensive than other instance types with the same number of vCPUs and DRAM. These instances are based on the AWS Nitro System and offer enhanced networking of up to 10 Gbps with the Elastic Network Adapter (ENA).

One of the use cases for the A1 instance is key-value stores, and in this post we describe how to get the most performance from the A1 instance running memcached. Some simple configuration options increase the performance of memcached by 3.9X over the out-of-the-box experience, as we'll show below. Although we focus on memcached, the configuration advice is similar for any network-intensive workload running on A1 instances. Typically, the performance of network-intensive workloads improves by tuning some of these parameters; however, depending on your particular data rates and processing requirements, the optimal values may differ.

irqbalance

Most Linux distributions enable irqbalance by default, which load-balances interrupts across CPUs at runtime. It does a good job of balancing interrupt load, but in some cases we can do better by pinning interrupts to specific CPUs. For our optimizations, we're going to temporarily disable irqbalance. However, if this is a production configuration that needs to survive a server reboot, irqbalance must be permanently disabled and the changes below added to the boot sequence.
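
On a systemd-based distribution such as Amazon Linux 2 or Ubuntu, this looks like the following minimal sketch (the unit name is standard across common distributions):

# Stop irqbalance for the current boot
sudo systemctl stop irqbalance.service

# Optionally disable it permanently so manual pinning survives reboots
sudo systemctl disable irqbalance.service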

Receive Packet Steering (RPS)

RPS controls which CPUs process packets received by the Linux networking stack (softIRQs). Depending on instance size and the amount of application processing needed per packet, sometimes the optimal configuration is to have the core receiving packets also execute the Linux networking stack; other times it's better to spread the processing among a set of cores. For memcached on EC2 A1 instances, we found that using RPS to spread the load out is helpful on the larger instance sizes.

Networking Queues

A1 instances in the medium, large, and xlarge sizes have a single queue for sending and receiving packets, while the 2xlarge and 4xlarge sizes have two queues. On single-queue instances we'll pin the IRQ to core 0, while on dual-queue instances we'll pin both IRQs to core 0 (a1.2xlarge) or to core 0 and core 8 (a1.4xlarge).
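
You can confirm how many queues the ENA device exposes by listing the queue directories in sysfs (assuming the interface is named eth0):

ls /sys/class/net/eth0/queues/
# Single-queue sizes show:          rx-0  tx-0
# 2xlarge and 4xlarge sizes show:   rx-0  rx-1  tx-0  tx-1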

Instance Type | IRQ settings       | RPS settings     | Application settings
a1.xlarge     | Core 0             | Core 0           | Run on cores 1-3
a1.2xlarge    | Both on core 0     | Cores 0-3, 4-7   | Run on cores 1-7
a1.4xlarge    | Core 0 and core 8  | Cores 0-7, 8-15  | Run on cores 1-7 and 9-15

The following script sets up the Linux kernel parameters:

#!/bin/bash

# Stop irqbalance so our manual IRQ pinning isn't overridden
sudo systemctl stop irqbalance.service

# Pin each eth0 IRQ to the core given by the corresponding argument
set_irq_affinity() {
  grep eth0 /proc/interrupts | awk '{print $1}' | tr -d : | while read IRQ; do
    sudo sh -c "echo $1 > /proc/irq/$IRQ/smp_affinity_list"
    shift
  done
}

case `grep -c ^processor /proc/cpuinfo` in
  (4)  # a1.xlarge: single queue, RPS on core 0, IRQ on core 0
       sudo sh -c 'echo 1 > /sys/class/net/eth0/queues/rx-0/rps_cpus'
       set_irq_affinity 0
       ;;
  (8)  # a1.2xlarge: two queues, RPS on cores 0-3 and 4-7, both IRQs on core 0
       sudo sh -c 'echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus'
       sudo sh -c 'echo f0 > /sys/class/net/eth0/queues/rx-1/rps_cpus'
       set_irq_affinity 0 0
       ;;
  (16) # a1.4xlarge: two queues, RPS on cores 0-7 and 8-15, IRQs on cores 0 and 8
       sudo sh -c 'echo ff > /sys/class/net/eth0/queues/rx-0/rps_cpus'
       sudo sh -c 'echo ff00 > /sys/class/net/eth0/queues/rx-1/rps_cpus'
       set_irq_affinity 0 8
       ;;
  *)   echo "Script only supports 4, 8, or 16 cores on A1 instances"
       exit 1
       ;;
esac
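
To complete the picture, the application itself can be pinned to the remaining cores with taskset. The following is a sketch for an a1.4xlarge following the table above; the memcached flags (user, thread count, cache size) are illustrative assumptions rather than values from our benchmark:

# Keep cores 0 and 8 free for interrupt and softIRQ processing,
# and run memcached threads on the remaining 14 cores
sudo taskset -c 1-7,9-15 memcached -u memcache -t 14 -m 4096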

Summary

Some simple tuning parameters can significantly improve the performance of network-intensive workloads on the A1 instance. With these changes, we get 3.9X the performance on an a1.4xlarge, and the other two instance sizes see similar improvements. While the particular values listed here aren't applicable to all network-intensive benchmarks, this article demonstrates the methodology and provides a starting point for tuning the system and balancing the load across CPUs to improve performance. If you have questions about your own workload running on A1 instances, please don't hesitate to get in touch with us at [email protected].

Getting started with the A1 instance

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/getting-started-with-the-a1-instance/

This post courtesy of Ali Saidi, Annapurna Labs, Principal Systems Developer

At re:Invent 2018, AWS announced the Amazon EC2 A1 instance. These instances are based on the AWS Nitro System that powers all of our latest generation of instances, and are the first instance types powered by the AWS Graviton Processor. These processors feature 64-bit Arm Neoverse cores and are the first general-purpose processors designed by Amazon specifically for use in AWS. The instances are up to 40% less expensive than other instance types with the same number of vCPUs and DRAM. A1 instances are currently available in the US East (N. Virginia and Ohio), US West (Oregon), and EU (Ireland) Regions with the following configurations:

Model      | vCPUs | Memory (GiB) | Instance Store | Network Bandwidth | EBS Bandwidth
a1.medium  | 1     | 2            | EBS Only       | Up to 10 Gbps     | Up to 3.5 Gbps
a1.large   | 2     | 4            | EBS Only       | Up to 10 Gbps     | Up to 3.5 Gbps
a1.xlarge  | 4     | 8            | EBS Only       | Up to 10 Gbps     | Up to 3.5 Gbps
a1.2xlarge | 8     | 16           | EBS Only       | Up to 10 Gbps     | Up to 3.5 Gbps
a1.4xlarge | 16    | 32           | EBS Only       | Up to 10 Gbps     | Up to 3.5 Gbps

For further information about the instance itself, developers can watch this re:Invent talk and visit the A1 product details page.

Since introduction, we've been expanding the available operating systems for the instance and working with the Arm software ecosystem. This post provides a summary of what's supported and how to use it.

Operating System Support

If you’re running on an open source stack, as many customers who build applications that scale-out in the cloud are, the Arm ecosystem is well developed and likely already supports your application.

The A1 instance requires AMIs and software built for Arm processors. When we announced A1, we had support for Amazon Linux 2, Ubuntu 16.04 and 18.04, as well as Red Hat Enterprise Linux 7.6. A little over two months later, the list of available operating systems has grown to include Red Hat Enterprise Linux 8.0 Beta, NetBSD, Fedora Rawhide, Ubuntu 18.10, and Debian 9.8. We expect to see more operating systems, Linux distributions, and AMIs available in the coming months.

These operating systems and Linux distributions offer the same level of support for their Arm AMIs as they do for their existing x86 AMIs. In almost every case, if you're installing packages with apt or yum, those packages exist for Arm in the OS of your choice and will run in the same way.
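
A quick way to confirm you're on the Arm build of your distribution is to check the machine architecture; the package manager then resolves Arm binaries automatically:

$ uname -m
aarch64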

For example, to install PHP 7.2 on the Arm version of Amazon Linux 2 or Ubuntu, we follow the exact same steps we would on an x86-based instance type. On Amazon Linux 2:

$ sudo amazon-linux-extras enable php7.2
$ sudo yum install php

Or on Ubuntu 18.04:

$ sudo apt update
$ sudo apt install php

Containers

Containers are one of the most popular application deployment mechanisms for A1. Amazon Elastic Container Service (Amazon ECS) already supports the A1 instance and there's an Amazon ECS-optimized Amazon Linux 2 AMI, and we'll soon be launching support for Amazon Elastic Kubernetes Service (Amazon EKS). The majority of Docker Official Images hosted in Docker Hub already support 64-bit Arm systems along with x86.

We’ve further expanded support for running containers at scale with AWS Batch support for A1.

Running a container on A1

In this section, we show how to run a container on Amazon Linux 2. Many Docker Official Images (at least 76% as of this writing) already support 64-bit Arm systems, and the majority of the ones that don't either have pending patches to add support or are based on commercial software.

$ sudo yum install -y docker
$ sudo service docker start
$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
3b4173355427: Pull complete
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest
 
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
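
You can also check whether a given image publishes a 64-bit Arm variant before pulling it by inspecting its manifest list. This is a sketch; on older Docker releases the manifest subcommand requires enabling experimental CLI features:

$ docker manifest inspect ubuntu:18.04 | grep architecture
         "architecture": "amd64",
         "architecture": "arm64",
         ...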

Running WordPress on A1

As an example of automating the running of a LAMP (Linux, Apache HTTPd, MariaDB, and PHP) stack on an A1 instance, we’ve updated a basic CloudFormation template to support the A1 instance type. We made some changes to the template to support Amazon Linux 2, but otherwise the same template works for all our instance types. The template is here and it can be launched like any other CloudFormation template.

It defaults to running on an A1 Arm instance. After the template is launched, the output is the URL of the running instance, which can be opened in a browser to verify that the default WordPress home page is being served.

Summary

If you’re using open source software, everything you rely on most likely works on Arm systems today, and over the coming months we’ll be working on increasing the support and improving performance of software running on the A1 instances. If you have an open source based web-tier or containerized application, give the A1 instances a try and let us know what you think. If you run into any issues please don’t hesitate to get in touch at [email protected] , via the AWS Compute Forum, or reach out through your usual AWS support contacts, we love customer’s feedback.

Best Practices for Porting Applications to the EC2 A1 Instance Type

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/best-practices-for-porting-applications-to-the-ec2-a1-instance-type/

This post courtesy of Dr. Jonathan Shapiro-Ward, AWS Solutions Architect

The new Amazon EC2 A1 instance types are powered by an ARM-based AWS Graviton CPU. A1 instances are extremely cost effective and are ideal for scale-out scenarios where a large number of smaller instances are required.

Prior to the launch of A1 instances, all AWS instances were x86 based. There are a number of significant differences between the two instruction sets. The predominant difference is that ARM follows a RISC (Reduced Instruction Set Computer) design, whereas x86 is a CISC (Complex Instruction Set Computer) architecture. In short, ARM is a comparatively simple architecture composed of simple instructions that execute within a single cycle, while x86 has mostly complex instructions that execute over multiple cycles. This key difference has a range of implications for compiler and hardware complexity, power efficiency, and performance, but the key question for application developers is portability.

Workloads built for x86 will not run on the A1 family of instances; they must be ported. In many cases this is trivial. An extensive range of Free and Open Source software supports ARM and requires no modification to port workloads. Aside from installing a different binary, the process for installing the Apache Web Server, Nginx, PostgreSQL, Docker, and many more applications is unchanged. Unfortunately, porting is not always so simple, especially for applications developed in house. Ideally, software is written to be portable, but this is not always the case. Even workloads written in languages designed for portability, such as Java, can prove a challenge. In this blog post, we'll review common challenges and migration paths when porting from x86-based architectures to ARM.

A General Porting Strategy

  1. Check for Core Language and Platform Support. The vast majority of common languages such as Java, Python, Perl, Ruby, PHP, Go, Rust, and so forth support modern ARM architectures. Likewise, major frameworks such as Django, Spring, Hadoop, Apache Spark, and many more run on ARM. More niche languages and frameworks may not have as robust support. If your language or crucial framework is dependent on x86, you may not be able to run your application on the A1 instance type.
  2. Identify all third party libraries and dependencies. All non-trivial applications rely on third party libraries to provide essential functionality. These can range from a standard library, to open source libraries, to paid for proprietary libraries. Examine these libraries and determine if they support ARM. In the event that a library is dependent upon a specific architecture, search for an open issue around ARM support or inquire with the vendor as to the roadmap. If possible, investigate alternative libraries if ARM support is not forthcoming.
  3. Identify Porting Path. There are three common strategies for porting. The strategy that applies will depend upon the language being used.
    1. For interpreted languages and those compiled to bytecode, the first step is to translate runbooks, scripts, AMIs, and templates to install the ARM equivalent of the interpreter or language VM, for instance the ARM version of the JVM or CPython. If installation is done via a package manager, no change may be necessary. Subsequently, it is necessary to ensure that any native code libraries are replaced with their ARM equivalents. For Java, this would entail swapping out libraries leveraging JNI. For Python, this would entail swapping out CPython C-API based libraries (such as numpy). Once again, if this is done via a package manager such as yum or pip, manual intervention should not be necessary.
    2. In the case of compiled languages such as C/C++, the application will have to be re-compiled for ARM. If your application utilizes any machine specific features or relies on behavior that varies between compilers, it may be necessary to re-write parts of your application. This is discussed in more detail below. If your application follows common standards such as ANSI C and avoids any machine specific dependencies, the majority of effort will be spent in modifying the build process. This will depend upon your build process and will likely involve modifying build configurations, makefiles, configure scripts, or other assets. The binary can either be compiled on an A1 instance or cross compiled. Once a binary has been produced, the deployment process should be adapted to target the A1 instance.
    3. For applications built using platform specific languages and frameworks, which have an ARM alternative, the application will have to be ported to this alternative. By far the most common example of this is the .NET Framework. In order to run on ARM, .NET applications must be ported to .NET Core or to Mono. This will likely entail a non-trivial modification to the application codebase. This is discussed in more detail below.
  4. Test! All tests must be ported over and, if there is insufficient test coverage, it may be necessary to write new tests. This is especially pertinent if you had to recompile your application. A strong test suite should identify any issues arising from machine specific code that behaves incorrectly on the A1 instance type. Perform the usual range of unit testing, acceptance testing, and pre-prod testing.
  5. Update your infrastructure as code resources, such as AWS CloudFormation templates, to provision your application to A1 instances. This will likely be a simple change, modifying the instance type, AMI, and user data to reflect the change to the A1 instance.
  6. Perform a Green/Blue Deployment. Create your new A1 based stack alongside your existing stack and leverage Route 53 weighted routing to route 10% of requests to the new stack. Monitor error rates, user behavior, load, and other critical factors in order to determine the health of the ported application in production. If the application behaves correctly, swap over all traffic to the new stack. Otherwise, reexamine the application and identify the root cause of any errors.

Porting C/C++ Applications to A1 Instances

C and C++ are very portable languages. Indeed, C runs on more architectures than any other language but that is not to say that all C will run on all systems. When porting applications to A1 instances, many of the challenges one might expect do not arise. The AWS Graviton chip is little endian, just like x86 and int, float, double, and other common types are the same size between architectures. This does not, however, guarantee portability. Most commonly, issues porting a C based application arise from aspects of the C standard which are dependent upon the architecture and implementation. Let’s briefly look at some examples of C that are not portable between architectures.

The most frequently discussed issue in porting C from x86 to ARM is the use of the character datatype. This issue arises from the C99 standard which requires the implementation to decide if the char datatype is signed or unsigned. On x86 Linux a char is signed by default. On ARM Linux a char is unsigned. This discrepancy is due to performance, with unsigned char types resulting in more efficient ARM assembly. This can, however, cause issues. Let’s examine the following code listing:

//Code Listing 1
#include <stdio.h>

int main(){
    char c = -1;
    if (c < 0){
        printf("The value of the char is less than 0\n");
    } else {
        printf("The value of the char is greater than 0\n");
    }
    return 0;
}

On an x86 instance (in this case a t3.large), the above code has the expected result, printing "The value of the char is less than 0". On an A1 instance, it does not. There are mechanisms around this; for example, gcc has the -fsigned-char flag, which forces all char types to become signed upon compilation. Crucially, a developer must be aware of these types of issues ahead of time. Not all compilers and warning levels provide appropriate warnings around char signedness (and indeed many other issues arising from architectural differences). As a result, without rigorous testing, it is possible to introduce unexpected errors by porting. This makes a comprehensive set of tests for your application an essential part of the porting process.
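
As a quick illustration, here is what compiling and running Code Listing 1 on an A1 instance looks like with and without the flag (the file name is assumed):

$ gcc -o chartest chartest.c        # default on ARM Linux: char is unsigned
$ ./chartest
The value of the char is greater than 0
$ gcc -fsigned-char -o chartest chartest.c
$ ./chartest
The value of the char is less than 0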

If your application has only ever been built for a single target environment (for example, x86 Linux with gcc), unexpected behaviors can emerge when that application is built for a different architecture or with a different compiler.

The key best practice for ensuring portability of your C applications (and other compiled languages) is to adhere to a standard. Vanilla C99 will ensure the broadest compatibility across architectures and operating systems. A compiler-specific standard such as gnu99 will also ensure compatibility, but can tether you to one compiler.

Static analysis should always be used when building your applications. Static analysis helps to detect bugs, security flaws, and, pertinent to our discussion, compatibility issues.
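
As a starting point, the following commands sketch how this might look in a build pipeline; the specific tools and flags are illustrative rather than prescriptive:

# Compile against a strict standard with a broad set of warnings
gcc -std=c99 -Wall -Wextra -Werror -c app.c

# Run a standalone static analyzer such as cppcheck over the codebase
cppcheck --enable=all src/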

Porting .NET Applications to A1 Instances

Porting a .NET application from a Windows instance to an A1 Linux instance can yield significant cost reductions and is an effective way to economically scale out web apps and other parallelizable workloads.

For all intents and purposes, the .NET Framework only runs on x86 Windows. This limitation does not necessarily prevent running your .NET application on an A1 instance. There are two main .NET implementations: .NET Framework and .NET Core. The .NET Framework is the original implementation and is tightly coupled to x86 Windows. .NET Core is a more recent open source project, developed by Microsoft, which is decoupled from Windows and runs on a variety of platforms, including ARM Linux. There is one final option, Mono, an open source implementation of the .NET Framework that runs on a variety of architectures.

For greenfield projects, .NET Core has become the de facto option, as it has a number of advantages over the alternatives. Projects based on .NET Core are cross-platform and significantly lighter than .NET Framework and Mono projects, making them far better suited to developing microservices and to running in containers or serverless environments. From version 2.1 onward, .NET Core supports ARM.

For existing projects, .NET Core is the best migration path to containers and to running on A1 instances. There are, however, a number of factors that might prohibit replatforming to .NET Core. These include:

• Dependency on Windows-specific APIs.
• Reliance on features that are only available in the .NET Framework, such as WPF or Windows Forms. Many of these features are coming to .NET Core as part of .NET Core 3, but at the time of writing this is in preview.
• Use of third-party libraries that do not support .NET Core.
• Use of an unsupported language. Currently, .NET Core only supports C#, F#, and Visual Basic.

There are a number of tools to help port from the .NET Framework to .NET Core. These include the .NET Portability Analyzer, which analyzes a .NET codebase and determines any factors that might prohibit porting.

Conclusion

The A1 instances can deliver significant cost savings over other instance types and are ideal for scale-out applications such as microservices. In many cases, moving to A1 instances can be easy. Many languages, frameworks, and applications have strong support for ARM. For Python, Java, Ruby, and other open source languages, porting can be trivial. For other applications, such as native binary applications or .NET applications, there can be some challenges. By examining your application and determining what x86 dependencies, if any, exist, you can devise a migration strategy that enables you to make use of A1 instances.

Learn about hourly-replication in Server Migration Service and the ability to migrate large data volumes

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/learn-about-hourly-replication-in-server-migration-service-and-the-ability-to-migrate-large-data-volumes/

This post courtesy of Shane Baldacchino, AWS Solutions Architect

AWS Server Migration Service (AWS SMS) is an agentless service that makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations.

In my previous blog posts, we introduced how you can use AWS Server Migration Service (AWS SMS) to migrate a popular off-the-shelf application, WordPress, into AWS.

For details and a walkthrough on how to set up AWS Server Migration Service, please see the previous blog posts for the Hyper-V and VMware hypervisors, which guide you through the high-level process.

In this article, we step it up a few notches and look past the common migration of off-the-shelf software to provide a pattern for how you can use AWS SMS and some of its recently launched features, specifically compression and resiliency for replication jobs and support for data volumes greater than 4 TB, to migrate a more complicated environment.

This post covers the migration of a complex, internally developed eCommerce system comprising a polyglot architecture. It is made up of a Microsoft IIS presentation tier on Windows, a Tomcat application tier, and a Microsoft SQL Server database tier. All workloads run on-premises as virtual machines in a VMware vCenter 5.5 and ESX 5.5 environment.

This theoretical customer environment has various business and infrastructure requirements.

• Application downtime: During any migration activities, the application cannot be offline for more than 2 hours.
• Licensing: The customer has renewed their Microsoft SQL Server license for an additional 3 years and holds the License Mobility with Software Assurance option for Microsoft SQL Server, and therefore wants to take advantage of AWS BYOL licensing for Microsoft SQL Server and Microsoft Windows Server.
• Large data volumes: The Microsoft SQL Server database engine (.mdf, .ldf, and .ndf files) consumes 11 TB of storage.

Walkthrough

Key elements of this migration process are identical to the process outlined in my previous blog posts (see the Hyper-V and VMware hypervisor posts for more information), but at a high level you will need to:

• Establish your AWS environment.
• Download the SMS Connector from the AWS Management Console.
• Configure AWS SMS and Hypervisor permissions.
• Install and configure the SMS Connector appliance.
• Import your virtual machine inventory and create replication jobs.
• Launch your Amazon EC2 instances and associated NACLs, security groups, and Elastic Load Balancers.
• Change your DNS records to resolve the custom application to an AWS Elastic Load Balancer.

Before you start, ensure that your source systems' OS and vCenter version are supported by AWS. For more information, see the Server Migration Service FAQ.

Planning the Migration

Once you have downloaded and configured the AWS SMS Connector for your given hypervisor, you can get started creating replication jobs.

The artifacts derived from our replication jobs with AWS SMS will be AMIs (Amazon Machine Images), so we do not need to replicate each server individually. Because our three-tier architecture has commonality between servers, with multiple application and web servers performing the same function, we can leverage a common AMI per tier and create three replication jobs:

1. Microsoft SQL Server – Database Tier
2. Ubuntu Server – Application Tier
3. IIS Web server – Webserver Tier

Performing the Replication

After validating that the SMS Connector is in a “HEALTHY” state, import your server catalog from your Hypervisor to AWS SMS. This process can take up to a minute.

Select the three servers (Microsoft SQL Server, Ubuntu Server, IIS web server) to migrate and choose Create replication job. AWS SMS now supports creating replication jobs with frequencies as short as 1 hour, so to ensure our business RTO (Recovery Time Objective) of 2 hours is met, we will create our replication jobs with a frequency of 1 hour. This minimizes the risk of any delta updates not completing during the cutover window.

Given the business's existing licensing investment in Microsoft SQL Server, they will leverage the BYOL (Bring Your Own License) offering when creating the Microsoft SQL Server replication job.
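
Equivalent replication jobs can also be created from the AWS CLI. Here is a sketch for the SQL Server job, where the server ID and seed time are placeholders for values from your imported catalog:

aws sms create-replication-job \
    --server-id s-12345678 \
    --seed-replication-time "2019-05-04T00:00:00Z" \
    --frequency 1 \
    --license-type BYOL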

The AWS SMS console guides you through the process. The time that the initial replication task takes to complete is dependent on available bandwidth and the size of your virtual machines.

After the initial seed replication, network bandwidth requirement is minimized as AWS SMS replicates only incremental changes occurring on the VM.

The progress updates from AWS SMS are automatically sent to AWS Migration Hub so that you can track tasks in progress.

AWS Migration Hub provides a single location to track the progress of application migrations across multiple AWS and partner solutions. In this post, we are using AWS SMS as a mechanism to migrate the virtual machines (VMs) and track them via AWS Migration Hub.

Migration Hub and AWS SMS are both free. You pay only for the cost of the individual migration tools that you use, and any resources being consumed on AWS.

The dashboard reflects any status changes that occur in the linked services. At this point in the migration, two servers were complete while another was still in progress.

Using Migration Hub, you can view the migration progress of all applications. This allows you to quickly get progress updates across all of your migrations, easily identify and troubleshoot any issues, and reduce the overall time and effort spent on your migration projects.

Testing Your Replicated Instances

Thirty hours after creating the replication jobs, notification was received via AWS SNS (Simple Notification Service) that all 3 replication jobs had completed. During the 30-hour replication window, the customer's ISP experienced downtime and sporadic flapping of the link, but this was negated by the network auto-recovery feature of AWS SMS, which resumed replication without any intervention.

With the replication tasks complete, the artifact created by AWS SMS is a custom AMI that you can use to deploy an EC2 instance. Follow the usual process to launch your EC2 instance, noting that you may need to replace any host-based firewalls with security groups and NACLs, and any hardware-based load balancers with Elastic Load Balancing to achieve fault tolerance, scalability, performance, and security.

As this environment is a three-tier architecture with commonality between tiers (the application and presentation tiers), we can create an ASG (Auto Scaling group) during the EC2 launch process to ensure that deployed capacity matches user demand. The ASG is based on the custom AMIs generated by the replication jobs, as shown below.
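
This is a minimal sketch using the AWS CLI; all names and IDs below are placeholders:

# Launch configuration based on the AMI produced by the replication job
aws autoscaling create-launch-configuration \
    --launch-configuration-name app-tier-lc \
    --image-id ami-0abcd1234example \
    --instance-type m5.large \
    --security-groups sg-0abcd1234example

# Auto Scaling group spread across two subnets
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name app-tier-asg \
    --launch-configuration-name app-tier-lc \
    --min-size 2 --max-size 6 --desired-capacity 2 \
    --vpc-zone-identifier "subnet-0aaaaexample,subnet-0bbbbexample"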

When you create an EC2 instance, ensure that you pick the most suitable EC2 instance type and size to match your performance and cost requirements.

While your new EC2 instances are a replica of your on-premises VM, you should always validate that applications are functioning. How you do this differs on an application-by-application basis. You can use a combination of approaches, such as editing a local host file and testing your application, SSH, RDP and Telnet.

For our Windows presentation and database tiers, I can RDP into my systems and validate that IIS 8.0 and other services are functioning correctly.

For our Ubuntu Application tier, we can SSH in to perform validation.

After validating each individual server, we can test the application end to end. Our systems have been instantiated inside a VPC with no route back to our on-premises environment, which allows us to test functionality without the risk of communication back to our production application.

After validating the systems, it is time to cut over. Plan your runbook accordingly to ensure you either eliminate or minimize application disruption.

Cutting Over

As the replication window specified in the AWS SMS replication jobs was 1 hour, hourly AMIs were created providing delta updates since the initial seed replication. The customer verified the stack by executing the previously created runbook using the latest AMIs, and confirmed that the application behaved as expected.

After another round of testing, the customer decided to plan the cutover for the coming Saturday at midnight, announcing a two-hour scheduled maintenance window. During the cutover window, the customer took the application offline, shut down the Microsoft SQL Server instance, and performed an on-demand sync of all systems.

This generated a new versioned AMI that contained all on-premises data. The customer then executed the runbook using the new AMIs. For the application and presentation tiers, these AMIs were used in the ASG configuration. After application validation, Amazon Route 53 was updated to resolve the application CNAME to the Application Load Balancer CNAME used to load-balance traffic to the fleet of IIS servers.
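
Using the AWS CLI, such a record update might look like the following sketch; the zone ID, record name, and ALB DNS name are placeholders:

aws route53 change-resource-record-sets \
    --hosted-zone-id Z1EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "app.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "app-alb-1234.ap-southeast-2.elb.amazonaws.com"}]
        }
      }]
    }'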

Based on the TTL (Time To Live) of the Amazon Route 53 DNS zone file, end users slowly resolve the application to AWS, in this case within 300 seconds. Once this TTL period had elapsed, the customer brought their application back online and exited the maintenance window, with time to spare.

After modifying the Amazon Route 53 zone apex, traffic in the physical topology is now routed to AWS.

After validation of a successful migration the customer deleted their AWS Server Migration Service replication jobs and began planning to decommission their on-premises resources.

Summary

This post presented an example pattern for migrating a complex, custom, polyglot environment into AWS using AWS migration services, specifically leveraging many of the new features of AWS SMS.

Many architectures can be extended to use the inherent benefits of AWS with little effort. For example, this article illustrated how AWS migration services can be used to migrate complex environments into AWS, where native AWS services such as Amazon CloudWatch metrics can drive Auto Scaling policies to ensure deployed capacity matches user demand, while technologies such as Application Load Balancers can be used to achieve fault tolerance and scalability.

Think big and get building!

BYOL and Oversubscription

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/byol-and-oversubscription/

This post is courtesy of Mike Eizensmits, Senior Solutions Architect – AWS

Most AWS customers have a significant Windows Server deployment and are also tied to a Microsoft licensing program. When it comes to Microsoft products such as Windows Server and SQL Server, licensing models can easily dictate cloud infrastructure solutions. AWS provides several options to support Bring Your Own License (BYOL), as well as EC2 License Included models for non-BYOL workloads. Most enterprise customers have Enterprise Agreements (EAs) with Microsoft, which can skew their licensing strategy when considering Azure, on-premises environments, and other cloud service providers such as AWS. BYOL models may be the only reasonable implementation path when entering a new environment or spinning up new applications, and licensing can constitute a significant investment when running workloads on public cloud. To help customers get the maximum benefit from their existing Microsoft licensing, AWS provides multiple options to utilize BYOL. EC2 Dedicated Hosts and Dedicated Instances expose the physical cores of the server to Windows and applications such as SQL Server, while allowing licenses with or without Software Assurance to be utilized. EC2 Bare Metal instances as well as VMware Cloud on AWS can minimize additional licensing costs.

Dedicated Hosts and Instances

An EC2 Dedicated Host is a dedicated server of a specific instance class that is allotted to a single customer, referred to as dedicated tenancy. The density of a host is based on the instance size as well as the instance type defined at creation. If you chose the M5 instance type and the m5.large instance size, you would have 48 "slots" available on the host to deploy m5.large instances. If you chose the m5.xlarge, you would have enough capacity to house 24 instances. Dedicated Hosts have a fixed number of vCPUs and RAM per instance type. To deploy Windows on a Dedicated Host, the customer imports an image (VMDK, OVA, or VHD) using the import-image utility and tags the image as "BYOL" in the command. The BYOL flag dictates whether the image acquires a license from AWS or from the customer's existing licensing framework. When dealing with an oversubscribed customer environment, such as an on-premises VMware deployment, the customer has likely oversubscribed the environment with a minimum of 4 vCPUs to 1 physical core (4:1). In these environments, Microsoft licensing typically takes place at the host level using physical cores, rather than the resources in a provisioned instance (vCPUs). An AWS Dedicated Host is oversubscribed 2 vCPUs to 1 CPU, meaning each core is Hyper-Threaded. While the math can be performed to show the actual value of a vCPU, customers can be reluctant to modify vCPU configurations to reflect the greater value of the AWS vCPU. Simply matching the quantity of vCPUs to their current environment may be much more costly and expansive than right-sizing the instances for cost optimization.
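
For reference, an import of a VHD with the BYOL flag might look like the following sketch, with the bucket and key as placeholders:

aws ec2 import-image \
    --description "Windows Server 2016 BYOL" \
    --license-type BYOL \
    --disk-containers "Format=vhd,UserBucket={S3Bucket=my-import-bucket,S3Key=images/win2016.vhd}"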

Below is a sample configuration for a customer interested in migrating to AWS and utilizing BYOL for Microsoft Windows and SQL Server Enterprise Edition. By licensing the 400 physical cores in their cluster, the customer is able to assign any number of vCPUs to the VMs deployed on the hosts. Enterprise architects have spent a considerable amount of time sizing VMs with the proper resource attributes, so it can be difficult to initiate that process all over again to bring them to the public cloud.

Customer Environment (SQL Server Enterprise Cluster on VMware):

ESXi Nodes in Cluster     | 10
Cores/Host (2.6 GHz)      | 40
Total Cores in Cluster    | 400
vCPUs assigned (240/host) | 2400
Virtual Machines          | 470
Oversubscription          | 5:1
Value of vCPU             | 520 MHz

In this case, the customer has decided not to right-size their VMs and instead maintains their current vCPU/RAM specifications. The following is a Dedicated Host solution to match their vCPU configurations.

AWS Dedicated Host Environment (Dedicated Hosts required to match vCPU of VMware above)

Dedicated Hosts (r4)  | 38
Cores/Host (2.6 GHz)  | 36
vCPUs/Host            | 64
Total vCPUs assigned  | 2432
Oversubscription      | 2:1
Value of vCPU         | 1300 MHz

If the customer is not willing to consider right-sizing their VMs by assigning fewer, yet higher-powered, vCPUs, they face a considerably larger server deployment. After doing the math, we can determine that the Dedicated Host solution has 2.5 times the power of the VMware solution. Following that logic, right-sizing the VMs would drop the required vCPU count down to 960 vCPUs to match the current solution. This would change the number of required r4 Dedicated Hosts from 38 to 15 and slash the SQL Server licensing requirements for the solution.

EC2 Bare Metal instances and VMware Cloud on AWS

AWS has other products that lend themselves to the BYOL/oversubscription story. EC2 Bare Metal instances and VMware Cloud on AWS give the customer full control of the configuration of instances, just as they have on-premises. EC2 Bare Metal instances are built on the Nitro System, a collection of AWS-built hardware offload and security components that offers high-performance networking and storage to EC2 instances. EC2 Bare Metal instances can utilize AWS services such as Amazon Virtual Private Cloud (VPC), Elastic Block Store (EBS), Elastic Load Balancing (ELB), Auto Scaling, and more. The Nitro configuration gives the customer the ability to install a server operating system or hypervisor directly on the hardware. By utilizing their own hypervisor, the customer can define and configure their own instance configurations of RAM, disk, and vCPUs. By surpassing the fixed configurations in the EC2 Dedicated Host environment, Nitro configurations enable migrating highly oversubscribed on-premises workloads.

The VMware Cloud on AWS offering provides organizations the ability to extend and migrate their on-premises vSphere environment to AWS's scalable and secure cloud infrastructure. Customers can leverage vSphere, vSAN, NSX, and vCenter technologies to extend their data centers and consume AWS services. vMotion provides the ability to live-migrate VMs to AWS with limited or no downtime. While licensing for migrated VMs does not change at the VM level, it is imperative that the licensing on the new vSphere node be adequate. Since the customer has complete control of the environment, they have the ability to oversubscribe CPUs to any ratio. By licensing applications such as SQL Server at the host level, oversubscription rates become irrelevant to licensing. If a vSphere node has 40 cores, as long as the 40 cores are licensed, the number of vCPUs assigned is immaterial. In the VMware environment, all OSs and applications are BYOL, and no licensing is provided by AWS. Ultimately, this solution is free of the oversubscription burden that affects certain AWS dedicated tenancy options.

Optimize CPUs

EC2 instance types offer multiple fixed vCPU-to-memory configurations to match the customer's workloads and use cases. With Optimize CPUs, customers now have the ability to specify the number of cores that an instance has access to, as well as to determine whether Hyper-Threading is enabled. Hyper-Threading Technology enables multiple threads to run concurrently on a single Intel Xeon CPU core. Each thread is represented as a virtual CPU (vCPU) on the instance. Controlling the thread and core count is significant for Microsoft SQL Server, as it is typically more RAM-constrained than compute-bound. With Optimize CPUs, you can potentially reduce the number of SQL Server licenses required by specifying a custom number of vCPUs. SQL Server on Amazon EC2 is often licensed per virtual core, and EC2 vCPUs are the equivalent of virtual cores. When licensing with virtual cores on EC2, the number of active vCPUs provisioned through Optimize CPUs may indicate the number of SQL Server licenses required. For example, if you have a SQL Server Standard build that needs the RAM and network capabilities of an r4.4xlarge but not the 16 vCPUs that come configured on it, you can define the Optimize CPU options in the CLI or API at launch to disable Hyper-Threading and limit the instance to 1, 2, 3, 4, 5, 6, 7, or 8 cores. This can cut licensing costs significantly. The Optimize CPU feature is available for new instance launches in all AWS public Regions. To benefit from Optimize CPUs, you can bring SQL Server licenses to Amazon EC2 and use them on EC2 default tenancy, or on allocated instances on an EC2 Dedicated Host or an EC2 Dedicated Instance. For a list of supported instance types and valid CPU counts, see the instance type documentation.
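
For the r4.4xlarge example above, the launch request might look like this sketch (the AMI ID is a placeholder):

# r4.4xlarge normally exposes 8 cores x 2 threads = 16 vCPUs.
# Limit it to 8 physical cores with Hyper-Threading disabled:
aws ec2 run-instances \
    --image-id ami-0abcd1234example \
    --instance-type r4.4xlarge \
    --cpu-options "CoreCount=8,ThreadsPerCore=1"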

In this post, we’ve covered three AWS scenarios and how they fulfill specific areas of BYOL with CPU oversubscription scenario as well as how Optimize CPU can help cut licensing costs. EC2 Dedicated hosts are generally the first choice in the Microsoft BYOL realm unless the customer is absolutely unwilling to right-size their highly oversubscribed instances. EC2 Bare Metal instances provide the customer the ability to configure all aspects of their hypervisor of choice and maintain any oversubscription that exists in their environment. This is a very popular choice that requires little change and ultimately gets their workloads to the AWS Cloud. The VMware Cloud on AWS option is sold by and provisioned by VMware. For users that are current VMware customers, this service allows them to bridge the gap between their on-premise data center and AWS while providing a seamless migration path to the Cloud using their current toolsets.

Running Hyper-V on Amazon EC2 Bare Metal Instances

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/running-hyper-v-on-amazon-ec2-bare-metal-instances/

AWS recently announced the general availability of Amazon EC2 bare metal instances. This post provides an overview of launching, setting up, and configuring a Hyper-V enabled host, and then launching a guest virtual machine (VM) within Hyper-V running on i3.metal.

Walkthrough

The key elements of this process include the following steps:

  1. Launch a Windows Server 2016 with Hyper-V AMI provided by Amazon.
  2. Configure Hyper-V networking.
  3. Launch a Hyper-V guest VM.

Launch a Windows Server 2016 with Hyper-V AMI provided by Amazon

1. Open the EC2 console.
2. Choose Public Images and search for the Amazon Hyper-V AMIs.
3. Select your preferred Hyper-V AMI, and choose Launch.
4. Follow the Launch wizard process to launch the instance on i3.metal.

The Amazon Hyper-V AMIs have the Hyper-V role pre-enabled. You can also launch a Windows Server 2016 Base AMI on i3.metal and enable the Hyper-V role for your use case.
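
If you prefer the AWS CLI, you can locate the latest Amazon-provided Hyper-V AMI with a name filter. This is a sketch, and the name pattern is an assumption about the published AMI naming:

aws ec2 describe-images \
    --owners amazon \
    --filters "Name=name,Values=Windows_Server-2016-English-Full-Hyper*" \
    --query "sort_by(Images, &CreationDate)[-1].[ImageId,Name]"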

Configure Hyper-V networking

To enable networking for your Hyper-V guests (so that they can connect to other resources in your VPC, or to the internet via your VPC internet gateway), ensure that you have first configured your VPC. For more information, see Creating and Attaching an Internet Gateway.

Hyper-V provides three types of virtual switches for networking:

  • External
  • Internal
  • Private

In this solution, you create an internal virtual switch and use the Hyper-V host as the NAT server for the guest VMs, similar to Microsoft's topic Set up a NAT network.

You can specify your own virtual network range. For this example, use 192.168.0.0/24 as the range for the virtual network inside the Hyper-V host.

  1. Run the following PowerShell command to create the internal virtual switch:
    New-VMSwitch -SwitchName "Hyper-VSwitch" -SwitchType Internal
  2. Determine which network interface is associated with the virtual switch. For this solution, the Get-NetAdapter command shows that the Hyper-V virtual switch has an ifIndex value of 12.
  3. Configure the Hyper-V Virtual Ethernet adapter with the NAT gateway IP address. This IP address is used as default gateway (Router IP) for the guest VMs. The following command sets the IP address 192.168.0.1 with a subnet mask 255.255.255.0 on the Interface (InterfaceIndex 12):
    New-NetIPAddress -IPAddress 192.168.0.1 -PrefixLength 24 -InterfaceIndex 12
  4. Create a NAT virtual network using the range of 192.168.0.0/24:
    New-NetNat -Name MyNATnetwork -InternalIPInterfaceAddressPrefix 192.168.0.0/24

Now the environment is ready for the guest VMs to have outbound communication with other resources through the host NAT. For each VM, assign an IP address with the default gateway (192.168.0.1). This can be done manually within each guest VM. In this solution, you make it easier by enabling a DHCP server within the Hyper-V host to automatically assign IP addresses.

Setting up DHCP server role on the host

  1. Run the following command to add the DHCP role to the host:
    Install-WindowsFeature -Name 'DHCP' -IncludeManagementTools
  2. To configure the DHCP server to bind on the Hyper-V virtual interface, choose Control Panel, Administrative Tools, DHCP.
  3. Select this computer, add or remove bindings, and then select the IP address corresponding to Hyper-V virtual interface (that is, 192.168.0.1).
  4. Configure the DHCP scope and specify a range from the subnet that you determined earlier. In this example, use 192.168.0.10 to 192.168.0.20:
    Add-DhcpServerv4Scope -Name GuestIPRange -StartRange 192.168.0.10 -EndRange 192.168.0.20 -SubnetMask 255.255.255.0 -State Active

    You should be able to see the range in the DHCP console.

  5. For Router, choose the NAT gateway IP address assigned to the Hyper-V network adapter (192.168.0.1).
  6. For DNS server, use the Amazon DNS server, which is the second IP address for the VPC (172.30.0.2).

Launch a Hyper-V guest VM

For this post, follow the new VM wizard to create an Ubuntu 18.04 LTS guest VM. First, download the Ubuntu installation ISO from the Ubuntu website to your Hyper-V host, and store it on a secondary EBS volume that you added as the D: drive.

i3.metal instances use Amazon EBS and instance store volumes with the NVM Express (NVMe) interface. When you stop an i3.metal instance, any data stored on instance store volumes is lost. I recommend storing your guest VM's hard drive (VHD or VHDX) on an EBS volume that is attached to your i3.metal instance. This can be the root volume (C:) or any additional EBS volume attached to the instance. For more information, see What's the difference between instance store and EBS?

After that is complete, follow these steps:

  1. In Hyper-V Manager, choose Actions, New, Virtual Machine.
  2. Follow the wizard with your desired configuration up to the Configure Networking section.
  3. In the Configure Networking step, for Connection, choose the Hyper-VSwitch that you created earlier, and choose Next.
  4. In the Connect Virtual Hard Disk step, enter a name for the virtual hard disk. Use the default location C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\.
  5. Specify the size of the virtual hard disk, and choose Next.
  6. In the Installation Options step, choose the Ubuntu ISO that you downloaded earlier.
  7. Finish the wizard and start the VM, then follow the steps on the Ubuntu installation wizard. As you have already set up DHCP and NAT for the Hyper-V network, the Ubuntu VM automatically gets an IP address from the DHCP scope that you defined earlier.
  8. Confirm the connectivity of the VM to the internet.

Conclusion

You’ve just built a Hyper-V host on an EC2 bare metal instance. Now you’re ready to add more guest VMs and put them to work!

Migrating a multi-tier application from a Microsoft Hyper-V environment using AWS SMS and AWS Migration Hub

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/migrating-a-multi-tier-application-from-a-microsoft-hyper-v-environment-using-aws-sms-and-aws-migration-hub/

Shane Baldacchino is a Solutions Architect at Amazon Web Services

Many customers ask for guidance to migrate end-to-end solutions running in their on-premises data center to AWS. This post provides an overview of moving a common blogging platform, WordPress, running on an on-premises virtualized Microsoft Hyper-V platform, to AWS, including re-pointing the DNS records associated with the website.

AWS Server Migration Service (AWS SMS) is an agentless service that makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. In November 2017, AWS added support for Microsoft’s Hyper-V hypervisor. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations. In this post, I guide you through migrating your multi-tier workloads using both AWS SMS and AWS Migration Hub.

Migration Hub provides a single location to track the progress of application migrations across multiple AWS and partner solutions. In this post, you use AWS SMS as a mechanism to migrate the virtual machines (VMs) and track them via Migration Hub. You can also use other third-party tools in Migration Hub, and choose the migration tools that best fit your needs. Migration Hub allows you to get progress updates across all migrations, identify and troubleshoot any issues, and reduce the overall time and effort spent on your migration projects.

Migration Hub and AWS SMS are both free. You pay only for the cost of the individual migration tools that you use, and any resources being consumed on AWS.

Walkthrough

For this walkthrough, the WordPress blog is currently running as a two-tier stack in a corporate data center. The example environment is multi-tier and polyglot in nature. The frontend uses Windows Server 2016 (running IIS 10 with PHP as an ISAPI extension), and the backend is supported by a MySQL server running on Ubuntu 16.04 LTS. All systems are hosted on a virtualized platform. As the environment consists of multiple servers, you can use Migration Hub to group the servers together as an application and manage the holistic process of migrating the application.

The key elements of this migration process involve the following steps:

  1. Establish your AWS environment.
  2. Replicate your database.
  3. Download the SMS Connector from the AWS Management Console.
  4. Configure AWS SMS and Hyper-V permissions.
  5. Install and configure the SMS Connector appliance.
  6. Configure Hyper-V host permissions.
  7. Import your virtual machine inventory and create a replication job.
  8. Use AWS Migration Hub to track progress.
  9. Launch your Amazon EC2 instance.
  10. Change your DNS records to resolve the WordPress blog to your EC2 instance.

Before you start, ensure that your source systems' OS and hypervisor version are supported by AWS SMS. For more information, see the Server Migration Service FAQ. This post focuses on the Microsoft Hyper-V hypervisor.

Establish your AWS environment

First, establish your AWS environment. If your organization is new to AWS, this may include account or subaccount creation, a new virtual private cloud (VPC), and associated subnets, route tables, internet gateways, and so on. Think of this phase as setting up your software-defined data center. For more information, see Getting Started with Amazon EC2 Linux Instances.

The blog is a two-tier stack, so go with two private subnets. Because you want it to be highly available, use multiple Availability Zones. An Availability Zone resides within an AWS Region. Each Availability Zone is isolated, but the zones within a Region are connected through low-latency links. This allows architects and solution designers to build highly available solutions.

Replicate your database

WordPress uses a MySQL relational database. You could continue to manage MySQL and the EC2 instances associated with maintaining and scaling the database. But for this walkthrough, I am using this opportunity to migrate to an RDS instance of Amazon Aurora, as it is a MySQL-compatible database. Not only is Amazon Aurora a high-performance database engine, but it also frees you up to focus on application development by managing time-consuming database administration tasks, including backups, software patching, monitoring, scaling, and replication.

Use AWS Database Migration Service (AWS DMS) to migrate your MySQL database to Amazon Aurora easily and securely. You can send the results from AWS DMS to Migration Hub. This allows you to create a single pane view of your application migration.

After a database migration instance has been instantiated, configure the source and destination endpoints and create a replication task.

By attaching to the MySQL binlog, you can seed the current data in the database and also capture all future state changes in near-real time. For more information, see Migrating a MySQL-Compatible Database to Amazon Aurora.
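
Creating the task from the AWS CLI might look like the following sketch, where the ARNs are placeholders for your endpoints and replication instance:

aws dms create-replication-task \
    --replication-task-identifier wordpress-to-aurora \
    --source-endpoint-arn arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE \
    --target-endpoint-arn arn:aws:dms:us-east-1:111122223333:endpoint:TARGET \
    --replication-instance-arn arn:aws:dms:us-east-1:111122223333:rep:INSTANCE \
    --migration-type full-load-and-cdc \
    --table-mappings file://table-mappings.json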

Finally, the task shows that you are replicating the current data in your WordPress blog database, as well as future changes, from MySQL into Amazon Aurora.

Download the SMS Connector from the AWS Management Console

Now, use AWS SMS to migrate your IIS/PHP frontend. AWS SMS is delivered as a virtual appliance that can be deployed in your Hyper-V environment.

To download the SMS Connector, log in to the console and choose Server Migration Service, Connectors, SMS Connector setup guide. Download the VHD file for SCVMM/Hyper-V.

Configure SMS

Your hypervisor and AWS SMS need an appropriate user with sufficient privileges to perform migrations.

Launch a new VM in Hyper-V based on the SMS Connector that you downloaded. To configure the connector, connect to it via HTTPS. You can obtain the SMS Connector IP address from within Hyper-V. By default, the SMS Connector uses DHCP to obtain a valid IP address.

Connect to the SMS Connector via HTTPS. In this example, the connector IP address is 10.0.0.88, so in your browser, enter https://10.0.0.88. As the SMS Connector can only work with one hypervisor at a time, you must state the hypervisor with which to interface. For the purposes of this post, the examples use Microsoft Hyper-V.

Configure the connector with the IAM and hypervisor credentials that you created earlier.

After you have entered in both your AWS and Hyper-V credentials and the associated connectivity and authentication checks have passed, you are redirected to the home page of your SMS Connector. The home page provides you a status on connectivity and the health of the SMS Connector.

Configure Hyper-V host permissions

You also must modify your Hyper-V hosts to provide WinRM connectivity. AWS provides a downloadable PowerShell script to configure your Windows environment to support WinRM communications with the SMS Connector. The same script is used for configuring either standalone Hyper-V or SCVMM.

Execute the PowerShell script and follow the prompts. In the following example, Reconfigure Hyper-V not managed by SCVMM (Standalone Hyper-V)… was selected.

Import your virtual machine inventory and create a replication job

You have now configured the SMS Connector and your Microsoft Hyper-V hosts. Switch to the console to import your server catalog to AWS SMS. Within AWS SMS, choose Connectors, Import Server Catalog.

This process can take up to a few minutes and is dependent on the number of machines in your Hyper-V inventory.

Select the server to migrate and choose Create replication job. The console guides you through the process. The time that the initial replication task takes to complete is dependent on the available bandwidth and the size of your VM. After the initial seed replication, network bandwidth is minimized as AWS SMS replicates only incremental changes occurring on the VM.

Use Migration Hub to track progress

You have now successfully started your database migration via AWS DMS, set up your SMS Connector, configured your Microsoft Hyper-V environment, and started a replication job.

You can now track the collective progress of your application migration. To track migration progress, connect AWS DMS and AWS SMS to Migration Hub.

To do this, navigate to Migration Hub in the AWS Management Console. Under Migrate and Tools, connect both services so that the migration status of these services is sent to Migration Hub.

You can then group your servers into an application in Migration Hub and collectively track the progress of your migration. In this example, I created an application, Company Blog, and added in my servers from both AWS SMS and AWS DMS.

The progress updates from linked services are automatically sent to Migration Hub so that you can track tasks in progress. The dashboard reflects any status changes that occur in the linked services. You can see from the following image that one server is complete while another is in progress.

Using Migration Hub, you can view the migration progress of all applications. This allows you to quickly get progress updates across all of your migrations, easily identify and troubleshoot any issues, and reduce the overall time and effort spent on your migration projects.

Launch your EC2 instance

When your replication task is complete, the artifact created by AWS SMS is a custom AMI that you can use to deploy an EC2 instance. Follow the usual process to launch your EC2 instance, using the custom AMI created by AWS SMS, noting that you may need to replace any host-based firewalls with security groups and network ACLs.

When you create an EC2 instance, ensure that you pick the most suitable EC2 instance type and size to match your performance requirements while optimizing for cost.
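
As a sketch, a launch from the AWS CLI might look like the following; the AMI, subnet, and security group IDs are hypothetical placeholders.

# Launch an instance from the AMI that AWS SMS produced.
# All resource IDs below are hypothetical placeholders.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m5.large \
  --subnet-id subnet-0abc1234 \
  --security-group-ids sg-0abc1234 \
  --count 1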

While your new EC2 instance is a replica of your on-premises VM, you should always validate that applications are functioning. How you do this differs on an application-by-application basis; you can use a combination of approaches, such as editing a local hosts file and then testing your application, or connecting via SSH, RDP, or Telnet.

From the RDS console, get your connection string details and update your WordPress configuration file to point to the Amazon Aurora database. Because WordPress expects a MySQL database and Amazon Aurora is MySQL-compatible, this change of database engine is transparent to WordPress.
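
You can also retrieve the cluster endpoint with the AWS CLI; a minimal sketch, assuming a hypothetical cluster identifier, follows.

# Look up the Aurora cluster endpoint to use as the database host
# in your WordPress configuration. The identifier is a hypothetical
# placeholder.
aws rds describe-db-clusters \
  --db-cluster-identifier wordpress-blog-cluster \
  --query 'DBClusters[0].Endpoint' --output text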

Change your DNS records to resolve the WordPress blog to your EC2 instance

You have validated that your WordPress application is running correctly, as you are still receiving changes from your on-premises data center via AWS DMS into your Amazon Aurora database. You can now update your DNS zone file using Amazon Route 53. Amazon Route 53 can be driven by multiple methods: console, SDK, or AWS CLI.

For this walkthrough, use AWS Tools for Windows PowerShell to update the DNS zone file. The example UPSERTs the A record in the zone to resolve to the Amazon EC2 instance created with AWS SMS.
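
The same UPSERT can equivalently be expressed with the AWS CLI; the hosted zone ID, record name, and IP address below are hypothetical placeholders.

# UPSERT the A record so the blog's name resolves to the new EC2 instance.
# Zone ID, record name, and IP address are hypothetical placeholders.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE12345 \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "blog.example.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'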

Based on the TTL of your DNS zone file, end users gradually begin resolving the WordPress blog to AWS.

Summary

You have now successfully migrated your WordPress blog to AWS using AWS migration services, specifically the AWS SMS Hyper-V/SCVMM Connector. Your blog now resolves to AWS. After validation, you are ready to decommission your on-premises resources.

Many architectures can be extended to take advantage of the inherent benefits of AWS with little effort. For example, by using Amazon CloudWatch metrics to drive scaling policies, you can use an Application Load Balancer as your frontend. This removes the single EC2 instance as a single point of failure.

Query for the latest Amazon Linux AMI IDs using AWS Systems Manager Parameter Store

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/query-for-the-latest-amazon-linux-ami-ids-using-aws-systems-manager-parameter-store/

Want a simpler way to query for the latest Amazon Linux AMI? AWS Systems Manager Parameter Store already allows for querying the latest Windows AMI. Now, support has been expanded to include the latest Amazon Linux AMI. Each Amazon Linux AMI now has its own Parameter Store namespace that is public and describable. Upon querying, an AMI namespace returns only its regional ImageID value.

The namespace is made up of two parts:

  • Parameter Store Prefix (tree): /aws/service/ami-amazon-linux-latest/
  • AMI name alias: (example) amzn-ami-hvm-x86_64-gp2

You can determine an Amazon Linux AMI alias by taking the full AMI name property of an Amazon Linux public AMI and removing the date-based version identifier. A list of these AMI name properties can be seen by running one of the following Amazon EC2 queries.

Using the AWS CLI:

aws ec2 describe-images --owners amazon --filters "Name=name,Values=amzn*" --query 'sort_by(Images, &CreationDate)[].Name'

Using PowerShell:

Get-EC2ImageByName -Name amzn* | Sort-Object CreationDate | Select-Object Name

For example, amzn2-ami-hvm-2017.12.0.20171208-x86_64-gp2 without the date-based version becomes amzn2-ami-hvm-x86_64-gp2.

When you add the public Parameter Store prefix namespace to the AMI alias, you have the Parameter Store name of “/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2”.

Each unique AMI namespace always remains the same. You no longer need to pattern match on name filters, and you no longer need to sort through CreationDate AMI properties. As Amazon Linux AMIs are patched and new versions are released to the public, AWS updates the Parameter Store value with the latest ImageID value for each AMI namespace in all supported Regions.

Before this release, finding the latest regional ImageID for an Amazon Linux AMI involved a three-step process: first, using an API call to search the list of available public AMIs; second, filtering the results by a given partial string name; and third, sorting the matches by the CreationDate property and selecting the newest ImageID. Querying AWS Systems Manager greatly simplifies this process.
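
For comparison, here is a sketch of that older three-step process collapsed into a single AWS CLI call, using the amzn2-ami-hvm-x86_64-gp2 name pattern as an example:

# The old way: search the public AMIs, filter by name pattern,
# sort by CreationDate, and take the newest ImageID.
aws ec2 describe-images --owners amazon \
  --filters "Name=name,Values=amzn2-ami-hvm-*-x86_64-gp2" \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' --output text
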
Querying for the latest AMI using public parameters

After you have your target namespace, your query can be created to retrieve the latest Amazon Linux AMI ImageID value. Each Region has an exact replica namespace containing its Region-specific ImageID value.

Using the AWS CLI:

aws ssm get-parameters --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 --region us-east-1 

Using PowerShell:

Get-SSMParameter -Name /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 -region us-east-1

Always launch new instances with the latest ImageID

After you have created the query, you can embed the command as a command substitution into your new instance launches.

Using the AWS CLI:

 aws ec2 run-instances --image-id $(aws ssm get-parameters --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 --query 'Parameters[0].[Value]' --output text) --count 1 --instance-type m4.large

Using PowerShell:

New-EC2Instance -ImageId ((Get-SSMParameterValue -Name /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2).Parameters[0].Value) -InstanceType m4.large -AssociatePublicIp $true

This new instance launch always uses the latest publicly available Amazon Linux AMI for amzn2-ami-hvm-x86_64-gp2. Similar embedding can be used in a number of automation processes, documents, and programming languages.

Display a complete list of all available Public Parameter Amazon Linux AMIs

You can also query for the complete list of available Amazon Linux Parameter Store namespaces.

Using the AWS CLI:

aws ssm get-parameters-by-path --path "/aws/service/ami-amazon-linux-latest" --region us-east-1

Using PowerShell:

Get-SSMParametersByPath -Path "/aws/service/ami-amazon-linux-latest" -region us-east-1

Here’s an example list retrieved from a get-parameters-by-path call:

 /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-ebs
 /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
 /aws/service/ami-amazon-linux-latest/amzn2-ami-minimal-hvm-x86_64-ebs
 /aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-ebs
 /aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-gp2
 /aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-s3
 /aws/service/ami-amazon-linux-latest/amzn-ami-minimal-hvm-x86_64-ebs
 /aws/service/ami-amazon-linux-latest/amzn-ami-minimal-hvm-x86_64-s3

Launching latest Amazon Linux AMI in an AWS CloudFormation stack

AWS CloudFormation also supports Parameter Store. For more information, see Integrating AWS CloudFormation with AWS Systems Manager Parameter Store. Here’s an example of how you would reference the latest Amazon Linux AMI in a CloudFormation template.

# Use public Systems Manager Parameter
Parameters:
  LatestAmiId:
    Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
    Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'

Resources:
  Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      ImageId: !Ref LatestAmiId
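
To try the template, save it locally (for example, as latest-ami.yaml, a hypothetical file name) and create a stack with the AWS CLI:

# Deploy the template; CloudFormation resolves the SSM parameter to the
# latest Amazon Linux 2 AMI ID at stack creation time.
aws cloudformation deploy \
  --template-file latest-ami.yaml \
  --stack-name latest-ami-demo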


About the Author

Arend Castelein is a software development engineer on the Amazon Linux team. Most of his work relates to making Amazon Linux updates available sooner while also reducing the workload for his teammates. Outside of work, he enjoys rock climbing and playing indie games.