All posts by Martin Yip

BYOL and Oversubscription

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/byol-and-oversubscription/

This post is courtesy of Mike Eizensmits, Senior Solutions Architect – AWS

Most AWS customers have a significant Windows Server deployment and are also tied to a Microsoft licensing program. When it comes to Microsoft products such as Windows Server and SQL Server, licensing models can easily dictate cloud infrastructure solutions. AWS provides several options to support Bring Your Own License (BYOL) as well as EC2 License Included models for non-BYOL workloads. Most enterprise customers have Enterprise Agreements (EAs) with Microsoft, which can skew their licensing strategy when considering Azure, on-premises deployments, and other cloud service providers such as AWS. BYOL models may be the only reasonable implementation path when entering a new environment or spinning up new applications. Licensing can constitute a significant investment when running workloads on public cloud. To help customers get the maximum benefit from their existing Microsoft licensing, AWS provides multiple options to utilize BYOL. EC2 Dedicated Hosts and Dedicated Instances expose the physical cores of the server to Windows and applications such as SQL Server, while allowing licenses with or without Software Assurance to be utilized. Bare metal instances as well as VMware Cloud on AWS can further minimize additional licensing costs.

Dedicated Hosts and Instances

An EC2 Dedicated Host is a dedicated server of a specific Instance Class that is allotted to a single customer, referred to as dedicated tenancy. The density of a host is based on the Instance Size as well as the Instance Type defined at creation. If you chose the M5 Instance Type with the m5.large Instance Size, you would have 48 "slots" available on the host to deploy m5.large Instances. If you chose the m5.xlarge, you would have enough capacity to house 24 Instances. Dedicated Hosts have a fixed number of vCPUs and a fixed amount of RAM per Instance Type. To deploy Windows on a Dedicated Host, the customer imports an image (VMDK, OVA, or VHD) using the import-image utility and tags the image as "BYOL" in the command. The BYOL flag dictates whether the image acquires a license from AWS or from the customer's existing licensing framework. In an oversubscribed customer environment, such as an on-premises VMware deployment, the customer has likely oversubscribed the environment with a minimum of 4 vCPUs to 1 physical core (4:1). In these environments, Microsoft licensing typically takes place at the host level using physical cores, rather than against the resources in a provisioned instance (vCPUs). An AWS Dedicated Host is oversubscribed 2 vCPUs to 1 physical core, meaning each core is Hyper-Threaded. While the math can be performed to show the actual value of a vCPU, customers can be reluctant to modify vCPU configurations to reflect the greater value of the AWS vCPU. Simply matching the quantity of vCPUs to their current environment can be much more costly and expansive than right-sizing the instances for cost optimization.
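
For illustration, importing a Windows Server VHD staged in Amazon S3 with the BYOL flag set might look like the following AWS CLI call; the bucket name, object key, and description are placeholders, so treat this as a sketch rather than a copy-and-paste command:

 aws ec2 import-image --description "Windows Server 2016 BYOL" --license-type BYOL --disk-containers "Format=vhd,UserBucket={S3Bucket=my-import-bucket,S3Key=win2016.vhd}"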

Below is a sample configuration of a customer interested in migrating to AWS and utilizing BYOL for Microsoft Windows and SQL Server Enterprise Edition. By licensing the 400 physical cores in their cluster, the customer is able to assign any number of vCPUs to the VMs deployed on the hosts. Enterprise architects have spent a considerable amount of time sizing VMs with the proper resource attributes, so it can be difficult to initiate that process all over again to bring them to the public cloud.

Customer Environment (SQL Server Enterprise Cluster on VMware):

ESXi nodes in cluster: 10
Cores per host (2.6 GHz): 40
Total cores in cluster: 400
vCPUs assigned (240 per host): 2,400
Virtual machines: 470
Oversubscription: 5:1
Value of a vCPU: 520 MHz

In this case, the customer has decided not to right-size their VMs and instead maintain their current vCPU/RAM specifications. The following is a Dedicated Host solution sized to match those vCPU configurations.

AWS Dedicated Host environment (Dedicated Hosts required to match the vCPU count of the VMware environment above):

Dedicated Hosts (r4): 38
Cores per host (2.6 GHz): 36
vCPUs per host: 64
Total vCPUs assigned: 2,432
Oversubscription: 2:1
Value of a vCPU: 1,300 MHz

If the customer is not willing to consider right-sizing their VMs by assigning fewer, yet higher-powered, vCPUs, they face a considerably larger server deployment. Doing the math, the Dedicated Host solution has 2.5 times the per-vCPU power of the VMware solution (1,300 MHz versus 520 MHz). Following that logic, right-sizing the VMs would drop the required vCPU count to roughly 960 vCPUs (2,400 × 520/1,300) to match the current solution. This would reduce the number of required r4 Dedicated Hosts from 38 to 15 (960 vCPUs at 64 vCPUs per host) and slash the SQL Server licensing requirements for the solution.

EC2 Bare Metal instances and VMware Cloud on AWS

AWS has other products that lend themselves to the BYOL/oversubscription story. EC2 Bare Metal instances and VMware Cloud on AWS give the customer full control of instance configuration, just as they have on premises. EC2 Bare Metal instances are built on the Nitro System, a collection of AWS-built hardware offload and security components that offers high-performance networking and storage to EC2 instances. EC2 Bare Metal instances can utilize AWS services such as Amazon Virtual Private Cloud (VPC), Elastic Block Store (EBS), Elastic Load Balancing (ELB), Auto Scaling, and more. The Nitro configuration gives the customer the ability to install a server operating system or hypervisor directly on the hardware. By utilizing their own hypervisor, the customer can define and configure their own instance configurations of RAM, disk, and vCPU. By bypassing the fixed configurations of the EC2 Dedicated Host environment, bare metal configurations enable migrating highly oversubscribed on-premises workloads.

The VMware Cloud on AWS offering provides organizations the ability to extend and migrate their on-premises vSphere environments to AWS's scalable and secure cloud infrastructure. Customers can leverage vSphere, vSAN, NSX, and vCenter technologies to extend their data centers and consume AWS services. vMotion provides the ability to live-migrate VMs to AWS with limited or no downtime. While licensing for migrated VMs does not change at the VM level, it is imperative that the licensing on the new vSphere node be adequate. Since the customer has complete control of the environment, they have the ability to oversubscribe CPUs at any ratio. By licensing applications such as SQL Server at the host level, oversubscription rates become irrelevant to licensing. If a vSphere node has 40 cores, then as long as those 40 cores are licensed, the number of vCPUs assigned is immaterial. In the VMware environment, all operating systems and applications are BYOL, and no licensing is provided by AWS. Ultimately, this solution is free of the oversubscription burden that affects certain AWS dedicated tenancy options.

Optimize CPU

EC2 Instance Types offer multiple fixed vCPU-to-memory configurations to match the customer's workloads and use cases. With Optimize CPUs, customers now have the ability to specify the number of cores that an instance has access to, as well as whether Hyper-Threading is enabled. Hyper-Threading Technology enables multiple threads to run concurrently on a single Intel Xeon CPU core. Each thread is represented as a virtual CPU (vCPU) on the instance. Controlling the thread and core count is significant for Microsoft SQL Server, as it is typically more RAM constrained than compute bound. With Optimize CPUs, you can potentially reduce the number of SQL Server licenses required by specifying a custom number of vCPUs. SQL Server on Amazon EC2 is often licensed per virtual core, and EC2 vCPUs are the equivalent of virtual cores. When licensing with virtual cores on EC2, the number of active vCPUs provisioned through Optimize CPUs may determine the number of SQL Server licenses required. For example, if you have a SQL Server Standard build that needs the RAM and network capabilities of an r4.4xlarge but not the 16 vCPUs that come configured on it, you can define the Optimize CPUs options in the CLI or API at launch to disable Hyper-Threading and limit the instance to anywhere from 1 to 8 cores. This can cut licensing costs substantially. The Optimize CPUs feature is available for new instance launches in all AWS public Regions. To benefit from Optimize CPUs, you can bring SQL Server licenses to Amazon EC2 and use them on EC2 default tenancy, or on allocated instances on an EC2 Dedicated Host or an EC2 Dedicated Instance. For a list of supported instance types and valid CPU counts, see the instance type documentation.
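
As a sketch of the r4.4xlarge example above, the following AWS CLI launch disables Hyper-Threading and limits the instance to 4 physical cores (4 vCPUs); the AMI ID is a placeholder:

 aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type r4.4xlarge --cpu-options "CoreCount=4,ThreadsPerCore=1" --count 1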

In this post, we've covered three AWS scenarios and how they fulfill specific areas of the BYOL-with-CPU-oversubscription scenario, as well as how Optimize CPUs can help cut licensing costs. EC2 Dedicated Hosts are generally the first choice in the Microsoft BYOL realm unless the customer is absolutely unwilling to right-size their highly oversubscribed instances. EC2 Bare Metal instances provide the customer the ability to configure all aspects of their hypervisor of choice and maintain any oversubscription that exists in their environment. This is a very popular choice that requires little change and ultimately gets their workloads to the AWS Cloud. The VMware Cloud on AWS option is sold and provisioned by VMware. For current VMware customers, this service bridges the gap between their on-premises data center and AWS while providing a seamless migration path to the cloud using their current toolsets.

Running Hyper-V on Amazon EC2 Bare Metal Instances

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/running-hyper-v-on-amazon-ec2-bare-metal-instances/

AWS recently announced the general availability of Amazon EC2 bare metal instances. This post provides an overview of launching, setting up, and configuring a Hyper-V enabled host, and then launching a guest virtual machine (VM) within Hyper-V running on i3.metal.

Walkthrough

The key elements of this process include the following steps:

  1. Launch a Windows Server 2016 with Hyper-V AMI provided by Amazon.
  2. Configure Hyper-V networking.
  3. Launch a Hyper-V guest VM.

Launch a Windows Server 2016 with Hyper-V AMI provided by Amazon

1. Open the EC2 console.
2. Choose Public Images and search for the Amazon Hyper-V AMIs.
3. Select your preferred Hyper-V AMI, and choose Launch.
4. Follow the Launch wizard process to launch the instance on i3.metal.

The Amazon Hyper-V AMIs have the Hyper-V role pre-enabled. You can also launch a Windows Server 2016 Base AMI on i3.metal and enable the Hyper-V role for your use case.

Configure Hyper-V networking

To enable networking for your Hyper-V guests (so that they have connectivity to other resources in your VPC, or to the internet via your VPC internet gateway), ensure that you have first configured your VPC. For more information, see Creating and Attaching an Internet Gateway.

Hyper-V provides three types of virtual switches for networking:

  • External
  • Internal
  • Private

In this solution, you are creating an internal virtual switch and using the Hyper-V host as the NAT server for the guest VMs, similar to Microsoft’s topic Set up a NAT network.

You can specify your own virtual network range. For this example, use 192.168.0.0/24 as the range for the virtual network inside the Hyper-V host.

  1. Run the following PowerShell command to create the internal virtual switch:
    New-VMSwitch -SwitchName "Hyper-VSwitch" -SwitchType Internal
  2. Determine which network interface is associated with the virtual switch. For this solution, the Get-NetAdapter command shows that the Hyper-V virtual switch has an ifIndex value of 12.
  3. Configure the Hyper-V virtual Ethernet adapter with the NAT gateway IP address. This IP address is used as the default gateway (router IP) for the guest VMs. The following command sets the IP address 192.168.0.1 with a subnet mask of 255.255.255.0 on the interface (InterfaceIndex 12):
    New-NetIPAddress -IPAddress 192.168.0.1 -PrefixLength 24 -InterfaceIndex 12
  4. Create a NAT virtual network using the range of 192.168.0.0/24:
    New-NetNat -Name MyNATnetwork -InternalIPInterfaceAddressPrefix 192.168.0.0/24

Now the environment is ready for the guest VMs to have outbound communication with other resources through the host NAT. For each VM, assign an IP address with the default gateway (192.168.0.1). This can be done manually within each guest VM. In this solution, you make it easier by enabling a DHCP server within the Hyper-V host to automatically assign IP addresses.

Setting up DHCP server role on the host

  1. Run the following command to add the DHCP role to the host:
    Install-WindowsFeature -Name 'DHCP' -IncludeManagementTools
  2. To configure the DHCP server to bind on the Hyper-V virtual interface, choose Control Panel, Administrative Tools, DHCP.
  3. Select this computer, add or remove bindings, and then select the IP address corresponding to the Hyper-V virtual interface (that is, 192.168.0.1).
  4. Configure the DHCP scope and specify a range from the subnet that you determined earlier. In this example, use 192.168.0.10 to 192.168.0.20.
    Add-DhcpServerv4Scope -Name GuestIPRange -StartRange 192.168.0.10 -EndRange 192.168.0.20 -SubnetMask 255.255.255.0 -State Active

    You should be able to see the range in the DHCP console.

  5. For Router, choose the NAT gateway IP address assigned to the Hyper-V network adapter (192.168.0.1).
  6. For DNS server, use the Amazon DNS, which is the second IP address for the VPC (172.30.0.2).

Launch a Hyper-V guest VM

For this post, follow the new VM wizard to create an Ubuntu 18.04 LTS guest VM. First, download the Ubuntu installation ISO from the Ubuntu website to your Hyper-V host, and store it on a secondary EBS volume that you added as the D: drive.

The i3.metal instance type uses Amazon EBS and instance store volumes with the NVM Express (NVMe) interface. When you stop an i3.metal instance, any data stored on instance store volumes is lost. I recommend storing your guest VM's hard drive (VHD or VHDX) on an EBS volume that is attached to your i3.metal instance. This can be the root volume (C:) or any additional EBS volume attached to the instance. For more information, see What's the difference between instance store and EBS?
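
If you have not yet added that secondary EBS volume, a minimal sketch with the AWS CLI follows; the Availability Zone, size, and volume and instance IDs are placeholders, and you still need to bring the disk online and format it as D: from Disk Management inside Windows:

 # Create a gp2 volume in the same Availability Zone as the i3.metal instance
 aws ec2 create-volume --availability-zone us-east-1a --size 100 --volume-type gp2
 # Attach the new volume to the instance
 aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device xvdf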

After that is complete, follow these steps:

  1. In Hyper-V Manager, choose Actions, New, Virtual Machine.
  2. Follow the wizard with your desired configuration up to the Configure Networking section.
  3. In the Configure Networking step, for Connection, choose the internal switch that you created earlier (Hyper-VSwitch), and choose Next.
  4. In the Connect Virtual Hard Disk step, enter a name for the virtual hard disk. Use the default location C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\.
  5. Specify the size of the virtual hard disk, and choose Next.
  6. In the Installation Options step, choose the Ubuntu ISO that you downloaded earlier.
  7. Finish the wizard and start the VM, then follow the steps on the Ubuntu installation wizard. As you have already set up DHCP and NAT for the Hyper-V network, the Ubuntu VM automatically gets an IP address from the DHCP scope that you defined earlier.
  8. Confirm the connectivity of the VM to the internet.

Conclusion

You’ve just built a Hyper-V host on an EC2 bare metal instance. Now you’re ready to add more guest VMs and put them to work!

Migrating a multi-tier application from a Microsoft Hyper-V environment using AWS SMS and AWS Migration Hub

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/migrating-a-multi-tier-application-from-a-microsoft-hyper-v-environment-using-aws-sms-and-aws-migration-hub/

Shane Baldacchino is a Solutions Architect at Amazon Web Services

Many customers ask for guidance to migrate end-to-end solutions running in their on-premises data center to AWS. This post provides an overview of moving a common blogging platform, WordPress, running on an on-premises virtualized Microsoft Hyper-V platform to AWS, including re-pointing the DNS records associated with the website.

AWS Server Migration Service (AWS SMS) is an agentless service that makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. In November 2017, AWS added support for Microsoft’s Hyper-V hypervisor. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations. In this post, I guide you through migrating your multi-tier workloads using both AWS SMS and AWS Migration Hub.

Migration Hub provides a single location to track the progress of application migrations across multiple AWS and partner solutions. In this post, you use AWS SMS as a mechanism to migrate the virtual machines (VMs) and track them via Migration Hub. You can also use other third-party tools in Migration Hub, and choose the migration tools that best fit your needs. Migration Hub allows you to get progress updates across all migrations, identify and troubleshoot any issues, and reduce the overall time and effort spent on your migration projects.

Migration Hub and AWS SMS are both free. You pay only for the cost of the individual migration tools that you use, and any resources being consumed on AWS.

Walkthrough

For this walkthrough, the WordPress blog is currently running as a two-tier stack in a corporate data center. The example environment is multi-tier and polyglot in nature. The frontend uses Windows Server 2016 (running IIS 10 with PHP as an ISAPI extension) and the backend is supported by a MySQL server running on Ubuntu 16.04 LTS. All systems are hosted on a virtualized platform. As the environment consists of multiple servers, you can use Migration Hub to group the servers together as an application and manage the holistic process of migrating the application.

The key elements of this migration process involve the following steps:

  1. Establish your AWS environment.
  2. Replicate your database.
  3. Download the SMS Connector from the AWS Management Console.
  4. Configure AWS SMS and Hyper-V permissions.
  5. Install and configure the SMS Connector appliance.
  6. Configure Hyper-V host permissions.
  7. Import your virtual machine inventory and create a replication job.
  8. Use AWS Migration Hub to track progress.
  9. Launch your Amazon EC2 instance.
  10. Change your DNS records to resolve the WordPress blog to your EC2 instance.

Before you start, ensure that your source system's OS and hypervisor versions are supported by AWS SMS. For more information, see the Server Migration Service FAQ. This post focuses on the Microsoft Hyper-V hypervisor.

Establish your AWS environment

First, establish your AWS environment. If your organization is new to AWS, this may include account or subaccount creation, a new virtual private cloud (VPC), and associated subnets, route tables, internet gateways, and so on. Think of this phase as setting up your software-defined data center. For more information, see Getting Started with Amazon EC2 Linux Instances.

The blog is a two-tier stack, so go with two private subnets. Because you want it to be highly available, use multiple Availability Zones. An Availability Zone resides within an AWS Region. Each Availability Zone is isolated, but the zones within a Region are connected through low-latency links. This allows architects and solution designers to build highly available solutions.
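
As a minimal sketch of that environment setup, assuming a new VPC with two private subnets across two Availability Zones (the CIDR ranges, Region, and resource IDs are placeholders):

 # Create the VPC
 aws ec2 create-vpc --cidr-block 10.0.0.0/16
 # Create one private subnet in each of two Availability Zones
 aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
 aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.2.0/24 --availability-zone us-east-1b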

Replicate your database

WordPress uses a MySQL relational database. You could continue to manage MySQL yourself on EC2 instances, along with the associated work of maintaining and scaling the database. But for this walkthrough, I am using this opportunity to migrate to Amazon Aurora on Amazon RDS, as it is a MySQL-compatible database. Not only is Amazon Aurora a high-performance database engine, but it also frees you up to focus on application development by managing time-consuming database administration tasks, including backups, software patching, monitoring, scaling, and replication.

Use AWS Database Migration Service (AWS DMS) to migrate your MySQL database to Amazon Aurora easily and securely. You can send the results from AWS DMS to Migration Hub. This allows you to create a single pane view of your application migration.

After a database migration instance has been instantiated, configure the source and destination endpoints and create a replication task.

By attaching to the MySQL binlog, you can seed the current data in the database and also capture all future state changes in near-real time. For more information, see Migrating a MySQL-Compatible Database to Amazon Aurora.
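
A sketch of that replication task with the AWS CLI is shown below; the task name, ARNs, and table-mappings file are placeholders, and full-load-and-cdc corresponds to seeding the existing data and then capturing ongoing changes:

 aws dms create-replication-task --replication-task-identifier wordpress-to-aurora --source-endpoint-arn arn:aws:dms:us-east-1:111122223333:endpoint:SOURCEEXAMPLE --target-endpoint-arn arn:aws:dms:us-east-1:111122223333:endpoint:TARGETEXAMPLE --replication-instance-arn arn:aws:dms:us-east-1:111122223333:rep:INSTANCEEXAMPLE --migration-type full-load-and-cdc --table-mappings file://table-mappings.json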

Finally, the task shows that you are replicating current data in your WordPress blog database and future changes from MySQL into Amazon Aurora.

Download the SMS Connector from the AWS Management Console

Now, use AWS SMS to migrate your IIS/PHP frontend. AWS SMS is delivered as a virtual appliance that can be deployed in your Hyper-V environment.

To download the SMS Connector, log in to the console and choose Server Migration Service, Connectors, SMS Connector setup guide. Download the VHD file for SCVMM/Hyper-V.

Configure SMS

Your hypervisor and AWS SMS need an appropriate user with sufficient privileges to perform migrations.
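
The exact IAM setup depends on your environment, but a minimal sketch of creating the connector's AWS credentials might look like the following; the user name is a placeholder, and verify the ServerMigrationConnector managed policy name against the current AWS SMS documentation:

 # Create an IAM user for the SMS Connector
 aws iam create-user --user-name sms-connector
 # Attach the SMS Connector managed policy (name assumed from the SMS setup guide)
 aws iam attach-user-policy --user-name sms-connector --policy-arn arn:aws:iam::aws:policy/ServerMigrationConnector
 # Generate the access key that you enter into the connector console
 aws iam create-access-key --user-name sms-connector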

Launch a new VM in Hyper-V based on the SMS Connector that you downloaded. To configure the connector, connect to it via HTTPS. You can obtain the SMS Connector IP address from within Hyper-V. By default, the SMS Connector uses DHCP to obtain a valid IP address.

In this example, the connector IP address is 10.0.0.88, so connect to the SMS Connector by entering https://10.0.0.88 in your browser. As the SMS Connector can only work with one hypervisor at a time, you must state the hypervisor with which to interface. For the purposes of this post, the examples use Microsoft Hyper-V.

Configure the connector with the IAM and hypervisor credentials that you created earlier.

After you have entered both your AWS and Hyper-V credentials and the associated connectivity and authentication checks have passed, you are redirected to the home page of your SMS Connector. The home page provides you with a status on connectivity and the health of the SMS Connector.

Configure Hyper-V host permissions

You also must modify your Hyper-V hosts to provide WinRM connectivity. AWS provides a downloadable PowerShell script to configure your Windows environment to support WinRM communications with the SMS Connector. The same script is used for configuring either standalone Hyper-V or SCVMM.

Execute the PowerShell script and follow the prompts. In this example, the option Reconfigure Hyper-V not managed by SCVMM (Standalone Hyper-V)… was selected.

Import your virtual machine inventory and create a replication job

You have now configured the SMS Connector and your Microsoft Hyper-V hosts. Switch to the console to import your server catalog to AWS SMS. Within AWS SMS, choose Connectors, Import Server Catalog.

This process can take up to a few minutes and is dependent on the number of machines in your Hyper-V inventory.

Select the server to migrate and choose Create replication job. The console guides you through the process. The time that the initial replication task takes to complete is dependent on the available bandwidth and the size of your VM. After the initial seed replication, network bandwidth is minimized as AWS SMS replicates only incremental changes occurring on the VM.
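
If you prefer to script these steps, the equivalent AWS CLI calls look roughly like the following; the server ID, schedule, license type, and role name are illustrative values, not output from this walkthrough:

 # Import the Hyper-V inventory discovered by the connector
 aws sms import-server-catalog
 # List the imported servers and note the serverId to migrate
 aws sms get-servers
 # Create an incremental replication job for that server
 aws sms create-replication-job --server-id s-0123456789abcdef0 --seed-replication-time 2018-10-25T02:00:00 --frequency 12 --license-type BYOL --role-name sms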

Use Migration Hub to track progress

You have now successfully started your database migration via AWS DMS, set up your SMS Connector, configured your Microsoft Hyper-V environment, and started a replication job.

You can now track the collective progress of your application migration. To track migration progress, connect AWS DMS and AWS SMS to Migration Hub.

To do this, navigate to Migration Hub in the AWS Management Console. Under Migrate and Tools, connect both services so that the migration status of these services is sent to Migration Hub.

You can then group your servers into an application in Migration Hub and collectively track the progress of your migration. In this example, I created an application, Company Blog, and added in my servers from both AWS SMS and AWS DMS.

The progress updates from linked services are automatically sent to Migration Hub so that you can track tasks in progress. The dashboard reflects any status changes that occur in the linked services. In this example, one server is complete while another is still in progress.

Using Migration Hub, you can view the migration progress of all applications. This allows you to quickly get progress updates across all of your migrations, easily identify and troubleshoot any issues, and reduce the overall time and effort spent on your migration projects.

Launch your EC2 instance

When your replication task is complete, the artifact created by AWS SMS is a custom AMI that you can use to deploy an EC2 instance. Follow the usual process to launch your EC2 instance, using the custom AMI created by AWS SMS, noting that you may need to replace any host-based firewalls with security groups and NACLs.
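
For reference, a launch from that custom AMI might look like the following; the AMI, subnet, and security group IDs are placeholders:

 aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type m5.large --subnet-id subnet-0123456789abcdef0 --security-group-ids sg-0123456789abcdef0 --count 1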

When you create an EC2 instance, ensure that you pick the most suitable EC2 instance type and size to match your performance requirements while optimizing for cost.

While your new EC2 instance is a replica of your on-premises VM, you should always validate that applications are functioning. How you do this differs on an application-by-application basis. You can use a combination of approaches, such as editing a local hosts file to test your application, or connecting over SSH, RDP, or Telnet.

From the RDS console, get your connection string details and update your WordPress configuration file to point to the Amazon Aurora database. As WordPress is expecting a MySQL database and Amazon Aurora is MySQL-compatible, this change of database engine is transparent to WordPress.

Change your DNS records to resolve the WordPress blog to your EC2 instance

You have validated that your WordPress application is running correctly, as you are still receiving changes from your on-premises data center via AWS DMS into your Amazon Aurora database. You can now update your DNS zone file using Amazon Route 53. Amazon Route 53 can be driven by multiple methods: console, SDK, or AWS CLI.

For this walkthrough, use Windows PowerShell for AWS to update the DNS zone file, UPSERTING the A record in the zone so that it resolves to the Amazon EC2 instance created with AWS SMS.
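
If you would rather use the AWS CLI mentioned earlier, an equivalent UPSERT looks roughly like the following; the hosted zone ID, record name, TTL, and IP address are placeholders:

 aws route53 change-resource-record-sets --hosted-zone-id Z0123456789EXAMPLE --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"blog.example.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'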

Based on the TTL of your DNS zone file, end users gradually begin resolving the WordPress blog to AWS.

Summary

You have now successfully migrated your WordPress blog to AWS using AWS migration services, specifically the AWS SMS Hyper-V/SCVMM Connector. Your blog now resolves to AWS. After validation, you are ready to decommission your on-premises resources.

Many architectures can be extended to take advantage of the inherent benefits of AWS with little effort. For example, by placing an Application Load Balancer in front of the web tier and using Amazon CloudWatch metrics to drive scaling policies, you remove the single point of failure of a single EC2 instance.

Query for the latest Amazon Linux AMI IDs using AWS Systems Manager Parameter Store

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/query-for-the-latest-amazon-linux-ami-ids-using-aws-systems-manager-parameter-store/

Want a simpler way to query for the latest Amazon Linux AMI? AWS Systems Manager Parameter Store already allows for querying the latest Windows AMI. Now, support has been expanded to include the latest Amazon Linux AMI. Each Amazon Linux AMI now has its own Parameter Store namespace that is public and describable. Upon querying, an AMI namespace returns only its regional ImageID value.

The namespace is made up of two parts:

  • Parameter Store Prefix (tree): /aws/service/ami-amazon-linux-latest/
  • AMI name alias: (example) amzn-ami-hvm-x86_64-gp2

You can determine an Amazon Linux AMI alias by taking the full AMI name property of an Amazon Linux public AMI and removing the date-based version identifier. A list of these AMI name properties can be seen by running one of the following Amazon EC2 queries.

Using the AWS CLI:

aws ec2 describe-images --owners amazon --filters "Name=name,Values=amzn*" --query 'sort_by(Images, &CreationDate)[].Name'

Using PowerShell:

Get-EC2ImageByName -Name amzn* | Sort-Object CreationDate | Select-Object Name

For example, amzn2-ami-hvm-2017.12.0.20171208-x86_64-gp2 without the date-based version becomes amzn2-ami-hvm-x86_64-gp2.

When you add the public Parameter Store prefix namespace to the AMI alias, you have the Parameter Store name of “/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2”.

Each unique AMI namespace always remains the same. You no longer need to pattern match on name filters, and you no longer need to sort through CreationDate AMI properties. As Amazon Linux AMIs are patched and new versions are released to the public, AWS updates the Parameter Store value with the latest ImageID value for each AMI namespace in all supported Regions.

Before this release, finding the latest regional ImageID for an Amazon Linux AMI involved a three-step process. First, using an API call to search the list of available public AMIs. Second, filtering the results by a given partial string name. Third, sorting the matches by CreationDate property and selecting the newest ImageID. Querying AWS Systems Manager greatly simplifies this process.

Querying for the latest AMI using public parameters

After you have your target namespace, your query can be created to retrieve the latest Amazon Linux AMI ImageID value. Each Region has an exact replica namespace containing its Region-specific ImageID value.

Using the AWS CLI:

aws ssm get-parameters --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 --region us-east-1 

Using PowerShell:

Get-SSMParameter -Name /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 -region us-east-1

Always launch new instances with the latest ImageID

After you have created the query, you can embed the command as a command substitution into your new instance launches.

Using the AWS CLI:

 aws ec2 run-instances --image-id $(aws ssm get-parameters --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 --query 'Parameters[0].[Value]' --output text) --count 1 --instance-type m4.large

Using PowerShell:

New-EC2Instance -ImageId ((Get-SSMParameterValue -Name /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2).Parameters[0].Value) -InstanceType m4.large -AssociatePublicIp $true

This new instance launch always results in the latest publicly available Amazon Linux AMI for amzn2-ami-hvm-x86_64-gp2. Similar embedding can be used in a number of automation processes, docs, and coding languages.

Display a complete list of all available Public Parameter Amazon Linux AMIs

You can also query for the complete list of AWS Amazon Linux Parameter Store namespaces available.

Using the AWS CLI:

aws ssm get-parameters-by-path --path "/aws/service/ami-amazon-linux-latest" --region us-east-1

Using PowerShell:

Get-SSMParametersByPath -Path "/aws/service/ami-amazon-linux-latest" -region us-east-1

Here’s an example list retrieved from a get-parameters-by-path call:

 /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-ebs
 /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
 /aws/service/ami-amazon-linux-latest/amzn2-ami-minimal-hvm-x86_64-ebs
 /aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-ebs
 /aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-gp2
 /aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-s3
 /aws/service/ami-amazon-linux-latest/amzn-ami-minimal-hvm-x86_64-ebs
 /aws/service/ami-amazon-linux-latest/amzn-ami-minimal-hvm-x86_64-s3

Launching the latest Amazon Linux AMI in an AWS CloudFormation stack

AWS CloudFormation also supports Parameter Store. For more information, see Integrating AWS CloudFormation with AWS Systems Manager Parameter Store. Here’s an example of how you would reference the latest Amazon Linux AMI in a CloudFormation template.

 # Use public Systems Manager Parameter
 Parameters:
   LatestAmiId:
     Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
     Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'

 Resources:
   Instance:
     Type: 'AWS::EC2::Instance'
     Properties:
       ImageId: !Ref LatestAmiId

 

About the Author

Arend Castelein is a software development engineer on the Amazon Linux team. Most of his work relates to making Amazon Linux updates available sooner while also reducing the workload for his teammates. Outside of work, he enjoys rock climbing and playing indie games.