Tag Archives: Amazon EC2

New – Amazon EC2 Hpc6a Instance Optimized for High Performance Computing

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-ec2-hpc6a-instance-optimized-for-high-performance-computing/

High Performance Computing (HPC) allows scientists and engineers to solve complex, compute-intensive problems such as computational fluid dynamics (CFD), weather forecasting, and genomics. HPC applications typically require instances with high memory bandwidth, a low-latency and high-bandwidth network interconnect, and access to a fast parallel file system.

Many customers have turned to AWS to run their HPC workloads. For example, Descartes Labs used AWS to power a TOP500 LINPACK benchmark run (the TOP500 list ranks the most powerful commercially available computer systems) that delivered 1.93 PFLOPS, landing at position 136 on the TOP500 list in June 2019. That run made use of 41,472 cores on a cluster of Amazon EC2 C5 instances. Last year, Descartes Labs ran the LINPACK benchmark again and placed within the top 40 on the June 2021 TOP500 list with 172,692 cores on a cluster of EC2 instances, a 417 percent performance increase in just two years.

AWS enables you to increase the speed of research and reduce time-to-results by running HPC in the cloud and scaling to tens of thousands of parallel tasks that wouldn’t be practical in most on-premises environments. AWS helps you reduce costs by providing CPU, GPU, and FPGA instances on demand; Elastic Fabric Adapter (EFA), an EC2 network device that improves throughput and scaling for tightly coupled workloads; and AWS ParallelCluster, an open-source cluster management tool that makes it easy for you to deploy and manage HPC clusters on AWS.

Announcing EC2 Hpc6a Instances for HPC Workloads
Customers today across various industries use compute-optimized EFA-enabled Amazon EC2 instances (for example, C5n, R5n, M5n, and M5zn) to maximize the performance of a variety of HPC workloads, but as these workloads scale to tens of thousands of cores, cost-efficiency becomes increasingly important. We have found that customers are not only looking to optimize performance for their HPC workloads but want to optimize costs as well.

As we pre-announced in November 2021, Hpc6a, a new HPC-optimized EC2 instance, is generally available beginning today. This instance delivers 100 Gbps networking through EFA, with 96 cores of third-generation AMD EPYC™ (Milan) processors and 384 GiB of RAM, and offers up to 65 percent better price-performance over comparable x86-based compute-optimized instances.

You can launch Hpc6a instances today in the US East (Ohio) and AWS GovCloud (US-West) Regions with On-Demand and Dedicated Host purchase options, or as part of a Savings Plan. Here are the detailed specs:

Instance Name  | CPUs* | RAM     | EFA Network Bandwidth | Attached Storage
hpc6a.48xlarge | 96    | 384 GiB | Up to 100 Gbps        | EBS Only

*Hpc6a instances have simultaneous multi-threading disabled to optimize for HPC codes. This means that unlike other EC2 instances, Hpc6a vCPUs are physical cores, not threads.

To enable predictable thread performance and efficient scheduling for HPC workloads, simultaneous multi-threading is disabled. Thanks to the AWS Nitro System, no cores are held back for the hypervisor, making all cores available to your code.

Hpc6a instances introduce a number of targeted features to deliver cost and performance optimizations for customers running tightly coupled HPC workloads that rely on high levels of inter-instance communications. These instances enable EFA networking bandwidth of 100 Gbps and are designed to efficiently scale large tightly coupled clusters within a single Availability Zone.

We hear from many of our engineering customers, such as those in the automotive sector, that they want to reduce the need for physical testing and move toward an increasingly virtual, simulation-based product design process that delivers results faster and at a lower cost.

According to our benchmarking results for Siemens Simcenter STAR-CCM+ automotive CFD simulation, when Hpc6a scales up to 400 nodes (approximately 40,000 cores), with the help of EFA networking it is able to maintain approximately 100 percent scaling efficiency. Hpc6a instances also show 70 percent lower cost compared to C5n instances, which means companies can deliver new designs faster and at a lower cost.

You can use the Hpc6a instance with AMD EPYC third-generation (Milan) processors to run your largest and most complex HPC simulations on EC2 and optimize for cost and performance. Customers can also use the new Hpc6a instances with AWS Batch and AWS ParallelCluster to simplify workload submission and cluster creation.
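For illustration, here is a hedged sketch of launching a pair of EFA-enabled Hpc6a instances into a cluster placement group with the AWS CLI; the AMI, subnet, security group, and placement group identifiers are placeholders:

# Create a cluster placement group for low-latency node placement
aws ec2 create-placement-group \
    --group-name my-hpc-pg \
    --strategy cluster

# Launch two EFA-enabled instances into the placement group
aws ec2 run-instances \
    --instance-type hpc6a.48xlarge \
    --image-id ami-EXAMPLE \
    --count 2 \
    --key-name my_key \
    --placement "GroupName=my-hpc-pg" \
    --network-interfaces "DeviceIndex=0,SubnetId=subnet-EXAMPLE,Groups=sg-EXAMPLE,InterfaceType=efa"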

To learn more, visit our Hpc6a instance page and get in touch with our HPC team through AWS re:Post for EC2 or your usual AWS Support contacts.

Channy

Efficiently Scaling kOps clusters with Amazon EC2 Spot Instances

Post Syndicated from Pranaya Anshu original https://aws.amazon.com/blogs/compute/efficiently-scaling-kops-clusters-with-amazon-ec2-spot-instances/

This post is written by Carlos Manzanedo Rueda, WW SA Leader for EC2 Spot, and Brandon Wagner, Senior Software Development Engineer for EC2.

This post focuses on how you can leverage recently released tools to optimize your usage of Amazon EC2 Spot Instances on Kubernetes Operations (kOps) clusters. Spot Instances let you utilize unused capacity in the AWS cloud for up to 90% off compared to On-Demand prices, and they are a great fit for fault-tolerant, containerized applications. kOps is an open source project providing a cohesive toolset for provisioning, operating, and deleting Kubernetes clusters in the cloud.

Even with customers such as Snap Inc., Babylon Health, and Fidelity Investments telling us how Amazon Elastic Kubernetes Service (EKS) is essential for running their containerized workloads, we appreciate that there are scenarios where using Amazon EC2 instances and kOps is a viable alternative. At AWS, we understand that “one size does not fit all.” While we encourage Kubernetes users to contribute their feedback to the AWS container roadmap so that we can improve our services, we also would like to reduce the heavy lifting and simplify Spot best practices integration in kOps clusters.

To simplify the integration of Spot Instances in kOps clusters, in January of 2021 we introduced a new kops toolbox command: kops toolbox instance-selector. The utility is distributed as part of the standard kOps distribution, and it simplifies the creation of kOps Instance Groups by configuring them in full adherence to Spot Instances best practices.

Handling Spot interruption notifications in Kubernetes

Let’s quickly recap Spot best practices. Spot Instances perform exactly like any other EC2 instances, except that in exchange for their discounted price, they can be interrupted with a two-minute warning when EC2 must reclaim capacity. Applications running on Spot can typically recover from transient interruptions by simply starting a new instance. Spot best practices involve measures such as diversifying into as many Spot capacity pools as possible, choosing the right Spot allocation strategy, and utilizing Spot integrated services, which handle the Spot Instance lifecycle for you. This blog post on handling Spot interruptions dives deeper into EC2 Spot best practices.

In Kubernetes, to handle Spot termination and rebalance recommendation events (both explained in this blog post on proactively managing Spot Instance lifecycle), we utilize the AWS open-source project AWS Node Termination Handler. We will be deploying the Node Termination Handler as a kOps managed addon, which simplifies its setup and configuration.

The Node Termination Handler ensures that the Kubernetes control plane responds appropriately to events that can make EC2 instances unavailable. It can be operated in two different modes: Instance Metadata Service (IMDS), deployed as a DaemonSet, or Queue Processor, deployed as a Deployment. We recommend running it in Queue Processor mode. The Queue Processor controller continuously monitors an Amazon Simple Queue Service (SQS) queue for events received from Amazon EventBridge that can lead to node termination in your cluster. When one of these events is received, the Node Termination Handler notifies the Kubernetes control plane to cordon and drain the node that is about to be interrupted. Then, the kubelet sends a SIGTERM signal to the Pods and containers running on the node. This lets your application proceed with a graceful termination – one of the recommended best practices of a Twelve-Factor App.

The kOps managed addon will let you configure the Node Termination Handler within your kOps cluster spec and, more importantly, manage provisioning the necessary infrastructure for you.
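For illustration only, the interruption-event plumbing the addon provisions is conceptually similar to the hedged sketch below; the rule name, queue name, account ID, and Region are placeholders, and the addon creates the real resources for you:

# Route Spot interruption warnings from EventBridge to an SQS queue
aws events put-rule \
    --name nth-spot-interruption-sketch \
    --event-pattern '{"source":["aws.ec2"],"detail-type":["EC2 Spot Instance Interruption Warning"]}'

aws events put-targets \
    --rule nth-spot-interruption-sketch \
    --targets 'Id=1,Arn=arn:aws:sqs:us-east-1:111122223333:nth-queue-sketch'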

To deploy the AWS Node Termination Handler, we start by editing our cluster spec:

kops edit cluster --name ${KOPS_CLUSTER_NAME}

We append the nodeTerminationHandler configuration to the spec node:

spec:
  nodeTerminationHandler:
    enabled: true
    enableSQSTerminationDraining: true
    managedASGTag: "aws-node-termination-handler/managed"

Finally, we deploy the changes made to our cluster configuration:

kops update cluster --name ${KOPS_CLUSTER_NAME} --state ${KOPS_STATE_STORE} --yes --admin

${KOPS_CLUSTER_NAME} refers to the environment variable containing the cluster name, and ${KOPS_STATE_STORE} indicates the Amazon Simple Storage Service (S3) bucket – or kOps State Store – where kOps configuration is stored.

To check that your Node Termination Handler deployment was successful, you can execute:

kubectl get deployment aws-node-termination-handler -n kube-system

Instance Flexibility and Diversification

Diversification and selection of multiple instance types is essential to acquire and maintain Spot capacity, as well as to successfully replace interrupted instances with others from different pools. When running kOps on AWS, this is implemented by utilizing Amazon EC2 Auto Scaling. The Auto Scaling group capacity-optimized allocation strategy ensures that Spot capacity is provisioned from the optimal pools, thereby reducing the chances of Spot terminations.

Simplifying adoption of Spot Best practices on kOps

Before the kops toolbox instance-selector, you would have to set up Spot best practices on kOps manually. This involved writing a stub file following the InstanceGroup specification and examples, and then implementing every best practice, including finding every pool that qualifies for our workload.

The new functionality in kops toolbox instance-selector simplifies InstanceGroup creation by shifting the focus of kOps users and administrators from this manual configuration to simply selecting the vCPU and memory requirements for their application (or a base instance type), and then letting kops toolbox instance-selector define the right configuration. Behind the scenes, it utilizes a library that plugs into the feature set of amazon-ec2-instance-selector. At its core, ec2-instance-selector helps you select compatible instance types for your application to run on. Use the ec2-instance-selector CLI or library when automating your own configurations; in the case of kOps, the integration already comes in the kops toolbox.

For example, let’s say your cluster runs stateless, fault-tolerant applications that are CPU/memory bound and require a ratio of at least 1 vCPU to 4 GB of RAM. You can run the following command to acquire cluster Spot capacity:

kops toolbox instance-selector "spot-group-" \
  --usage-class spot --flexible --cluster-autoscaler \
  --vcpus-to-memory-ratio="1:4" \
  --ig-count 2

Let’s focus first on the command, and later cover its output. You can get a list of parameters and default values by running: kops toolbox instance-selector --help. A few default parameters weren’t passed in the command above, but they will be set to sane defaults, such as the maximum and minimum number of instances in the Instance Group. The parameter --flexible refers to our request to provide a group of flexible instance types spanning multiple generations.

Once you’ve defined the InstanceGroups, start them up by using the command:

kops update cluster \
    --state=${KOPS_STATE_STORE} \
    --name=${KOPS_CLUSTER_NAME} \
    --yes --admin

The two commands above define and create a request for Spot capacity from a flexible and diversified pool set that meets the criteria of providing at least 4 GB of RAM for each vCPU. The command creates not just one, but two node groups named “spot-group-1” and “spot-group-2” (--ig-count 2).

Now, let’s check the contents of the configuration file generated by kops toolbox instance-selector. To preview a configuration without making changes, add --dry-run --output yaml.

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-08-11T10:22:16Z"
  labels:
    kops.k8s.io/cluster: spot-kops-cluster.k8s.local
  name: spot-group-1
spec:
  cloudLabels:
    k8s.io/cluster-autoscaler/enabled: "1"
    k8s.io/cluster-autoscaler/spot-kops-cluster.k8s.local: "1"
    kops.k8s.io/instance-selector: "1"
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200716
  machineType: m3.xlarge
  maxSize: 15
  minSize: 2
  mixedInstancesPolicy:
    instances:
    - m3.xlarge
    - m4.xlarge
    - m5.xlarge
    - m5a.xlarge
    - t2.xlarge
    - t3.xlarge
    onDemandAboveBase: 0
    onDemandBase: 0
    spotAllocationStrategy: capacity-optimized
  nodeLabels:
    kops.k8s.io/instancegroup: spot-group-1
  role: Node
  subnets:
  - eu-west-1a
  - eu-west-1b
  - eu-west-1c
...

The configuration above lists one of the groups created by kops toolbox instance-selector in the previous example. The second group will have a very similar make-up and format, except that it will refer to instances such as r3.xlarge, r4.xlarge, r5.xlarge, and r5a.xlarge in the mixedInstancesPolicy section. By setting the parameter --usage-class to spot, the configuration created by kops toolbox instance-selector adds the tags identifying this Auto Scaling group as a Spot group. When the nodes are initialized, the kOps controller identifies them as Spot and adds the label node-role.kubernetes.io/spot-worker=true. Therefore, at a later stage, we can apply placement logic to our cluster by using nodeSelector and affinity. The configuration above adheres to the definition of kOps support for mixed Instance Groups in AWS, and adds all of the right cloudLabels to integrate not only with Spot best practices, but also with Cluster Autoscaler auto-discovery configuration best practices.
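For example, here is a minimal, hedged sketch of a Pod that targets those Spot-labeled nodes with nodeSelector; the Pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: stateless-worker
spec:
  # Schedule only onto nodes the kOps controller labeled as Spot workers
  nodeSelector:
    node-role.kubernetes.io/spot-worker: "true"
  containers:
  - name: app
    image: nginx # placeholder image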

Kubernetes Cluster Autoscaler is a Kubernetes controller that dynamically adjusts the cluster size. According to a 2020 survey by the Cloud Native Computing Foundation (CNCF), 70% of Kubernetes users plan to autoscale their stateless applications. Dynamically scaling applications and clusters is also a great practice for optimizing your system costs when capacity is not needed, as well as for scaling out to meet business demands. If there are Pods that can’t be scheduled due to insufficient resources, then Cluster Autoscaler will issue a scale-out action. When there are nodes in the cluster that have been under-utilized for a configurable period of time, Cluster Autoscaler will scale in the cluster, and can even scale down to 0 instances when applications don’t need to run.

On scale-out operations, Cluster Autoscaler evaluates a set of node groups. When Cluster Autoscaler runs on AWS, node groups are implemented by using Auto Scaling groups (in kOps, each Instance Group maps to an Auto Scaling group). To calculate the number of nodes to scale out, Cluster Autoscaler assumes that every instance in a node group has the same number of vCPUs and memory size.

By creating two node groups, you apply two diversification levels. You diversify within each node group by using an Auto Scaling group with Mixed Instance Policies and capacity-optimized allocation strategy. Then, to increase the pool range you can leverage, you add more than one node group, while still adhering to the best practices required by Cluster Autoscaler.
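Tying this back to the cloudLabels shown earlier, here is a hedged sketch of the flags a Cluster Autoscaler deployment might pass to its container for Auto Scaling group auto-discovery; the cluster name matches the example above, and the expander choice is an assumption:

command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  # Discover node groups via the k8s.io/cluster-autoscaler/* tags
  - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/spot-kops-cluster.k8s.local
  - --expander=least-waste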

While we’ve been focusing on Spot Instances, the parameter --usage-class can be utilized to get On-Demand instances instead of Spot. In the next example, let’s say we would like to get On-Demand capacity in order to train complex deep learning models that will take hours to run. To train our models, we need instances that have at least one GPU with 16 GB of RAM, on instances that have at least 32 GB of RAM and 8 vCPUs.

kops toolbox instance-selector "ondemand-gpu-group" \
  --gpus-min 1 --gpu-memory-total-min 16gb --memory-min 32gb --vcpus 8 \
  --node-count-max 4 --node-count-min 4 --cpu-architecture amd64

The command above, followed by kops update cluster --state=${KOPS_STATE_STORE} --name=${KOPS_CLUSTER_NAME} --yes, can be utilized to produce a configuration and create a nodegroup with the right requirements. This could be created at the start of the training procedure, and then – once the training is done and the capacity is no longer needed – you could automate the nodegroup removal with the following command:

kops delete instancegroup ondemand-gpu-group --name ${KOPS_CLUSTER_NAME} --yes

Conclusions

We believe the best way to run Kubernetes on AWS is by using Amazon EKS. However, scenarios may exist where kOps is utilized in AWS. By using the kOps managed addon to install aws-node-termination-handler and kops toolbox instance-selector, it is easier than ever to apply Spot best practices to Kubernetes workloads on kOps, and to cost-optimize fault-tolerant, stateless applications. These tools let kOps workloads gracefully terminate applications, as well as proactively handle the replacement of instances that are at an elevated risk of termination. kops toolbox instance-selector leverages amazon-ec2-instance-selector to simplify the creation of Instance Group configurations that adhere to Spot Instances best practices, implement instance type flexibility, and utilize the capacity-optimized allocation strategy.

By adhering to these best practices, we not only optimize cost but also improve our Spot Instances selection and reduce the frequency of Spot interruptions. This enables us to acquire capacity at massive scale when necessary.

To start using the tools we have described, follow along this step-by-step tutorial. Also, head over to the kops toolbox documentation to learn more about the ways in which you can use it.

Deep dive into NitroTPM and UEFI Secure Boot support in Amazon EC2

Post Syndicated from Neelay Thaker original https://aws.amazon.com/blogs/compute/deep-dive-into-nitrotpm-and-uefi-secure-boot-support-in-amazon-ec2/

Contributed by Samartha Chandrashekar, Principal Product Manager, Amazon EC2

At re:Invent 2021, we announced NitroTPM, a Trusted Platform Module (TPM) 2.0, and Unified Extensible Firmware Interface (UEFI) Secure Boot support in Amazon EC2. In this blog post, we’ll share additional details on how these capabilities can help further raise the security bar of EC2 deployments.

A TPM is a security device to gather and attest system state, store and generate cryptographic data, and prove platform identity. Although TPMs are traditionally discrete chips or firmware modules, their adaptation on AWS as NitroTPM preserves their security properties without affecting the agility and scalability of EC2. NitroTPM makes it possible to use TPM-dependent applications and Operating System (OS) capabilities in EC2 instances. It conforms to the TPM 2.0 specification, which makes it easy to migrate existing on-premises workloads that use TPM functionalities to EC2.

Unified Extensible Firmware Interface (UEFI) Secure Boot is a feature of UEFI that builds on EC2’s long-standing secure boot process and provides additional defense-in-depth that helps you secure software from threats that persist across reboots. It ensures that EC2 instances run authentic software by verifying the digital signature of all boot components, and it halts the boot process if signature verification fails. When used with UEFI Secure Boot, NitroTPM can verify the integrity of software that boots and runs in the EC2 instance. It can measure instance properties and components as evidence that unaltered software was used, in the correct order, during boot. Features such as “Measured Boot” in Windows, Linux Unified Key Setup (LUKS), and dm-verity in popular Linux distributions can use NitroTPM to further secure OS launches from malware with administrative privileges that attempts to persist across reboots.

NitroTPM derives its root-of-trust from the Nitro Security Chip and performs the same functions as a physical/discrete TPM. Similar to discrete TPMs, an immutable private and public Endorsement Key (EK) is set up inside the NitroTPM by AWS during instance creation. NitroTPM can serve as a “root-of-trust” to verify the provenance of software in the instance (e.g., NitroTPM’s EKCert as the basis for SSL certificates). Sensitive information protected by NitroTPM is made available only if the OS has booted correctly (i.e., boot measurements match expected values). If the system has been tampered with, keys are not released since the TPM state is different, thereby ensuring protection from malware attempting to hijack the boot process. NitroTPM can protect volume encryption keys used by full-disk encryption utilities (such as dm-crypt and BitLocker) or private keys for certificates.

NitroTPM can be used for attestation, a process to demonstrate that an EC2 instance meets pre-defined criteria, thereby allowing you to gain confidence in its integrity. It can be used to make authentication of an instance requesting access to a resource (such as a service or a database) contingent on its health state (e.g., patching level, presence of mandated agents, etc.). For example, a private key can be “sealed” to a list of measurements of specific programs allowed to “unseal” it. This makes it well suited for use cases such as digital rights management, or gating LDAP login and database access on attestation. Access to AWS Key Management Service (KMS) keys to encrypt/decrypt data accessed by the instance can be made to require affirmative attestation of instance health. Anti-malware software (e.g., Windows Defender) can initiate remediation actions if attestation fails.

NitroTPM uses Platform Configuration Registers (PCR) to store system measurements. These do not change until the next boot of the instance. PCR measurements are computed during the boot process before malware can modify system state or tamper with the measuring process. These values are compared with pre-calculated known-good values, and secrets protected by NitroTPM are released only if the sequences match. PCRs are recalculated after each reboot, which ensures protection against malware aiming to hijack the boot process or persist across reboots. For example, if malware overwrites part of the kernel, measurements change, and disk decryption keys sealed to NitroTPM are not unsealed. Trust decisions can also be made based on additional criteria such as boot integrity, patching level, etc.
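As a concrete, hedged illustration on a Linux instance with the tpm2-tools package installed, you could verify the TPM device and inspect a few PCR banks; the PCR indices shown are common examples, not a NitroTPM-specific requirement:

# Confirm the TPM character device is present
ls /dev/tpm*

# Read PCRs commonly used for firmware, boot loader, and Secure Boot state
tpm2_pcrread sha256:0,4,7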

The workflow below shows how UEFI Secure Boot and NitroTPM work to ensure system integrity during OS startup.


To get started, you’ll need to register an Amazon Machine Image (AMI) of an Operating System that supports TPM 2.0 and UEFI Secure Boot using the register-image primitive via the CLI, API, or console. Alternatively, you can use pre-configured AMIs from AWS for both Windows and Linux to launch EC2 instances with TPM and Secure Boot. The screenshot below shows a Windows Server 2019 instance on EC2 launched with NitroTPM using its inbox TPM 2.0 drivers to recognize a TPM device.

NitroTPM and UEFI Secure Boot enable you to further raise the bar in running your workloads in a secure and trustworthy manner. We’re excited for you to try out NitroTPM when it becomes publicly available in 2022. Contact [email protected] for additional information.

Creating a Multi-Region Application with AWS Services – Part 1, Compute and Security

Post Syndicated from Joe Chapman original https://aws.amazon.com/blogs/architecture/creating-a-multi-region-application-with-aws-services-part-1-compute-and-security/

Building a multi-Region application requires lots of preparation and work. Many AWS services have features to help you build and manage a multi-Region architecture, but identifying those capabilities across 200+ services can be overwhelming.

In this 3-part blog series, we’ll explore AWS services with features to assist you in building multi-Region applications. In Part 1, we’ll build a foundation with AWS security, networking, and compute services. In Part 2, we’ll add in data and replication strategies. Finally, in Part 3, we’ll look at the application and management layers.

Considerations before getting started

AWS Regions are built with multiple isolated and physically separate Availability Zones (AZs). This approach allows you to create highly available Well-Architected workloads that span AZs to achieve greater fault tolerance. There are three general reasons that you may need to expand beyond a single Region:

  • Expansion to a global audience: as an application grows and its user base becomes more geographically dispersed, there can be a need to reduce latencies for different parts of the world.
  • Reducing Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO) as part of a disaster recovery (DR) plan.
  • Local laws and regulations may have strict data residency and privacy requirements that must be followed.

Ensuring security, identity, and compliance

Creating a security foundation starts with proper authentication, authorization, and accounting to implement the principle of least privilege. AWS Identity and Access Management (IAM) operates in a global context by default. With IAM, you specify who can access which AWS resources and under what conditions. For workloads that use directory services, the AWS Directory Service for Microsoft Active Directory Enterprise Edition can be set up to automatically replicate directory data across Regions. This allows applications to reduce lookup latencies by using the closest directory and creates durability by spanning multiple Regions.

Applications that need to securely store, rotate, and audit secrets, such as database passwords, should use AWS Secrets Manager. It encrypts secrets with AWS Key Management Service (AWS KMS) keys and can replicate secrets to secondary Regions to ensure applications are able to obtain a secret in the closest Region.
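For instance, replicating an existing secret to a secondary Region is a single CLI call; in this hedged sketch the secret name and Region are placeholders:

# Replicate a secret so applications in us-west-2 can read it locally
aws secretsmanager replicate-secret-to-regions \
    --secret-id prod/db-password \
    --add-replica-regions Region=us-west-2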

Encrypt everything all the time

AWS KMS can be used to encrypt data at rest, and is used extensively for encryption across AWS services. By default, keys are confined to a single Region. AWS KMS multi-Region keys can be created to replicate keys to a second Region, which eliminates the need to decrypt and re-encrypt data with a different key in each Region.
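A hedged sketch of creating and replicating a multi-Region key follows; the description, key ID, and Region are placeholders (multi-Region key IDs begin with mrk-):

# Create a multi-Region primary key in the current Region
aws kms create-key --multi-region --description "app-data-key"

# Replicate the primary key into a secondary Region
aws kms replicate-key \
    --key-id mrk-EXAMPLE1111222233334444 \
    --replica-region us-west-2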

AWS CloudTrail logs user activity and API usage. Logs are created in each Region, but they can be centralized from multiple Regions and multiple accounts into a single Amazon Simple Storage Service (Amazon S3) bucket. As a best practice, these logs should be aggregated to an account that is only accessible to required security personnel to prevent misuse.

As your application expands to new Regions, AWS Security Hub can aggregate and link findings to a single Region to create a centralized view across accounts and Regions. These findings are continuously synced between Regions to keep you updated on global findings.

We put these features together in Figure 1.


Figure 1. Multi-Region security, identity, and compliance services

Building a global network

For resources launched into virtual networks in different Regions, Amazon Virtual Private Cloud (Amazon VPC) allows private routing between Regions and accounts with VPC peering. These resources can communicate using private IP addresses and do not require an internet gateway, VPN, or separate network appliances. This works well for smaller networks that only require a few peering connections. However, as the number of peered connections increases, the mesh of peered connections can become difficult to manage and troubleshoot.

AWS Transit Gateway can help reduce these difficulties by creating a central transitive hub to act as a cloud router. A Transit Gateway’s routing capabilities can expand to additional Regions with Transit Gateway inter-Region peering to create a globally distributed private network.

Building a reliable, cost-effective way to route users to distributed Internet applications requires highly available and scalable Domain Name System (DNS) records. Amazon Route 53 does exactly that.

Route 53 routing policies can route traffic to a record with the lowest latency, or automatically fail over a record. If a larger failure occurs, the Route 53 Application Recovery Controller can simplify the monitoring and failover process for application failures across Regions, AZs, and on-premises.

Amazon CloudFront’s content delivery network is truly global, built across 300+ points of presence (PoP) spread throughout the world. Applications that have multiple possible origins, such as across Regions, can use CloudFront origin failover to automatically fail over the origin. CloudFront’s capabilities expand beyond serving content, with the ability to run compute at the edge. CloudFront functions make it easy to run lightweight JavaScript functions, and AWS Lambda@Edge makes it easy to run Node.js and Python functions across these 300+ PoPs.

AWS Global Accelerator uses the AWS global network infrastructure to provide two static anycast IPs for your application. It automatically routes traffic to the closest Region deployment, and if a failure is detected it will automatically redirect traffic to a healthy endpoint within seconds.

Figure 2 brings these features together to create a global network across two Regions.


Figure 2. AWS VPC connectivity and content delivery

Building the compute layer

An Amazon Elastic Compute Cloud (Amazon EC2) instance is based on an Amazon Machine Image (AMI). An AMI specifies instance configurations such as the instance’s storage, launch permissions, and device mappings. When a new standard image needs to be created, EC2 Image Builder can be used to streamline copying AMIs to selected Regions.
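For a one-off copy, the underlying operation looks like this hedged CLI sketch; the AMI ID, Regions, and name are placeholders:

# Copy an AMI from us-east-1 into us-west-2
aws ec2 copy-image \
    --source-image-id ami-EXAMPLE \
    --source-region us-east-1 \
    --region us-west-2 \
    --name "my-golden-ami"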

Although EC2 instances and their associated Amazon Elastic Block Store (Amazon EBS) volumes live in a single AZ, Amazon Data Lifecycle Manager can automate the process of taking and copying EBS snapshots across Regions. This can enhance DR strategies by providing a relatively easy cold backup-and-restore option for EBS volumes.

As an architecture expands into multiple Regions, it can become difficult to track where instances are provisioned. Amazon EC2 Global View helps solve this by providing a centralized dashboard to see Amazon EC2 resources such as instances, VPCs, subnets, security groups, and volumes in all active Regions.

Microservice-based applications that use containers benefit from quicker start-up times. Amazon Elastic Container Registry (Amazon ECR) can help ensure this happens consistently across Regions with private image replication at the registry level. An ECR private registry can be configured for either cross-Region or cross-account replication to ensure your images are ready in secondary Regions when needed.
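Enabling cross-Region replication for a private registry is a single configuration call; in this hedged sketch the destination Region and account ID are placeholders:

# Replicate images in this private registry to us-west-2
aws ecr put-replication-configuration \
    --replication-configuration '{"rules":[{"destinations":[{"region":"us-west-2","registryId":"111122223333"}]}]}'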

We bring these compute layer features together in Figure 3.


Figure 3. AMI and EBS snapshot copy across Regions

Summary

It’s important to create a solid foundation when architecting a multi-Region application. These foundations pave the way for you to move fast in a secure, reliable, and elastic way as you build out your application. In this post, we covered options across AWS security, networking, and compute services that have built-in functionality to take away some of the undifferentiated heavy lifting. We’ll cover data, application, and management services in future posts.

Ready to get started? We’ve chosen some AWS Solutions and AWS Blogs to help you!

Looking for more architecture content? AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Use a City Planning Analogy to Visualize and Create your Cloud Architecture

Post Syndicated from Marwan Al Shawi original https://aws.amazon.com/blogs/architecture/use-a-city-planning-analogy-to-visualize-and-create-your-cloud-architecture/

If you are new to creating cloud architectures, you might find it a daunting undertaking. However, there is an approach that can help you define a cloud architecture pattern by using a similar construct. In this blog post, I will show you how to envision your cloud architecture using this structured and simplified approach.

Such an approach helps you to envision the architecture as a whole. You can then create reusable architecture patterns that can be used for scenarios with similar requirements. It also will help you define the more detailed technological requirements and interdependencies of the different architecture components.

First, I will briefly define what is meant by an architecture pattern and an architecture component.

Architecture pattern and components

An architecture pattern can be defined as a mechanism used to structure multiple functional components of a software or a technology solution to address predefined requirements. It can be characterized by use case and requirements, and should be tested and reusable whenever possible.

Architecture patterns can be composed of three main elements: the architecture components, the specific functions or capabilities of each component, and the connectivity among those components.

A component in the context of a technology solution architecture is a building block. Modular architecture is composed of a collection of these building blocks.

To think modularly, you must look at the overall technology solution. What is its intended function as a complete system? Then, break it down into smaller parts or components. Think about how each component communicates with others. Identify and define each block or component and its specific roles and function. Consider the technical operational responsibilities each is expected to deliver.

Cloud architecture patterns and the city planning analogy

Let’s assume a content marketing company wants to provide marketing analytics to its partners. It proposes a SaaS solution by offering an analytics dashboard on Amazon Web Services (AWS). This company may offer the same solution in other locations in the future.

How would you create a reusable architecture pattern for such a solution? To simplify the concept of a component and the architecture pattern, let’s use city planning as a frame of reference.

Subarchitectures or components

A city can be imagined as consisting of three organizing contexts or components:

  1. Overall City Architecture (the big picture)
  2. District Architecture
  3. Building Architecture

Let’s define each of these components or subarchitectures, and see how they correlate to an enterprise cloud architecture.

I. City Architecture consists of the city structures and the integrations of services required by the population (see Figure 1).


Figure 1. Oversimplified city layout

The overall anticipated capacity within a certain period must be calculated for roads, sewage, water, electricity grids, and the overall city layout. Typically, this structure should be built from the intended purpose or vision of the city. This can be the type of services it will offer, and the function of each district.

Think of City Architecture as the overall cloud architecture for your enterprise. Include the anticipated capacity, the layout (single Region or multi-Region), and the type and number of Amazon Virtual Private Clouds (VPCs). Decide how you will connect and integrate all these different architecture components.

The initial workflow that can be used to define the high-level architecture pattern layout of the SaaS solution example is analogous to the overall city architecture. We can define its three primary elements: architecture components, specific functions of each component, and the connectivity among those components.

  1. Production environment. The front end and backend of your application. It provides the marketing data analytics dashboard.
  2. Testing and development environment. A replica of, but isolated from, the production app. User traffic doesn’t pass through the security inspection layer.
  3. Security layer. Provides perimeter security inspection. User traffic passes through the security inspection layer.

Translating this workflow into an AWS architecture, Figure 2 shows the analogous structure.

  • Single AWS Region (to be offered in a specific geographical area)
  • Amazon VPC to host the production application
  • Amazon VPC to host the test/dev application
  • Separate VPC (or a layer within a VPC) to provide security services for perimeter security inspection
  • Customer’s connectivity (for example, over public internet, or VPN)
  • AWS Transit Gateway (TGW) to connect and isolate the different components (VPCs and VPN)

Figure 2. Architecture pattern (high-level layout)

Domain-driven design

At this stage, you may also consider a domain-driven design (DDD). This is an approach to software development that centers on a domain model. With your DDD, you can break the solution into different bounded contexts. You can translate the business functions/capabilities into logical domains, and then define how they communicate.

Let’s use the same SaaS example and further analyze the requirements of the solution with the DDD approach in mind. The SaaS solution is offered based on two types of industries: regulated with specific security compliance, and non-regulated. By translating this into logical domains, we can optimize the design to offer a more modular architecture. This will minimize the blast radius of the solution, as illustrated in Figure 3. Watch How AWS Minimizes the Blast Radius of Failures.


Figure 3. DDD-based architecture pattern (high-level layout)

Now let’s think of governmental boundaries within a city and among its districts. This can be analogous to AWS accounts structures and the trust boundaries among them. By applying this to the example preceding, the VPC with the security compliance requirements can be placed in a separate AWS account. Read Design principles for organizing your AWS accounts.

II. District Architecture consists of the structures and integrations required within a district to manage its buildings (see Figure 4).


Figure 4. City structure with districts

It illustrates how to connect/integrate back to the city-wide architecture. It should consider the overall anticipated capacity within each district.

For instance, a district can be designed based on the type of function/service it provides, such as residential district, leisure district, or business district.

Mapping this to cloud architecture, you can envision it as the more specific functions/services you are expecting from a certain block, component, or domain. Your architecture can be within one or multiple VPCs, as shown in Figure 5. The structure of a domain or block can vary by number of Availability Zones and VPCs, type of external access, compliance requirements, and the hosted application requirements. Each of these blocks serves a different function and requires different specifications. However, they all need to integrate back to the overall cloud and network architecture to provide a cohesive design.

The architect must define and specify clearly the communication model among the architecture components. You may further break the application architecture at the module level into microservices using the DDD approach. An example is the use of Micro-frontend Architectures on AWS.


Figure 5. Architecture module structure

III. Building Architecture refers to the buildings’ structures and standards required to deliver the specific properties/services within a district. It also must integrate back with the district architecture.

To apply this to your architecture, envision the specialized functions/capabilities you are expecting from your application within a module (subcomponents). What are the requirements needed for the application tiers? In this example, let’s assume that the VPC without security compliance requirements will use a frontend web tier on Amazon EC2. Its backend database will be Amazon Relational Database Service (Amazon RDS).

Each of these subcomponents must integrate with other components and modules, as well as with the public internet. For example, an AWS Application Load Balancer could handle connection requests from external users, and AWS Web Application Firewall (AWS WAF) could be used as the perimeter security layer. AWS Transit Gateway could connect to other modules (VPCs). NAT gateways could provide connectivity to the internet for the internal systems in a VPC (shown in Figure 6).


Figure 6. Architecture module and its subcomponents structure
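To make that integration concrete, here is a hedged CLI sketch of attaching a module's VPC to a Transit Gateway and giving its internal systems internet egress through a NAT gateway; all resource IDs are placeholders:

# Attach the module's VPC to the shared Transit Gateway
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-EXAMPLE \
    --vpc-id vpc-EXAMPLE \
    --subnet-ids subnet-EXAMPLE

# NAT gateway in a public subnet for outbound-only internet access
aws ec2 create-nat-gateway \
    --subnet-id subnet-PUBLIC-EXAMPLE \
    --allocation-id eipalloc-EXAMPLE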

Conclusion

The vision and goal of a city architecture can set the basis for districts’ architectures. In turn, the district architecture sets the basis of the building architecture within a district. Similarly, the targeted enterprise cloud architecture goal should set the key requirements of the building blocks (or functional components) of the architecture.

Each architecture block sets the requirements of the subcomponents. They collectively construct a system or module of a system, as illustrated in Figure 7.


Figure 7. Structure of cloud architecture requirements and interdependencies

As a next step, assess your architecture from both a scale and reliability perspective. Designing for scale alone is not enough. Reliable scalability should be always the targeted architectural attribute. Read Architecting for Reliable Scalability.

Use New Amazon EC2 M1 Mac Instances to Build & Test Apps for iPhone, iPad, Mac, Apple Watch, and Apple TV

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/use-amazon-ec2-m1-mac-instances-to-build-test-macos-ios-ipados-tvos-and-watchos-apps/

Last year at AWS re:Invent, Jeff Barr wrote about the exciting availability of Amazon Elastic Compute Cloud (Amazon EC2) Mac instances. Today, we’re announcing the preview of a new EC2 M1 Mac instance.

The introduction of EC2 Mac instances brought the flexibility, scalability, and cost benefits of AWS to all Apple developers. EC2 Mac instances are dedicated Mac mini computers attached through Thunderbolt to the AWS Nitro System, which lets the Mac mini appear and behave like another EC2 instance. The instance connects to your Amazon Virtual Private Cloud (VPC), boots from Amazon Elastic Block Store (EBS) volumes, and can leverage EBS snapshots, security groups, and other AWS services. EC2 Mac instances let you scale your build and test fleets of Macs, paying as you go. There is no hypervisor involved, and you get the full bare metal performance of the underlying Mac mini. An EC2 Dedicated Host reserves a Mac mini for your usage.

The availability (in preview) of EC2 M1 Mac instances lets you access machines built around the Apple-designed M1 System on Chip (SoC). If you are a Mac developer re-architecting your apps to natively support Macs with Apple silicon, you may now build and test your apps and take advantage of all the benefits of AWS. Developers building for iPhone, iPad, Apple Watch, and Apple TV will also benefit from faster builds. EC2 M1 Mac instances deliver up to 60% better price performance over the x86-based EC2 Mac instances for iPhone and Mac app build workloads.

EC2 M1 Mac instances are powered by a combination of two hardware components:

  • The Mac mini, featuring the M1 SoC with 8 CPU cores, 8 GPU cores, 16 GiB of memory, and a 16-core Apple Neural Engine.
  • The AWS Nitro System, providing up to 10 Gbps of VPC network bandwidth and 8 Gbps of EBS storage bandwidth through a high-speed Thunderbolt connection.

How to Get Started
As I explained previously, when using EC2 Mac instances, there is no virtual machine involved. These instances run on bare metal servers, each hosting a Mac mini. The first step, therefore, involves grabbing a dedicated server. I open the AWS Management Console, navigate to the Amazon EC2 section, then select Dedicated Hosts. I select Allocate Dedicated Host to allocate a server to my AWS account.


Alternatively, I may use the AWS Command Line Interface (CLI).

➜  ~ aws ec2 allocate-hosts                  \
         --instance-type mac2.metal          \
         --availability-zone us-east-2b      \
         --quantity 1 
{
    "HostIds": [
        "h-0fxxxxxxx90"
    ]
}

Once the host is allocated, I start an EC2 instance on it. The procedure is no different from starting any EC2 instance type. I just have to ensure I select a macOS AMI version that suits my requirements. I select the mac2.metal instance type, set the tenancy to Dedicated host, and choose the Dedicated Host I just created.

Alternatively, I may use the CLI.

➜ ~ aws ec2 run-instances \
        --instance-type mac2.metal \
        --key-name my_key \
        --placement HostId=h-0fxxxxxxx90 \
        --security-group-ids sg-01000000000000032 \
        --image-id AWS_OR_YOUR_AMI_ID
{
    "Groups": [],
    "Instances": [
        {
            "AmiLaunchIndex": 0,
            "ImageId": "ami-01xxxxbd",
            "InstanceId": "i-08xxxxx5c",
            "InstanceType": "mac2.metal",
            "KeyName": "my_key",
            "LaunchTime": "2021-11-08T16:47:39+00:00",
            "Monitoring": {
                "State": "disabled"
            },
... redacted for brevity ....

When you use EC2 Mac instances for the first time, you’re likely to ask questions such as, “How do I connect through Apple Remote Desktop?” or “How do I increase the size of the APFS file system on the EBS volume?” The EC2 Mac documentation covers the answers for you and provides examples of commands to run on macOS to perform these common tasks.

I use SSH to connect to the newly launched instance as usual.


I may enable Apple Remote Desktop and start a VNC session to the EC2 instance. The EC2 Mac instance documentation page has the details.


Availability and Pricing
EC2 M1 Mac instances are now available in preview in US East (N. Virginia) and US West (Oregon), with other AWS Regions coming at launch.

Pricing metrics are similar to the previous generation of EC2 Mac instances. You are charged per hour of reservation of the dedicated host, not for the time the instance is running, and there is a minimum charge of 24 hours for reserving a dedicated host.

In the two preview Regions, the on-demand price is $0.6498 per hour. You can save up to 42 percent over the on-demand price with Savings Plans. Check our Dedicated Host on-demand pricing page, as well as the Savings Plans page to learn the details.
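When I am done, I terminate the instance and release the underlying Dedicated Host to stop the hourly reservation charge; this is a hedged sketch reusing the placeholder IDs from above, and the release succeeds only after the 24-hour minimum allocation period:

➜ ~ aws ec2 terminate-instances --instance-ids i-08xxxxx5c
➜ ~ aws ec2 release-hosts --host-ids h-0fxxxxxxx90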

You can sign up for the preview of EC2 Mac M1 instances today!

— seb

Announcing winners of the AWS Graviton Challenge Contest and Hackathon

Post Syndicated from Neelay Thaker original https://aws.amazon.com/blogs/compute/announcing-winners-of-the-aws-graviton-challenge-contest-and-hackathon/

At AWS, we are constantly innovating on behalf of our customers so they can run virtually any workload, with optimal price and performance. Amazon EC2 now includes more than 475 instance types that offer a choice of compute, memory, networking, and storage to suit your workload needs. While we work closely with our silicon partners to offer instances based on their latest processors and accelerators, we also drive more choice for our customers by building our own silicon.

The AWS Graviton family of processors was built as part of that silicon innovation initiative with the goal of pushing the price performance envelope for a wide variety of customer workloads in EC2. We now have 12 EC2 instance families powered by AWS Graviton2 processors – general purpose (M6g, M6gd), burstable (T4g), compute optimized (C6g, C6gd, C6gn), memory optimized (R6g, R6gd, X2gd), storage optimized (Im4gn, Is4gen), and accelerated computing (G5g) – available globally across 23 AWS Regions. We also announced the preview of Amazon EC2 C7g instances powered by the latest-generation AWS Graviton3 processors, which will provide the best price performance for compute-intensive workloads in EC2. Thousands of customers, including Discovery, DIRECTV, Epic Games, and Formula 1, have realized significant price performance benefits with AWS Graviton-based instances for a broad range of workloads. This year, AWS Graviton-based instances also powered much of Amazon Prime Day 2021 and supported 12 core retail services during the massive 2-day online shopping event.

To make it easy for customers to adopt Graviton-based instances, we launched a program called the Graviton Challenge. Working with customers, we saw that many successful adoptions of Graviton-based instances were the result of one or two developers taking a single workload and spending a few days to benchmark the price performance gains with Graviton2-based instances, before scaling it to more workloads. The Graviton Challenge provides a step-by-step plan that developers can follow to move their first workload to Graviton-based instances. With the Graviton Challenge, we also launched a Contest (US-only), and then a Hackathon (global), where developers could compete for prizes by building new applications or moving existing applications to run on Graviton2-based instances. More than a thousand participants, including enterprises, startups, individual developers, open-source developers, and Arm developers, registered and ran a variety of applications on Graviton-based instances with significant price performance benefits. We saw some fantastic entries and usage of Graviton2-based instances across a variety of use cases and want to highlight a few.

The Graviton Challenge Contest winners:

  • Best Adoption – Enterprise and Most Impactful Adoption: VMware vRealize SRE team, who migrated 60 micro-services written in Java, Rust, and Golang to Graviton2-based general purpose and compute optimized instances and realized up to 48% latency reduction and 22% cost savings.
  • Best Adoption – Startup: Kasm Technologies, who realized up to 48% better performance and 25% potential cost savings for its container streaming platform built on C/C++ and Python.
  • Best New Workload adoption: Dustin Wilson, who built a dynamic tile server based on Golang and running on Graviton2-based memory-optimized instances that helps analysts query large geospatial datasets and benchmarked up to 1.8x performance gains over comparable x86-based instances.
  • Most Innovative Adoption: Loroa, an application that translates any given text into spoken words from one language into multiple other languages using Graviton2-based instances, Amazon Polly, and Amazon Translate.

If you are attending AWS re:Invent 2021 in person, you can hear more details on their Graviton adoption experience by attending the CMP213: Lessons learned from customers who have adopted AWS Graviton chalk talk.

Winners for the Graviton Challenge Hackathon:

  • Best New App: PickYourPlace, an open-source based data analytics platform to help users select a place to live based on property value, safety, and accessibility.
  • Best Migrated App: Genie, an image credibility checker based on deep learning that makes predictions on photographic and tampered confidence of an image.
  • Highest Potential Impact: Welly Tambunan, who’s also an AWS Community Builder, for porting the big data platforms Spark, Dremio, and AirByte to Graviton2 instances so developers can leverage them to build big data capabilities into their applications.
  • Most Creative Use Case: OXY, a low-cost custom Oximeter with mobile and web apps that enables continuous and remote monitoring to prevent deaths due to Silent Hypoxia.
  • Best Technical Implementation: Apollonia Bot that plays songs, playlists, or podcasts on a Discord voice channel, so users can listen to it together.

It’s been incredibly exciting to see the enthusiasm and benefits realized by our customers. We are also thankful to our judges – Patrick Moorhead from Moor Insights, James Governor from RedMonk, and Jason Andrews from Arm, for their time and effort.

In addition to EC2, several AWS services for databases, analytics, and even serverless support options to run on Graviton-based instances. These include Amazon Aurora, Amazon RDS, Amazon MemoryDB, Amazon DocumentDB, Amazon Neptune, Amazon ElastiCache, Amazon OpenSearch, Amazon EMR, AWS Lambda, and most recently, AWS Fargate. By using these managed services on Graviton2-based instances, customers can get significant price performance gains with minimal or no code changes. We also added support for Graviton to key AWS infrastructure services such as Elastic Beanstalk, Amazon EKS, Amazon ECS, and Amazon CloudWatch to help customers build, run, and scale their applications on Graviton-based instances. Additionally, a large number of Linux and BSD-based operating systems, and partner software for security, monitoring, containers, CI/CD, and other use cases now support Graviton-based instances and we recently launched the AWS Graviton Ready program as part of the AWS Service Ready program to offer Graviton-certified and validated solutions to customers.
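As one hedged example of such a minimal-change migration, moving an Amazon RDS database to a Graviton2-based instance class is a single modification; the database identifier and target class here are placeholders:

# Switch an RDS instance to a Graviton2-based db.r6g class
aws rds modify-db-instance \
    --db-instance-identifier my-database \
    --db-instance-class db.r6g.large \
    --apply-immediately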

Congrats to all of our Contest and Hackathon winners! The full list of Contest and Hackathon winners is available on the Graviton Challenge page.

P.S.: Even though the Contest and Hackathon have ended, developers can still access the step-by-step plan on the Graviton Challenge page to move their first workload to Graviton-based instances.

New Storage-Optimized Amazon EC2 Instances (Im4gn and Is4gen) Powered by AWS Graviton2 Processors

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-storage-optimized-amazon-ec2-instances-im4gn-and-is4gen-powered-by-aws-graviton2-processors/

EC2 storage-optimized instances are designed to deliver high disk I/O performance and plenty of storage. Our customers use them to host high-performance real-time databases, distributed file systems, data warehouses, key-value stores, and more. Over the years we have released multiple generations of storage-optimized instances, including the HS1 (2012), I2 (2013), D2 (2015), I3 (2017), I3en (2019), and D3/D3en (2020).

As I look back on all of these launches, it is interesting to see how we continue to provide an ever-increasing set of options that make each successive generation an even better fit for the diverse (and also ever-increasing) needs of our customers. HS1 instances were available in just one size, D2 and I2 in four, I3 in six, and I3en in eight. These instances give our customers the freedom to choose the size that best meets their current needs while also giving them room to scale up or down if those needs happen to change.

Im4gn and Is4gen
Today I am happy to introduce the two newest families of storage-optimized instances, Im4gn and Is4gen, powered by Graviton2 processors. Both instances offer up to 30 TB of NVMe storage using AWS Nitro SSD devices that are custom-built by AWS. As part of our drive to innovate on behalf of our customers, we turned our attention to storage and designed devices that were optimized to support high-speed access to large amounts of data. The AWS Nitro SSDs reduce I/O latency by up to 60% and also reduce latency variability by up to 75% when compared to the third generation of storage-optimized instances. As a result you get faster and more predictable performance for your I/O-intensive EC2 workloads.

Im4gn instances are a great fit for applications that require large amounts of dense SSD storage and high compute performance but are not especially memory intensive, such as social games, session storage, chatbots, and search engines. Here are the specs:

| Instance Name  | vCPUs | Memory  | Local NVMe Storage (AWS Nitro SSD) | Read Throughput (128 KB Blocks) | EBS-Optimized Bandwidth | Network Bandwidth |
|----------------|-------|---------|------------------------------------|---------------------------------|-------------------------|-------------------|
| im4gn.large    | 2     | 8 GiB   | 937 GB                             | 250 MB/s                        | Up to 9.5 Gbps          | Up to 25 Gbps     |
| im4gn.xlarge   | 4     | 16 GiB  | 1.875 TB                           | 500 MB/s                        | Up to 9.5 Gbps          | Up to 25 Gbps     |
| im4gn.2xlarge  | 8     | 32 GiB  | 3.75 TB                            | 1 GB/s                          | Up to 9.5 Gbps          | Up to 25 Gbps     |
| im4gn.4xlarge  | 16    | 64 GiB  | 7.5 TB                             | 2 GB/s                          | 9.5 Gbps                | 25 Gbps           |
| im4gn.8xlarge  | 32    | 128 GiB | 15 TB (2 x 7.5 TB)                 | 4 GB/s                          | 19 Gbps                 | 50 Gbps           |
| im4gn.16xlarge | 64    | 256 GiB | 30 TB (4 x 7.5 TB)                 | 8 GB/s                          | 38 Gbps                 | 100 Gbps          |

Im4gn instances provide up to 40% better price performance and up to 44% lower cost per TB of storage compared to I3 instances. The new instances are available in the AWS US West (Oregon), US East (Ohio), US East (N. Virginia), and Europe (Ireland) Regions as On-Demand, Spot, Savings Plan, and Reserved instances.

Is4gen instances are a great fit for applications that do large amounts of random I/O to large amounts of SSD storage. This includes shared file systems, stream processing, social media monitoring, and streaming platforms, all of which can use the increased storage density to retain more data locally. Here are the specs:

| Instance Name  | vCPUs | Memory  | Local NVMe Storage (AWS Nitro SSD) | Read Throughput (128 KB Blocks) | EBS-Optimized Bandwidth | Network Bandwidth |
|----------------|-------|---------|------------------------------------|---------------------------------|-------------------------|-------------------|
| is4gen.medium  | 1     | 6 GiB   | 937 GB                             | 250 MB/s                        | Up to 9.5 Gbps          | Up to 25 Gbps     |
| is4gen.large   | 2     | 12 GiB  | 1.875 TB                           | 500 MB/s                        | Up to 9.5 Gbps          | Up to 25 Gbps     |
| is4gen.xlarge  | 4     | 24 GiB  | 3.75 TB                            | 1 GB/s                          | Up to 9.5 Gbps          | Up to 25 Gbps     |
| is4gen.2xlarge | 8     | 48 GiB  | 7.5 TB                             | 2 GB/s                          | Up to 9.5 Gbps          | Up to 25 Gbps     |
| is4gen.4xlarge | 16    | 96 GiB  | 15 TB (2 x 7.5 TB)                 | 4 GB/s                          | 9.5 Gbps                | 25 Gbps           |
| is4gen.8xlarge | 32    | 192 GiB | 30 TB (4 x 7.5 TB)                 | 8 GB/s                          | 19 Gbps                 | 50 Gbps           |

Is4gen instances provide 15% lower cost per TB of storage and up to 48% better compute performance compared to I3en instances. The new instances are available in the AWS US West (Oregon), US East (Ohio), US East (N. Virginia), and Europe (Ireland) Regions as On-Demand, Spot, Savings Plan, and Reserved instances.

Available Now
As I never get tired of saying, these new instances are available now and you can start using them today. You can use Amazon Linux 2, Ubuntu 18.04.5 (and newer), Red Hat Enterprise Linux 8.0, and SUSE Linux Enterprise Server 15 (and newer) AMIs, along with the container-optimized ECS and EKS AMIs. Learn more about the Im4gn and Is4gen instances.
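If you prefer the command line, launching one of these instances works like any other EC2 launch. Here is a minimal AWS CLI sketch; the AMI ID, subnet ID, and key pair name are placeholders you would replace with your own values (be sure to pick an Arm64 AMI):

# Launch a single im4gn.xlarge instance from an Arm64 AMI (placeholder IDs)
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type im4gn.xlarge \
    --subnet-id subnet-0123456789abcdef0 \
    --key-name my-key-pair \
    --count 1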

Jeff;

PS – As of this launch, twelve EC2 instance types are powered by Graviton2 processors! To learn more, visit the Graviton2 page.

New – AWS Outposts Servers in Two Form Factors

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-outposts-servers-in-two-form-factors/

AWS Outposts gives you on-premises compute and storage that is monitored and managed by AWS, and controlled by the same, familiar AWS APIs. You may already know about the AWS Outposts rack, which occupies a full 42U rack.

Last year I told you that we were working on new sizes of Outposts suitable for locations such as branch offices, factories, retail stores, health clinics, hospitals, and cell sites that are space-constrained and need access to low-latency compute capacity. Today we are launching three AWS Outposts server configurations, all powered by the AWS Nitro System and available with your choice of x86 or Arm/Graviton2 processors. Here’s an overview:

| Name / Rack Size (Catalog ID) | EC2 Instance Capacity | Processor / Architecture | vCPUs | Memory  | Local NVMe SSD Storage |
|-------------------------------|-----------------------|--------------------------|-------|---------|------------------------|
| Outposts 1U (STBKRBE)         | c6gd.16xlarge         | Graviton2 / Arm          | 64    | 128 GiB | 3.8 TB (2 x 1.9 TB)    |
| Outposts 2U (LMXAD41)         | c6id.16xlarge         | Intel Ice Lake / x86     | 64    | 128 GiB | 3.8 TB (2 x 1.9 TB)    |
| Outposts 2U (KOSKFSF)         | c6id.32xlarge         | Intel Ice Lake / x86     | 128   | 256 GiB | 7.6 TB (4 x 1.9 TB)    |

You can create VPC subnets on each Outpost, and you can launch Amazon Elastic Compute Cloud (Amazon EC2) instances from EBS-backed AMIs in the parent region. The c6gd.16xlarge model supports six instance sizes, as follows:

| Instance Name | vCPUs | Memory  | Local Storage |
|---------------|-------|---------|---------------|
| c6gd.large    | 2     | 4 GiB   | 118 GB        |
| c6gd.xlarge   | 4     | 8 GiB   | 237 GB        |
| c6gd.2xlarge  | 8     | 16 GiB  | 474 GB        |
| c6gd.4xlarge  | 16    | 32 GiB  | 950 GB        |
| c6gd.8xlarge  | 32    | 64 GiB  | 1.9 TB        |
| c6gd.16xlarge | 64    | 128 GiB | 3.8 TB        |

The c6id.16xlarge model supports all but the largest of the following instance sizes, and the c6id.32xlarge supports all of them:

| Instance Name | vCPUs | Memory  | Local Storage |
|---------------|-------|---------|---------------|
| c6id.large    | 2     | 4 GiB   | 118 GB        |
| c6id.xlarge   | 4     | 8 GiB   | 237 GB        |
| c6id.2xlarge  | 8     | 16 GiB  | 474 GB        |
| c6id.4xlarge  | 16    | 32 GiB  | 950 GB        |
| c6id.8xlarge  | 32    | 64 GiB  | 1.9 TB        |
| c6id.16xlarge | 64    | 128 GiB | 3.8 TB        |
| c6id.32xlarge | 128   | 256 GiB | 7.6 TB        |

Within each of your Outposts servers, you can launch any desired mix of instance sizes as long as you remain within the overall processing and storage available. You can create Amazon Elastic Container Service (Amazon ECS) clusters (Amazon Elastic Kubernetes Service (EKS) support is coming soon), and the code you run on-premises can make use of the entire lineup of services in the AWS Cloud.

Each Outposts server connects to the cloud via the public Internet or across a private AWS Direct Connect line. Additionally, each Outposts server supports a Local Network Interface (LNI) that provides a Layer 2 presence on your local network for AWS service endpoints.

Outposts servers incorporate many powerful Nitro features, including high-speed networking and enhanced security. The security model is locked down and blocks administrative access, which prevents tampering and human error. Additionally, data at rest is protected by a NIST-compliant physical security key.

While I was writing this post, I stopped in to say hello to the design and development team, and met with my colleague Bianca Nagy to learn more about the Outposts server.

Ordering Outposts Servers
Let’s walk through the process of ordering an Outposts server from the AWS Management Console. I visit the AWS Outposts Console, make sure that I am in the desired AWS Region, and click Place order to get started:

I click Servers, and then choose the desired configuration. I pick the c6gd.16xlarge, and click Next to proceed:

Then I create a new Outpost:

And a new Site:

Then I review my payment options and select my shipping address:

On the next page I review all of my options, click Place order, and await delivery:

In general, we expect to be able to deliver Outposts servers in two to six weeks, starting in the first quarter of 2022. After you receive yours, you or a member of your IT team can mount it in a 19″ rack or position it on a flat surface, cable it to power and networking, and power the device on. You then use a set of temporary AWS credentials to confirm the identity of the device, and to verify that the device is able to use DHCP to obtain an IP address. Once the device has established connectivity to the designated AWS parent region, we will finalize the provisioning of EC2 instance capacity and make it available to you.

After that, you are ready to launch instances and to deploy your on-premises applications.

We will monitor hardware performance and will contact you if your device is in need of maintenance. We will ship a replacement device for arrival within 2 business days. You can migrate your workloads to a redundant device, and use tracking information & notifications to track delivery status. When the replacement arrives, you install it and then destroy the physical security key in the old one before shipping it back to AWS.

Outposts API Update
We are also enhancing the Outposts API as part of this launch. Here are some of the new functions:

ListCatalogItems – Get a list of items in the Outposts catalog, with optional filtering by EC2 family or supported storage options.

GetCatalogItem – Get full information about a single item in the Outposts catalog.

GetSiteAddress – Get the physical address of a site where an Outposts rack or server is installed.

You can use the information returned by GetCatalogItem to place an order that contains the desired quantity of one or more catalog items.
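For example, here is a sketch of exploring the catalog from the CLI before placing an order; treat the filter flag as illustrative, since the exact option names may vary by CLI version:

# List Outposts catalog items, optionally filtered to a specific EC2 family
aws outposts list-catalog-items --ec2-family-filter c6gd

# Fetch the full details for one catalog item by its ID
aws outposts get-catalog-item --catalog-item-id STBKRBE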

Things to Know
Here are a couple of important things to know about Outposts servers:

Availability – Outposts servers are available to order in most locations where Outposts racks are available (currently 23 regions and 49 countries), with more to follow in 2022.

Ordering at Scale – I showed you the console-based ordering process above, and also gave you a glimpse at the Outposts API. If you need hundreds or thousands of devices, get in touch and we will give you a template that you can fill in and then upload.

re:Invent 2021 Outposts Server Selfie Challenge
If you attend AWS re:Invent, be sure to visit the AWS Hybrid kiosk in the AWS Booth (#1719) to see the new Outposts Servers up close and personal. While you are there, take a fun & creative selfie, tag it with #AWSOutposts & #AWSPromotion, and share it on Twitter. I will post my three favorites at the end of the show!

Jeff;

Join the Preview – Amazon EC2 C7g Instances Powered by New AWS Graviton3 Processors

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/join-the-preview-amazon-ec2-c7g-instances-powered-by-new-aws-graviton3-processors/

We announced the first-generation AWS-designed Graviton processor in late 2018, and followed it up with the second-generation Graviton2 a year later. Today, AWS customers make use of twelve different Graviton2-powered instances, including the new X2gd instances that are designed for memory-intensive workloads. All Graviton processors include dedicated cores & caches for each vCPU, along with additional security features courtesy of the AWS Nitro System; the Graviton2 processors add support for always-on memory encryption.

C7g in the Works
I am thrilled to tell you about our upcoming C7g instances. Powered by new Graviton3 processors, these instances are going to be a great match for your compute-intensive workloads: HPC, batch processing, electronic design automation (EDA), media encoding, scientific modeling, ad serving, distributed analytics, and CPU-based machine learning inferencing.

While we are still optimizing these instances, it is clear that the Graviton3 is going to deliver amazing performance. In comparison to the Graviton2, the Graviton3 will deliver up to 25% more compute performance and up to twice as much floating point & cryptographic performance. On the machine learning side, Graviton3 includes support for bfloat16 data and will be able to deliver up to 3x better performance.

Graviton3 processors also include a new pointer authentication feature that is designed to improve security. Before return addresses are pushed on to the stack, they are first signed with a secret key and additional context information, including the current value of the stack pointer. When the signed addresses are popped off the stack, they are validated before being used. An exception is raised if the address is not valid, thereby blocking attacks that work by overwriting the stack contents with the address of harmful code. We are working with operating system and compiler developers to add additional support for this feature, so please get in touch if this is of interest to you.
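You normally get this protection from your compiler rather than by signing pointers by hand. As a sketch (assuming a GCC or Clang toolchain new enough to support Arm branch protection), return-address signing can be requested at compile time:

# Request return-address signing and BTI landing pads for AArch64 code.
# On hardware without pointer authentication, the added instructions
# execute as no-ops, so the same binary still runs on older processors.
gcc -O2 -mbranch-protection=standard -o myapp myapp.c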

C7g instances will be available in multiple sizes (including bare metal), and are the first in the cloud industry to be equipped with DDR5 memory. In addition to drawing less power, this memory delivers 50% higher bandwidth than the DDR4 memory used in the current generation of EC2 instances.

On the network side, C7g instances will offer up to 30 Gbps of network bandwidth and Elastic Fabric Adapter (EFA) support.

Join the Preview
We are now running a preview of the C7g instances so that you can be among the first to experience all of this power. Sign up now, take an instance for a spin, and let me know what you think!

Jeff;

New – Recycle Bin for EBS Snapshots

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-recycle-bin-for-ebs-snapshots/

It is easy to create EBS Snapshots, and just as easy to either delete them manually or to use the Data Lifecycle Manager to delete them automatically in accordance with your organization’s retention model. Sometimes, as it turns out, it is a bit too easy to delete snapshots, and a well-intended cleanup effort or a wayward script can go a bit overboard!

New Recycle Bin
In order to give you more control over the deletion process, we are launching a Recycle Bin for EBS Snapshots. As you will see in a moment, you can now set up rules to retain deleted snapshots so that you can recover them after an accidental deletion. You can think of this as a two-level model, where individual AWS users are responsible for the initial deletion, and then a designated “Recycle Bin Administrator” (as specified by an IAM role) manages retention and recovery.

Rules can apply to all snapshots, or to snapshots that include a specified set of tag/value pairs. Each rule specifies a retention period (between one day and one year), after which the snapshot is permanently deleted.

Let’s Recycle!
I open the Recycle Bin Console, select the region of interest, and click Create retention rule to begin:

I call my first rule KeepAll, and set it to retain all deleted EBS snapshots for 4 days:

I add a tag (User) to the rule, and click Create retention rule:

Because Apply to all resources is checked, this is a general rule that applies when there are no applicable rules that specify one or more tags.

Then I create a second rule (KeepDev) that retains snapshots tagged with a Mode of Dev for just one day:

If two different tag-based rules match the same resource, then the one with the longer retention period applies.

Here are my retention rules:

Here are my EBS snapshots. As you can see, the first three are tagged with a Mode of Dev:

In an effort to save several cents per month, I impulsively delete them all:

And they are gone:

Later in the day, a member of my developer team messages me in a panic and lets me know that they desperately need the latest snapshot of the development server’s code. I open the Recycle Bin and I locate the snapshot (DevServer_2021_10_6):

I select the snapshot and click Recover:

Then I confirm my intent:

And the snapshot is available once again:

As has always been the case, Fast Snapshot Restore is disabled when a snapshot is deleted. With this launch, it will remain disabled when a snapshot is restored.

All of this functionality (creating rules, listing resources in the Recycle Bin, and restoring them) is also available from the CLI and via the Recycle Bin APIs.
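As a sketch of the CLI flow (the values below mirror the KeepDev rule created in the console walkthrough above; consult the rbin command reference for the full parameter shapes):

# Create a tag-scoped rule that keeps snapshots tagged Mode=Dev for one day
aws rbin create-rule \
    --resource-type EBS_SNAPSHOT \
    --retention-period RetentionPeriodValue=1,RetentionPeriodUnit=DAYS \
    --resource-tags ResourceTagKey=Mode,ResourceTagValue=Dev \
    --description "KeepDev"

# List the retention rules for EBS snapshots in the current Region
aws rbin list-rules --resource-type EBS_SNAPSHOT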

Things to Know
Here are a couple of things to know about the new Recycle Bin:

IAM Support – As I mentioned earlier, you can use AWS Identity and Access Management (IAM) to grant access to this feature, and should consider creating an empowered user known as the Recycle Bin Administrator.

Rule Changes – You can make changes to your retention rules at any time, but be aware that the rules are evaluated (and the retention period is set) when you delete a snapshot. Changing a rule after an item has been deleted will not alter the retention period for the item.

Pricing – Resources that are in the Recycle Bin are charged the usual price, but be aware that creating rules with long retention periods could increase your AWS bill. On a related note, be sure that keeping deleted snapshots around does not violate your organization’s data retention policies. There is no charge for deleting or recovering a resource.

In the Bin – Resources in the Recycle Bin are immutable. If a resource is recovered, all of its existing metadata (tags and so forth) is also recovered intact.

Recycling – We will do our best to recycle all of the zeroes and all of the ones when a resource in your Recycle Bin reaches the end of its retention period!

Jeff;

Volotea MRO Modernization in AWS

Post Syndicated from Albert Capdevila original https://aws.amazon.com/blogs/architecture/volotea-mro-modernization-in-aws/

Volotea is one of the fastest growing independent airlines in Europe, and has increased its fleet, routes, and number of available seats year over year. Volotea has already transported more than 30 million passengers across Europe since 2012, and has bases in 16 European capitals.

The maintenance, repair, and overhaul (MRO) application is a critical system for every airline. It’s used to manage the maintenance, repair, service, and inspection of aircraft. The main goal of an MRO application is to ensure the safety and airworthiness of the aircraft. Traditionally, those systems have been based on monolithic, packaged applications. However, these are difficult to scale and do not offer the benefit of elasticity to adapt to changing demand. Volotea migrated to Amazon Web Services (AWS) to modernize their MRO without refactoring the code. In this blog post, we’ll show you an architecture solution that can be applied to modernize an MRO (or similarly packaged monolithic application) without refactoring, and discuss some considerations.

The challenges with an on-premises MRO solution

Volotea’s MRO software previously ran in an on-premises data center. The system was based on Windows, an outdated database engine, and a virtual desktop system based on Citrix. Costs were fixed, yet MRO usage is typically seasonal. All the interfaces with other systems were based on an outdated communications protocol. This presented security concerns, especially considering that ransomware attacks are an increasing threat.

The main challenge for Volotea was adapting the MRO system to changing business requirements. Seasonal workloads and high impact projects, like changing fleets from Boeing to Airbus, require flexibility. The company also needed to adapt to the changing protocols necessitated by the COVID-19 pandemic, as airlines are one of the most impacted industries in Europe.

Volotea needed to modernize the operating system (OS) and database, simplify the end user application access, and increase the overall platform security, including integration with other applications.

Modernizing the MRO without refactoring

Following Volotea’s cloud strategy, the MRO system was migrated to AWS in two months to reduce technology costs and gain higher operational performance, availability, security, and flexibility. The migration was not simply a lift-and-shift; it used an existing AWS reference architecture for MRO systems, which incorporates AWS managed services to modernize the application without incurring refactoring costs.

Figure 1. Volotea MRO deployment in a multi-account architecture

As shown in the high-level architecture in Figure 1:

  1. Volotea migrated their servers to Amazon EC2 instances based on Linux to minimize OS costs. The database management system now uses an open source engine. These changes save more than €10K per year in license costs.
  2. The user access technology was migrated to Amazon AppStream 2.0. This is a managed service with increased security, elasticity, and flexibility compared to traditional virtual desktop infrastructure (VDI) solutions. Volotea aligned cost with actual usage and decreased TCO by configuring Auto Scaling fleets, reducing workplace costs by 50%.
  3. AWS Transfer Family was used to centralize the information exchanged with third-party applications, while increasing the security of the communication channel. This managed service enabled the migration of the SFTP, FTPS, and FTP interfaces without the need to manage servers.
  4. To modernize the access of the MRO administrators, AWS Systems Manager Session Manager was used. This provided an ideal browser-based shell access without requiring bastion hosts or opening SSH ports in the Amazon EC2 instances.
  5. The AWS services were linked to Volotea’s user directory using AWS Single Sign-On. This allowed users to authenticate with their corporate credentials, decreasing maintenance costs and increasing security.

The application was deployed in Volotea’s AWS Landing Zone.

To make the systems management homogeneous, AWS Systems Manager and AWS Backup offered a single management point for the backup policies, system inventory, and patching.

Incorporating high availability into the MRO

Once this initial modernization is finished, Volotea will use the AWS reference architecture for high availability (HA) to increase resiliency. They’ll configure Amazon EC2 Auto Scaling with application failover to another Availability Zone, along with the database’s native replication mechanisms, and use Elastic IP addresses to remap the endpoints in a failover scenario. This architecture can be easily implemented in AWS to add HA to applications that do not natively support horizontal scaling.
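As an illustration of the endpoint remapping step, failing over an Elastic IP address to a standby instance is a single API call; the allocation and instance IDs below are placeholders:

# Re-associate the application's Elastic IP with the standby instance
# in the healthy Availability Zone
aws ec2 associate-address \
    --allocation-id eipalloc-0123456789abcdef0 \
    --instance-id i-0fedcba9876543210 \
    --allow-reassociation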

Conclusion

Volotea successfully modernized its MRO software, which has given them greater flexibility, elasticity, and the increased security of AWS services. They intend to continue with their digital transformation journey. Volotea is increasing its capacity to innovate faster to deliver new digital services more efficiently and with reduced IT costs. The AWS services and strategies discussed in this blog post can be applied to other similarly packaged applications to implement a first level of modernization with little effort and low migration risk.


New – Amazon EC2 M6a Instances Powered By 3rd Gen AMD EPYC Processors

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-ec2-m6a-instances-powered-by-3rd-gen-amd-epyc-processors/

AWS and AMD have collaborated to give customers more choice and value in cloud computing, starting with instances based on first-generation AMD EPYC™ processors in 2018, such as the M5a/R5a, M5ad/R5ad, and T3a instances. In 2020, we expanded the lineup based on second-generation AMD EPYC™ processors to include C5a/C5ad instances and, more recently, G4ad instances, which combine the power of second-generation AMD EPYC™ processors and AMD Radeon Pro GPUs.

Today, I am happy to announce the general availability of Amazon EC2 M6a instances featuring 3rd Gen AMD EPYC processors, running at frequencies up to 3.6 GHz and offering up to 35 percent better price performance than the previous-generation M5a instances.

You can launch M6a instances today in ten sizes in the AWS US East (N. Virginia), US West (Oregon), and Europe (Ireland) Regions as On-Demand, Spot, and Reserved Instances, or as part of a Savings Plan. Here are the specs:

| Name         | vCPUs | Memory (GiB) | Network Bandwidth (Gbps) | EBS Throughput (Gbps) |
|--------------|-------|--------------|--------------------------|-----------------------|
| m6a.large    | 2     | 8            | Up to 12.5               | Up to 6.6             |
| m6a.xlarge   | 4     | 16           | Up to 12.5               | Up to 6.6             |
| m6a.2xlarge  | 8     | 32           | Up to 12.5               | Up to 6.6             |
| m6a.4xlarge  | 16    | 64           | Up to 12.5               | Up to 6.6             |
| m6a.8xlarge  | 32    | 128          | 12.5                     | 6.6                   |
| m6a.12xlarge | 48    | 192          | 18.75                    | 10                    |
| m6a.16xlarge | 64    | 256          | 25                       | 13.3                  |
| m6a.24xlarge | 96    | 384          | 37.5                     | 20                    |
| m6a.32xlarge | 128   | 512          | 50                       | 26.6                  |
| m6a.48xlarge | 192   | 768          | 50                       | 40                    |

Compared to M5a instances, the new M6a instances offer:

  • A larger instance size, 48xlarge, with up to 192 vCPUs and 768 GiB of memory, enabling you to consolidate more workloads on a single instance. M6a also offers Elastic Fabric Adapter (EFA) support for workloads that benefit from lower network latency and highly scalable inter-node communication, such as HPC and video processing.
  • Up to 35 percent higher price performance per vCPU versus comparable M5a instances, up to 50 Gbps of networking speed, and up to 40 Gbps of Amazon EBS bandwidth, more than twice that of M5a instances.
  • Always-on memory encryption and support for new AVX2 instructions for accelerating encryption and decryption algorithms.

M6a instances expand the sixth-generation general-purpose instance portfolio and provide high-performance processing at a 10 percent lower cost than comparable x86-based instances. M6a instances are a good fit for general-purpose workloads such as web servers, application servers, and small data stores.

To learn more, visit the M6a instances page. Please send feedback to [email protected], the AWS forum for EC2, or your usual AWS Support contacts.

— Channy

New – Amazon EC2 G5g Instances Powered by AWS Graviton2 Processors and NVIDIA T4G Tensor Core GPUs

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-ec2-g5g-instances-powered-by-aws-graviton2-processors-and-nvidia-t4g-tensor-core-gpus/

AWS Graviton2 processors are custom-designed by AWS to enable the best price performance in Amazon EC2. Thousands of customers are realizing significant price performance benefits for a wide variety of workloads with Graviton2-based instances.

Today, we are announcing the general availability of Amazon EC2 G5g instances that extend Graviton2 price-performance benefits to GPU-based workloads including graphics applications and machine learning inference. In addition to Graviton2 processors, G5g instances feature NVIDIA T4G Tensor Core GPUs to provide the best price performance for Android game streaming, with up to 25 Gbps of networking bandwidth and 19 Gbps of EBS bandwidth.

These instances provide up to 30 percent lower cost per stream per hour for Android game streaming than x86-based GPU instances. G5g instances are also ideal for machine learning developers who are looking for cost-effective inference, have ML models that are sensitive to CPU performance, and leverage NVIDIA’s AI libraries.

G5g instances are available in six sizes, as shown below.

| Instance Name | vCPUs | Memory (GB) | NVIDIA T4G Tensor Core GPUs | GPU Memory (GB) | EBS Bandwidth (Gbps) | Network Bandwidth (Gbps) |
|---------------|-------|-------------|-----------------------------|-----------------|----------------------|--------------------------|
| g5g.xlarge    | 4     | 8           | 1                           | 16              | Up to 3.5            | Up to 10                 |
| g5g.2xlarge   | 8     | 16          | 1                           | 16              | Up to 3.5            | Up to 10                 |
| g5g.4xlarge   | 16    | 32          | 1                           | 16              | Up to 3.5            | Up to 10                 |
| g5g.8xlarge   | 32    | 64          | 1                           | 16              | 9                    | 12                       |
| g5g.16xlarge  | 64    | 128         | 2                           | 32              | 19                   | 25                       |
| g5g.metal     | 64    | 128         | 2                           | 32              | 19                   | 25                       |

These instances are a great fit for many interesting types of workloads. Here are a few examples:

  • Streaming Android gaming – With G5g instances, Android game developers can build natively on Arm-based GPU instances without the need for cross-compilation or emulation on x86-based instances. They can encode the rendered graphics and stream the game over the network to a mobile device. This simplifies development effort, shortens development time, and lowers the cost per stream per hour by up to 30 percent.
  • ML inference – G5g instances are also ideal for machine learning developers who are looking for cost-effective inference, have ML models that are sensitive to CPU performance, and leverage NVIDIA’s AI libraries. If you don’t have any dependencies on NVIDIA software, you may use Inf1 instances, which deliver up to 70 percent lower cost-per-inference than G4dn instances.
  • Graphics rendering – G5g instances are the most cost-effective option for customers with rendering workloads and dependencies on NVIDIA libraries. These instances also support rendering applications and use cases that leverage industry-standard APIs such as OpenGL and Vulkan.
  • Autonomous vehicle simulations – Several of our customers are designing and simulating autonomous vehicles that include multiple real-time sensors. They can use ray tracing to simulate sensor input in real time.

The instances are compatible with a very long list of graphics and machine learning libraries on Linux, including NVENC, NVDEC, nvJPEG, OpenGL, Vulkan, CUDA, cuDNN, cuBLAS, and TensorRT.
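Once an instance is up and the NVIDIA drivers are installed (for example, via the Deep Learning AMIs mentioned below), a quick sanity check might look like this sketch, which assumes PyTorch is installed for the second step:

# Confirm that the T4G GPU and driver are visible
nvidia-smi

# Verify that a CUDA-enabled framework can see the GPU
python3 -c "import torch; print(torch.cuda.get_device_name(0))"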

Available Now
The new G5g instances are available now, and you can start using them today in the US East (N. Virginia), US West (Oregon), and Asia Pacific (Seoul, Singapore, and Tokyo) Regions in On-Demand, Spot, Savings Plan, and Reserved Instance form. To learn more, see the EC2 pricing page.

You can use G5g instances with the AWS Deep Learning AMIs, which include NVIDIA drivers and popular ML frameworks, and with Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS) clusters for containerized ML applications.

You can send feedback to the AWS forum for Amazon EC2 or through your usual AWS Support contacts.

Channy

New for AWS Compute Optimizer – Resource Efficiency Metrics to Estimate Savings Opportunities and Performance Risks

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-compute-optimizer-resource-efficiency-metrics-to-estimate-savings-opportunities-and-performance-risks/

By applying the knowledge drawn from Amazon’s experience running diverse workloads in the cloud, AWS Compute Optimizer identifies workload patterns and recommends optimal AWS resources.

Today, I am happy to share that AWS Compute Optimizer now delivers resource efficiency metrics alongside its recommendations to help you assess how efficiently you are using AWS resources:

  • A dashboard shows you savings and performance improvement opportunities at the account level. You can dive into resource types and individual resources from the dashboard.
  • The Estimated monthly savings (On-Demand) and Savings opportunity (%) columns estimate the possible savings for over-provisioned resources. You can sort your recommendations using these two columns to quickly find the resources on which to focus your optimization efforts.
  • The Current performance risk column estimates the bottleneck risk with the current configuration for under-provisioned resources.

These efficiency metrics are available for Amazon Elastic Compute Cloud (Amazon EC2), AWS Lambda, and Amazon Elastic Block Store (EBS) at the resource and AWS account levels.

For multi-account environments, Compute Optimizer continuously calculates resource efficiency metrics at the individual account level in an AWS organization to help identify teams with low cost-efficiency or possible performance risks. This lets you create goals and track progress over time. You can quickly understand just how resource-efficient teams and applications are, easily prioritize recommendation evaluation and adoption by engineering teams, and establish a mechanism that drives a cost-aware culture and accountability across engineering teams.

Using Resource Efficiency Metrics in AWS Compute Optimizer
You can opt in using the AWS Management Console or the AWS Command Line Interface (CLI) to start using Compute Optimizer. You can enroll the account that you’re currently signed in to or all of the accounts within your organization. Depending on your choice, Compute Optimizer analyzes resources that are in your individual account or for each account in your organization, and then generates optimization recommendations for those resources.
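As a minimal sketch, the CLI opt-in looks like this; the --include-member-accounts flag applies when you run the command from your organization’s management account:

# Opt in to Compute Optimizer for this account and all member accounts
aws compute-optimizer update-enrollment-status \
    --status Active \
    --include-member-accounts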

To see your savings opportunity in Compute Optimizer, you should also opt in to AWS Cost Explorer and enable the rightsizing recommendations in the AWS Cost Explorer preferences page. For more details, see Getting started with rightsizing recommendations.

I already enrolled some time ago, and in the Compute Optimizer console I see the overall savings opportunity for my account.

Below that, I have a recap of the performance improvement opportunity. This includes an overview of the under-provisioned resources, as well as the performance risks that they pose by resource type.

Let’s dive into some of those savings. In the EC2 instances section, Compute Optimizer found 37 over-provisioned instances.

I follow the 37 instances link to get recommendations for those resources, and then sort the table by Estimated monthly savings (On-Demand) descending.

On the right, in the same table, I see the current instance type, the recommended instance type based on Compute Optimizer estimates, the difference in pricing, and whether there are platform differences between the current and recommended instance types.

I can select each instance to further drill down into the metrics collected, as well as the other possible instance types suggested by Compute Optimizer.

Back to the Compute Optimizer Dashboard, in the Lambda functions section, I see that eight functions have under-provisioned memory.

Again, I follow the 8 functions link to get recommendations for those resources, and then sort the table by Current performance risk. In my case, the risk is always low, but different values can help prioritize your activities.

Here, I see the current and recommended configured memory for those Lambda functions. I can select each function to get a view of the metrics collected. Choosing the memory allocated to Lambda functions is an optimization process that balances speed (duration) and cost. See Profiling functions with AWS Lambda Power Tuning in the documentation for more information.

Availability and Pricing
You can use resource efficiency metrics with AWS Compute Optimizer in any AWS Region where it is offered. For more information, see the AWS Regional Services List. There is no additional charge for this new capability. See the AWS Compute Optimizer pricing page for more information.

This new feature lets you implement a periodic workflow to optimize your costs:

  • You can start by reviewing savings opportunities for all of your accounts to identify which accounts have the highest savings opportunity (see the CLI sketch after this list).
  • Then, you can drill into those accounts with the highest savings opportunity. You can refer to the estimated monthly savings to see which recommendations can drive the largest absolute cost impact.
  • Finally, you can communicate optimization opportunities and priority order to the teams using those accounts.
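For the first step, a sketch that pulls the recommendation summaries for one account from the CLI might look like this; the account ID is a placeholder:

# Summarize Compute Optimizer findings for a single account
aws compute-optimizer get-recommendation-summaries \
    --account-ids 123456789012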

Start using AWS Compute Optimizer today to find and prioritize savings opportunities in your AWS account or organization.

Danilo

New for AWS Compute Optimizer – Enhanced Infrastructure Metrics to Extend the Look-Back Period to Three Months

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-compute-optimizer-enhanced-infrastructure-metrics-to-extend-the-look-back-period-to-three-months/

By using machine learning to analyze historical utilization metrics, AWS Compute Optimizer recommends optimal AWS resources for your workloads to reduce costs and improve performance. Over-provisioning resources can lead to unnecessary infrastructure costs, and under-provisioning resources can lead to poor application performance. Compute Optimizer helps you choose optimal configurations for three types of AWS resources: Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Elastic Block Store (EBS) volumes, and AWS Lambda functions, based on your utilization data. Today, I am happy to share that AWS Compute Optimizer now supports recommendation preferences where you can opt in or out of features that enhance resource-specific recommendations.

For EC2 instances, AWS Compute Optimizer analyzes Amazon CloudWatch metrics from the past 14 days to generate recommendations. For this reason, recommendations weren’t relevant for a subset of workloads that had monthly or quarterly patterns. For those workloads, you had to look for unoptimized resources and determine the right resource configurations over a longer period of time. This can be time-consuming and requires deep cloud expertise, especially for large organizations.

With the launch of recommendation preferences, Compute Optimizer now offers enhanced infrastructure metrics, a new paid recommendation preference feature that enhances recommendation quality for EC2 instances and Auto Scaling groups. Activating it extends the metrics look-back period to three months. You can activate enhanced infrastructure metrics for individual resources or at the AWS account or AWS organization level.

Let’s see how that works in practice.

Using Enhanced Infrastructure Metrics with AWS Compute Optimizer
Here, I am using the management account of my AWS organization to see organization-level preferences. In the left pane of the Compute Optimizer console, I choose Accounts. Here, there is a new section to set up Organization level preferences for enhanced infrastructure metrics. The console warns me that this is a paid feature.

I want to activate enhanced infrastructure metrics for EC2 instances running in the US East (N. Virginia) Region for all accounts in my organization. I choose the Edit button. For Resource type, I select EC2 instances. For Region, I select US East (N. Virginia). I check that the flag is active and save.

If I select one of the AWS accounts on this page, I can choose View preferences and override the setting for that specific account. For example, I can deactivate the feature for accounts that I use for testing, because EC2 instances there are created automatically by a CI/CD pipeline and are usually terminated within a few hours.

In the console Dashboard, I look at the overall recommendations for EC2 instances and Auto Scaling groups.

In the EC2 instances box, I choose View recommendations and then one of the instances. With the Edit button, I can activate or deactivate enhanced infrastructure metrics for this specific resource. Here, I can also see whether, considering all settings at the organization, account, and resource level, enhanced infrastructure metrics is actually active for this specific EC2 instance. I see Active (pending) here because I’ve just changed the setting, and it may take a few hours for Compute Optimizer to consider my updated preferences in its recommendations.

Below, I see the recommended options for the instance. Considering the current workload, I should change the instance type and size from c3.2xlarge to r5d.large and save some money.

In a few hours, Compute Optimizer updates its recommendations based on the latest three months of CloudWatch metrics. In this way, I get better suggestions for workloads that have monthly or quarterly activities.

Availability and Pricing
You can activate enhanced infrastructure metrics in the AWS Compute Optimizer account preferences page for all the accounts in your organization or for individual accounts. If you need more granular controls, you can activate (or deactivate) for an individual resource (Auto Scaling group or EC2 instance) in the resource detail page. You can also activate enhanced infrastructure metrics using the AWS Command Line Interface (CLI) or AWS SDKs.
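For example, a sketch of activating the preference for a single account from the CLI (the account ID is a placeholder) might look like:

# Activate enhanced infrastructure metrics for EC2 recommendations
# in one member account, run from the management account
aws compute-optimizer put-recommendation-preferences \
    --resource-type Ec2Instance \
    --scope name=AccountId,value=123456789012 \
    --enhanced-infrastructure-metrics Active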

Default preferences in Compute Optimizer (with 14-day look-back) are free. Enabling enhanced infrastructure metrics costs $0.0003360215 per resource per hour and is charged based on the number of hours per month the resource is running. For a resource running a full 31-day month, that’s $0.25. For more information, see the Compute Optimizer pricing page.

Use enhanced infrastructure metrics to generate recommendations with Compute Optimizer based on metrics from the past three months.

Danilo

Using EC2 Auto Scaling predictive scaling policies with Blue/Green deployments

Post Syndicated from Pranaya Anshu original https://aws.amazon.com/blogs/compute/retaining-metrics-across-blue-green-deployment-for-predictive-scaling/

This post is written by Ankur Sethi, Product Manager for EC2.

Amazon EC2 Auto Scaling allows customers to realize the elasticity benefits of AWS by automatically launching and shutting down instances to match application demand. Earlier this year we introduced predictive scaling, a new EC2 Auto Scaling policy that predicts demand and proactively scales capacity, resulting in better availability of your applications (if you are new to predictive scaling, I suggest you read this blog post before proceeding). In this blog, I will walk you through how to use a new feature, predictive scaling custom metrics, to configure predictive scaling for an application that follows a Blue/Green deployment strategy.

Blue/Green Deployment using Auto Scaling groups

The fundamental idea behind Blue/Green deployment is to shift traffic between two environments that are running different versions of your application. The Blue environment represents your current application version serving production traffic. In parallel, the Green environment is staged running the newer version. After the Green environment is ready and tested, production traffic is redirected from Blue to Green either all at once or in increments, similar to canary deployments. At the end of the load transfer, you can either terminate the Blue Auto Scaling group or reuse it to stage the next version update. Irrespective of the approach, when a new Auto Scaling group is created as part of Blue/Green deployment, EC2 Auto Scaling, and in turn predictive scaling, does not know that this new Auto Scaling group is running the same application that the Blue one was. Predictive scaling needs a minimum of 24 hours of historical metric data and up to 14 days for the most accurate results, neither of which the new Auto Scaling group has when the Blue/Green deployment is initiated. This means that if you frequently conduct Blue/Green deployments, predictive scaling regularly pauses for at least 24 hours, and you may experience less optimal forecasts after each deployment.

In Blue/Green deployment you have two Auto Scaling groups - Blue Auto Scaling Group running the current version and Green Auto Scaling group staged with the updated version. Once you are ready to make the updated version live, you switch production traffic from Blue to Green through your load balancer or your DNS settings.

Figure 1. In Blue/Green deployment you have two Auto Scaling groups running different versions of an application. You switch production traffic from Blue to Green to make the updated version public.

How to retain your application load history using predictive scaling custom metrics

To make predictive scaling work for Blue/Green deployment scenarios, we need to aggregate load metrics from both Blue and Green environments before using it to forecast capacity as depicted in the following illustration. The key benefit of using the aggregated metric is that, throughout the Blue/Green deployment, predictive scaling can continue to forecast load correctly without a pause, and it can retain the entire 14 days of data to provide the best predictions. For example, if your application observes different patterns during a weekday vs. a weekend, predictive scaling will be able to retain knowledge of that pattern after the deployment.

The aggregated metrics of Blue and Green Auto Scaling groups give you the total load traffic of an application. Prior to Blue/Green deployment, Blue Auto Scaling group served the entire traffic while after the deployment, Green Auto Scaling group handles it. There can be a period of overlap where traffic is split between the two Auto Scaling groups. By adding the traffic on two Auto Scaling groups, you get a single time series which allows predictive scaling to generate forecasts based on complete set of 14 days of history.

Figure 2. The aggregated metrics of Blue and Green Auto Scaling groups give you the total load traffic of an application. Predictive scaling gives most accurate forecasts when based on last 14 days of history.

Example

Let’s explore this solution with an example. I created a sample application and load simulation infrastructure that you can use to follow along by deploying this example AWS CloudFormation Stack in your account. This example deploys two Auto Scaling groups: ASG-myapp-v1 (Blue) and ASG-myapp-v2 (Green) to run a sample application. Only ASG-myapp-v1 is attached to a load balancer and has recurring requests generated for its application. I have applied a target tracking policy and predictive scaling policy to maintain CPU utilization at 25%. You should keep this Auto Scaling group running for at least 24 hours before proceeding with the rest of the example to have enough load generated for predictive scaling to start forecasting.

ASG-myapp-v2 does not have any requests generated of its own. In the following sections, to highlight how metric aggregation works, I will apply a predictive scaling policy to it using Custom Metric configurations aggregating CPU Utilization metrics of both Auto Scaling groups. I’ll then verify if the forecasts are generated for ASG-myapp-v2 based on the aggregated metrics.

As part of your Blue/Green deployment approach, if you alternate between exactly two Auto Scaling groups, then you can use simple math expressions such as SUM(m1, m2), where m1 and m2 are the metrics for each Auto Scaling group. However, if you create new Auto Scaling groups for each deployment, then you need to refer to the metrics of all the Auto Scaling groups that were used to run the application in the last 14 days. You can simplify this task by following a naming convention for your Auto Scaling groups and leveraging the SEARCH expression to select the required metrics. The naming convention here is ASG-myapp-vx, where we name each new Auto Scaling group according to the version number (ASG-myapp-v1, ASG-myapp-v2, and so on). Using a SEARCH('{Namespace, DimensionName1, DimensionName2} SearchTerm', 'Statistic', Period) expression, I can identify the metrics of all the Auto Scaling groups whose names match the SearchTerm. I can then aggregate the metrics by wrapping this in another expression, so the final expression looks like SUM(SEARCH(...)).

Step 1: Apply predictive scaling policy to Green Auto Scaling group ASG-myapp-v2 with custom metrics

To generate forecasts, the predictive scaling algorithm needs three metrics as input: a load metric that represents total demand on an Auto Scaling group, the number of instances that represents the capacity of the Auto Scaling groups, and a scaling metric that represents the average utilization of the instances in the Auto Scaling groups.

Here is how it would work with CPU Utilization metrics. First, create a scaling configuration file where you define the metrics, target value, and the predictive scaling mode for your policy.

cat > predictive-scaling-policy-cpu.json << 'EoF'
{
    "MetricSpecifications": [
        {
            "TargetValue": 25,
            "CustomizedLoadMetricSpecification": {
            },
            "CustomizedCapacityMetricSpecification": {
            },
            "CustomizedScalingMetricSpecification": {
            }
        }
    ],
    "Mode": "ForecastOnly"
}
EoF

I’ll elaborate on each of these metric specifications separately in the following sections. You can download the complete JSON file in GitHub.

Customized Load Metric Specification: You can aggregate the demand across your Auto Scaling groups by using the SUM expression. The demand forecasts are generated every hour, so this metric has to be aggregated with a time period of 3600 seconds.

"CustomizedLoadMetricSpecification": {
    "MetricDataQueries": [
        {
            "Id": "load_sum",
            "Expression": "SUM(SEARCH('{AWS/EC2,AutoScalingGroupName} MetricName=\"CPUUtilization\" ASG-myapp', 'Sum', 3600))"
        }
    ]
}

Customized Capacity Metric Specification: Your customized capacity metric represents the total number of instances across your Auto Scaling groups. Similar to the load metric, the aggregation across Auto Scaling groups is done by using the SUM expression. Note that this metric has to use a 300-second period.

"CustomizedCapacityMetricSpecification": {
    "MetricDataQueries": [
        {
            "Id": "capacity_sum",
            "Expression": "SUM(SEARCH('{AWS/AutoScaling,AutoScalingGroupName} MetricName=\"GroupInServiceIntances\" ASG-myapp', 'Average', 300))"
        }
    ]
}

Customized Scaling Metric Specification: Your customized scaling metric represents the average utilization of the instances across your Auto Scaling groups. We cannot simply SUM the scaling metric of each Auto Scaling group, as utilization is an average that depends on the capacity and demand of each Auto Scaling group. Instead, we need to compute the weighted average unit load (load metric divided by capacity). To do so, we use the expression Sum(load)/Sum(capacity). Note that this metric also has to use a 300-second period.

"CustomizedScalingMetricSpecification": {
    "MetricDataQueries": [
        {
            "Id": "capacity_sum",
            "Expression": "SUM(SEARCH('{AWS/AutoScaling,AutoScalingGroupName} MetricName=\"GroupInServiceIntances\" ASG-myapp', 'Average', 300))"
            “ReturnData”: “False”
        },
        {
            "Id": "load_sum",
            "Expression": "SUM(SEARCH('{AWS/EC2,AutoScalingGroupName} MetricName=\"CPUUtilization\" ASG-myapp', 'Sum', 300))"
            “ReturnData”: “False”
        },
        {
            "Id": "weighted_average",
            "Expression": "load_sum / capacity_sum”
       }
    ]
}

Once you have created the configuration file, you can run the following CLI command to add the predictive scaling policy to your Green Auto Scaling group.

aws autoscaling put-scaling-policy \
    --auto-scaling-group-name "ASG-myapp-v2" \
    --policy-name "CPUUtilizationpolicy" \
    --policy-type "PredictiveScaling" \
    --predictive-scaling-configuration file://predictive-scaling-policy-cpu.json

The forecasts are generated almost immediately for the Green Auto Scaling group (ASG-myapp-v2), as if this new Auto Scaling group had been running the application all along. You can validate this using the predictive scaling forecasts API. You can also use the console to review forecasts by navigating to the Amazon EC2 Auto Scaling console, selecting the Auto Scaling group that you configured with predictive scaling, and viewing the predictive scaling policy located under the Automatic Scaling section of the Auto Scaling group details view.
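For example, you can pull the forecast from the CLI with a call like the following; the time window values are placeholders:

# Retrieve the load and capacity forecasts for the Green group
aws autoscaling get-predictive-scaling-forecast \
    --auto-scaling-group-name ASG-myapp-v2 \
    --policy-name CPUUtilizationpolicy \
    --start-time "2021-11-01T00:00:00Z" \
    --end-time "2021-11-03T00:00:00Z"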

EC2 Auto Scaling console shows you the capacity and load forecasts generated by your predictive scaling policies against the actual metric values. In this case, we are looking at the forecasts generated for Green Auto Scaling group. Since we aggregated metrics across Auto Scaling groups, the forecasts are generated as if this Auto Scaling group has been running the application from the beginning. You see the actual load and capacity values also aggregated for easier comparison of the forecasted and actual values.

Figure 3. EC2 Auto Scaling console showing capacity and load forecasts for Green Auto Scaling group. The forecasts are generated as if this Auto Scaling group has been running the application from the beginning.

Step 2: Terminate ASG-myapp-v1 and see predictive scaling forecasts continuing

Now complete the Blue/Green deployment pattern by terminating the Blue Auto Scaling group, and then go to the console to check if the forecasts are retained for the Green Auto Scaling group.

aws autoscaling delete-auto-scaling-group \
 --auto-scaling-group-name ASG-myapp-v1

You can quickly check the forecasts on the console for ASG-myapp-v2 to find that terminating the Blue Auto Scaling group has no impact on the forecasts of the Green one. The forecasts are all based on aggregated metrics. As you continue to do Blue/Green deployments in the future, the history of all the prior Auto Scaling groups will persist, ensuring that your predictions are always based on the complete metric history. Before we conclude, remember to delete the resources you created; to avoid unnecessary costs, delete the CloudFormation stack used in this example.

Conclusion

Custom metrics give you the flexibility to base predictive scaling on metrics that most accurately represent the load on your Auto Scaling groups. This blog focused on the use case where we aggregated metrics from different Auto Scaling groups across Blue/Green deployments to get accurate forecasts from predictive scaling. You don’t have to wait for 24 hours to get the first set of forecasts or manually set capacity when the new Auto Scaling group is created to deploy an updated version of the application. You can read about other use cases of custom metrics and metric math in the public documentation such as scaling based on queue metrics.

How to enable secure seamless single sign-on to Amazon EC2 Windows instances with AWS SSO

Post Syndicated from Todd Rowe original https://aws.amazon.com/blogs/security/how-to-enable-secure-seamless-single-sign-on-to-amazon-ec2-windows-instances-with-aws-sso/

Today, we’re launching new functionality that simplifies the experience of securely accessing your AWS compute instances running Microsoft Windows. We took on this update in response to customer feedback, to create a more streamlined experience for administrators and users to more securely access their EC2 Windows instances. The new experience utilizes your existing identity solutions to run and manage your Microsoft Windows workloads on AWS. You can create and administer users in AWS Single Sign-On (AWS SSO) or an AWS SSO supported identity provider (such as Okta, Ping, and OneLogin), and provide one-click single sign-on to your EC2 Windows instances from the AWS Fleet Manager console. You can also use your existing corporate usernames, passwords, and multi-factor authentication devices to securely access your EC2 Windows instances, without having to enter your credentials multiple times.

Using AWS SSO eliminates the use of shared administrator credentials and the need to configure remote access client software. You can centrally grant and revoke access to your EC2 Windows instances at scale across multiple AWS accounts. For example, if you remove an employee from your AWS SSO integrated identity system, their access to all AWS resources (including EC2 Windows instances) is automatically revoked. Individual user actions can now be viewed in the Amazon EC2 Windows instances event log, making it easier to meet audit and compliance requirements.

AWS SSO background

AWS SSO simplifies managing SSO access to AWS accounts and business applications, and it is the central location where you can create or connect your workforce identities in AWS. You can control SSO access and user permissions across all your AWS accounts in AWS Organizations. You can choose to manage access to your AWS accounts, to cloud applications, or both.

When managing access to AWS accounts, AWS SSO enables you to define and assign roles centrally across your AWS Organizations account using permission sets. Permission sets are role definitions (templates) that AWS SSO uses to create and maintain roles in your AWS Organizations accounts. The permission set defines the session duration and policies for the role. When you assign a permission set to a user or group in a selected AWS account, AWS SSO creates a corresponding role in the target account, and AWS SSO controls access to the role through the AWS SSO user portal.

This post uses a permission set that manages access to AWS Fleet Manager to deliver one-click access into EC2 instances.

You will accomplish this in three steps:

  1. Create an AWS SSO permission set (for example, demoFMPermissionSet)
  2. Assign the permission set to an existing AWS SSO group (for example, demoFMGroup)
  3. Log in to the AWS SSO user portal and connect to your EC2 Windows instance via the AWS Fleet Manager console

Prerequisites

The prerequisites for this example are that you have:

  1. Configured AWS SSO in your account with provisioned users and groups
  2. An EC2 Windows instance managed by AWS Systems Manager Fleet Manager

Solution architecture

The following diagram shows the steps you will follow to configure and use an AWS SSO user identity to log in to an EC2 Windows instance.

Figure 1: Architecture diagram showing steps implemented in this solution

How it works

The AWS SSO permission set creates a role in a target account that gives an authorized user permissions to use AWS Fleet Manager to sign in to EC2 Windows instances. When a user chooses the role in the account, the user signs in to the AWS Fleet Manager console and selects the EC2 instance where they want to sign in.

AWS Fleet Manager creates a local Windows user account and a credential for that user, and then automates their sign-in to the instance.

To create an AWS SSO permission set

This procedure creates a permission set that grants assigned users and groups permissions to use AWS Fleet Manager for single sign-on to EC2 instances.

  1. From the AWS SSO console, go to AWS Accounts, select the Permission sets tab, select Create permission set and choose Create a custom permission set.
  2. Name your permission set, and fill out the required fields, making sure to select Create a custom permissions policy at the bottom of the page. See Sample custom permissions policy below for details on the policy.
  3. After creating the custom permissions policy, you can also apply optional tagging. When you are done, review and choose Create to complete creating your custom permission set, as shown in Figure 2.

 

Figure 2: Reviewing the custom permission set

Sample custom permissions policy

This is the sample policy you’ll use; you can download it here.
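
The downloadable policy is too long to reproduce in full here, so the following is an abbreviated sketch of its overall shape; the statement names are illustrative, the ssm:SendCommand statement is trimmed to the document resource discussed below, and the line numbers cited in the next paragraphs refer to the full policy rather than to this sketch.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EC2",
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstances"],
      "Resource": "*"
    },
    {
      "Sid": "SSM",
      "Effect": "Allow",
      "Action": [
        "ssm:DescribeInstanceInformation",
        "ssm:GetCommandInvocation"
      ],
      "Resource": "*"
    },
    {
      "Sid": "SSMSendCommand",
      "Effect": "Allow",
      "Action": "ssm:SendCommand",
      "Resource": "arn:aws:ssm:*:*:document/AWSSSO-CreateSSOUser"
    },
    {
      "Sid": "GuiConnect",
      "Effect": "Allow",
      "Action": [
        "ssm-guiconnect:CancelConnection",
        "ssm-guiconnect:GetConnection",
        "ssm-guiconnect:StartConnection"
      ],
      "Resource": "*"
    }
  ]
}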

This permission policy contains a separate statement ID (Sid) for each service, with the required actions for each.

On line 84 of the full policy, notice the reference to an AWSSSO-CreateSSOUser document resource. This document is responsible for creating a local Windows account for the logged-in AWS SSO user, as well as setting and resetting that user’s password for automatic sign-in to the Windows instance.

On lines 96-98 of the full policy, you will see the new ssm-guiconnect actions. These are used to make the secure connection to your EC2 Windows instance and render the GUI desktop in the Fleet Manager console.

To assign your AWS SSO group

In this procedure, we will select two AWS accounts in our AWS organization, and grant our AWS SSO group access to the previously created permission set that enables sign-in via Fleet Manager.

  1. From the AWS SSO console, navigate to AWS accounts and select an account (for example, demoAccount1 and demoAccount2), as shown in Figure 3.
  2. Choose the Assign users button. If you wish, you may also assign access to multiple groups or to users individually.

     Figure 3: Selecting an AWS account to assign users or groups

  3. To enable multiple AWS SSO users to access this feature, choose an AWS SSO group from the Groups tab, and then choose the Next button, as shown in Figure 4.

     Figure 4: Assigning a group to AWS accounts

  4. Select the permission set you created previously and choose the Next button, as shown in Figure 5.

     Figure 5: Selecting the permission set for the AWS accounts

  5. Review your choices, and choose Submit to complete your assignments, as shown in Figure 6.

     Figure 6: Reviewing and submitting assignments to AWS accounts

AWS SSO will now use the permission set definition to create a role in each selected account, which grants users access to sign in via Fleet Manager. Users gain access to that role by signing into the AWS SSO user portal.
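
If you prefer to script these steps rather than use the console, the same permission set and assignment can be created with the AWS CLI. The following is a minimal sketch; the instance ARN, permission set ARN, account ID, and group ID are placeholders you would look up in your own environment (for example, with aws sso-admin list-instances and aws identitystore list-groups), and fleet-manager-policy.json is an assumed local copy of the custom policy from the previous section.

# Create the permission set (the command returns the permission set ARN)
aws sso-admin create-permission-set \
    --instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE \
    --name demoFMPermissionSet \
    --session-duration PT8H

# Attach the custom Fleet Manager policy as an inline policy
aws sso-admin put-inline-policy-to-permission-set \
    --instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE \
    --permission-set-arn arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE \
    --inline-policy file://fleet-manager-policy.json

# Assign the demoFMGroup group to the permission set in a target account
aws sso-admin create-account-assignment \
    --instance-arn arn:aws:sso:::instance/ssoins-EXAMPLE \
    --target-id 111122223333 \
    --target-type AWS_ACCOUNT \
    --permission-set-arn arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE \
    --principal-type GROUP \
    --principal-id a1b2c3d4-example-group-id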

To access Fleet Manager managed EC2 instances

  1. From the console, navigate to your AWS SSO user portal URL and log in as any AWS SSO user who is a member of the group (for example, demoFMGroup) you selected in step 3 above.
  2. From the AWS SSO user portal page, choose Management console and navigate to the Fleet Manager console where you have your EC2 Windows managed instance, as shown in Figure 7.

     Figure 7: Navigating to the Management console from the user portal

  3. Select a managed Windows instance, then select Instance actions and Connect with Remote Desktop, as shown in Figure 8.

     Figure 8: Connecting with Remote Desktop

  4. Select Single Sign-On, and then select Connect, as shown in Figure 9. This automatically logs you in using your AWS SSO credentials. If this is the first time you connect to the instance, a new local user is created.

     Figure 9: Selecting Single Sign-On

     Once connected, you will see your EC2 Windows instance in the All sessions tab, which supports up to four concurrent sessions in a single view, as shown in Figure 10. For a single-session view, select the Instance ID tab.

     Figure 10: Selecting the expanded desktop view

  5. From the single-session tab, we can see that AWS Fleet Manager created a local Windows Server user for the AWS SSO user (demoUser1).

After creating the local user, AWS Fleet Manager used the credentials it created to sign in to the EC2 Windows server as sso-demoUser1. You can see this sign-in event in the Windows Event Viewer, giving you individual user logging on your EC2 Windows servers. These logs are also available from within the Fleet Manager console.

Figure 11: Showing AWS SSO username in Amazon EC2 Windows instance event log

Conclusion

This post described how to provide a single sign-in experience to Windows EC2 instances using AWS Fleet Manager with AWS Single Sign-On. Doing this allows you to create users in AWS SSO, or to connect any supported identity provider to AWS SSO, and to give users one-click access to their EC2 instances through AWS Fleet Manager.

This is done by creating an AWS SSO permission set that grants users access to AWS Fleet Manager, then assigning a group from AWS SSO to the permission set in the selected AWS accounts. Users can sign into the AWS SSO user portal, navigate to the AWS Fleet Manager, select their Windows EC2 instance, and land in the Windows user experience without having to enter Windows credentials separately.

To learn more about AWS SSO, visit the AWS Single Sign-On Documentation. To learn more about Fleet Manager, visit the AWS Systems Manager Fleet Manager Documentation.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Single Sign-On forum.

Want more AWS Security news? Follow us on Twitter.

Author

Todd Rowe

Todd is a Principal Product Manager focused on AWS workforce identity products. He enjoys tackling complex customer problems through intuitive connected solutions. Outside of work, Todd enjoys all water sports, mountain biking, and live music.

New – Amazon EC2 R6i Memory-Optimized Instances Powered by the Latest Generation Intel Xeon Scalable Processors

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-amazon-ec2-r6i-memory-optimized-instances-powered-by-the-latest-generation-intel-xeon-scalable-processors/

In August, we introduced the general-purpose Amazon EC2 M6i instances powered by the latest generation Intel Xeon Scalable processors (code-named Ice Lake) with an all-core turbo frequency of 3.5 GHz. Compute-optimized EC2 C6i instances were also made available last month.

Today, I am happy to share that we are expanding our sixth-generation x86-based offerings to include memory-optimized Amazon EC2 R6i instances.

Here’s a quick recap of the advantages of the new R6i instances compared to R5 instances:

  • A larger instance size (r6i.32xlarge) with 128 vCPUs and 1,024 GiB of memory that makes it easier and more cost-efficient to consolidate workloads and scale up applications
  • Up to 15 percent improvement in compute price/performance
  • Up to 20 percent higher memory bandwidth
  • Up to 40 Gbps of bandwidth to Amazon Elastic Block Store (EBS) and 50 Gbps of network bandwidth, which is 2x more than R5 instances
  • Always-on memory encryption

R6i instances are SAP Certified and are an ideal fit for memory-intensive workloads such as SQL and NoSQL databases, distributed web scale in-memory caches like Memcached and Redis, in-memory databases, and real-time big data analytics like Apache Hadoop and Apache Spark clusters.

Compared to M6i and C6i instances, the only difference is in the amount of memory that is included per vCPU. R6i instances are available in ten sizes:

Name vCPUs Memory (GiB) Network Bandwidth (Gbps) EBS Throughput (Gbps)
r6i.large 2 16 Up to 12.5 Up to 10
r6i.xlarge 4 32 Up to 12.5 Up to 10
r6i.2xlarge 8 64 Up to 12.5 Up to 10
r6i.4xlarge 16 128 Up to 12.5 Up to 10
r6i.8xlarge 32 256 12.5 10
r6i.12xlarge 48 384 18.75 15
r6i.16xlarge 64 512 25 20
r6i.24xlarge 96 768 37.5 30
r6i.32xlarge 128 1024 50 40
r6i.metal 128 1024 50 40

Like M6i and C6i instances, these new R6i instances are built on the AWS Nitro System, which is a collection of building blocks that offloads many of the traditional virtualization functions to dedicated hardware, delivering high performance, high availability, and highly secure cloud instances.

As with all sixth generation EC2 instances, you may need to upgrade your Elastic Network Adapter (ENA) for optimal networking performance. For more information, see this article about migrating an EC2 instance to a sixth-generation instance in the AWS Knowledge Center.

R6i instances support Elastic Fabric Adapter (EFA) on r6i.32xlarge and r6i.metal instances for workloads that benefit from lower network latency, such as HPC and video processing.
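
If you want to experiment with EFA on these sizes, the following AWS CLI launch command is a minimal sketch; the AMI, key pair, subnet, and security group IDs are placeholders, and the AMI is assumed to already include the EFA driver.

# Launch an EFA-enabled r6i.32xlarge (all IDs below are placeholders)
aws ec2 run-instances \
    --instance-type r6i.32xlarge \
    --image-id ami-0123456789abcdef0 \
    --count 1 \
    --key-name my-key-pair \
    --network-interfaces "DeviceIndex=0,Groups=sg-0123456789abcdef0,SubnetId=subnet-0123456789abcdef0,InterfaceType=efa"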

Availability and Pricing
EC2 R6i instances are available today in four AWS Regions: US East (N. Virginia), US West (Oregon), US East (Ohio), and Europe (Ireland). As usual with EC2, you pay for what you use. For more information, see the EC2 pricing page.

Danilo

Field Notes: Building On-Demand Disaster Recovery for IBM DB2 on AWS

Post Syndicated from João Bozelli original https://aws.amazon.com/blogs/architecture/field-notes-building-on-demand-disaster-recovery-for-ibm-db2-on-aws/

With the increased adoption of critical applications running in the cloud, customers often find themselves revisiting traditional strategies that were adopted for on-premises workloads. When it comes to IBM DB2, one of the first decisions to make is to decide what backup and restore method will be used.

In this blog post, we will show you how IT architects, database administrators, and cloud administrators can use AWS services such as Amazon Machine Images (AMIs) and Amazon Simple Storage Service (Amazon S3) to build on-demand disaster recovery. This approach is useful for organizations that have a flexible Recovery Time Objective (RTO) and want to reduce cost by provisioning the target environment only when it is needed.

Architecture overview

Figure 1. Architecture of AWS services used in this blog post

Figure 1 shows the Amazon Elastic Compute Cloud (Amazon EC2) instance running the DB2 database in the primary Region (São Paulo, in this example) and performing backups to Amazon S3 with a script initiated by AWS Systems Manager. The backups in Amazon S3 are then replicated to the secondary Region (N. Virginia, in this example) by the S3 Cross-Region Replication (CRR) feature.

AWS Backup provides automation by performing the AMI copy and, in a similar fashion to the database backups, copying the AMIs to the secondary Region as well. You can further enhance the backup mechanism by activating monitoring through Amazon CloudWatch and using Amazon Simple Notification Service (Amazon SNS) to send out alerts in the event of failures. The architectural considerations are outlined in detail below.

Configuring IBM DB2 native data backup to Amazon S3

Database backups are stored in Amazon S3, which stores objects redundantly within a Region by default and can replicate them to another Region using CRR. Since version 11.1, IBM DB2 running on Linux natively supports data backups to Amazon S3. To create this architecture, follow these steps:

  1. Log in to the Linux server and create a PKCS#12 keystore to hold the access key ID and secret access key that will be used to transfer the data to Amazon S3. The remote storage credentials will be stored in this keystore.
cd /db2/db2<sid>/
mkdir .keystore
gsk8capicmd_64 -keydb -create -db "/db2/db2<sid>/.keystore/db6-s3.p12" -pw "<password>" -type pkcs12 -stash
  2. Configure IBM DB2 to use the keystore with the KEYSTORE_LOCATION and KEYSTORE_TYPE parameters.
db2 "update dbm cfg using keystore_location /db2/db2<sid>/.keystore/db6-s3.p12 keystore_type pkcs12"
  3. Validate that the parameters were successfully updated.
db2 get dbm cfg |grep -i KEYSTORE
 Keystore type                           (KEYSTORE_TYPE) = PKCS12
 Keystore location                   (KEYSTORE_LOCATION) = /db2/db2<sid>/.keystore/db6-s3.p12
  4. Create an S3 bucket in the same Region where your EC2 instance running the IBM DB2 database is located. Ensure that all security best practices are followed for the creation of the bucket. This bucket will store the backup images. You can create different folders to store different objects. For example, you can store the configuration files in a different path, or separate backups from different IBM DB2 instances by folders inside one bucket.

Figure 2. Example bucket for storing backups

In this example, the primary folder for this database is SBX. The folder data will store the data backups, the folder config will store the configuration parameters, the folder keystore will store the backup of the keystore, and the folder logs will store the database logs.

  5. A user with programmatic access is required, because the only method of authentication available is using an access key (access key ID and secret access key). Create the user with the proper S3 permissions (the best practice is to use the principle of least privilege) and note the access key ID and secret access key. Then, create an IBM DB2 storage access alias using the following syntax:
db2 "catalog storage access alias <alias_name> vendor S3 server <S3 endpoint> user '<access_key>' password '<secret_access_key>' container '<bucket_name>'"
  6. Set the staging path where the backups will be stored before they are moved to Amazon S3. This is done by defining the following environment variable. Ensure this is set so that the backup is not written to an unwanted path.
db2set DB2_OBJECT_STORAGE_LOCAL_STAGING_PATH=/backup/staging/data
  7. To validate that the variable was properly set, check that the IBM DB2 variable DB2_OBJECT_STORAGE_LOCAL_STAGING_PATH is set, as follows:
db2set |grep -i STAGING

DB2_OBJECT_STORAGE_LOCAL_STAGING_PATH=/backup/staging/data
  8. Initiate the database backup with the following command or with your backup script.

Note: Make sure that the target is DB2REMOTE, as follows:

db2 BACKUP DATABASE <instance> TO DB2REMOTE://<alias>//<path>/<additional path> compress without prompting

While the backup is running, you will see data being stored in the staging directory (for this example: /backup/staging/data), and then uploaded to Amazon S3.

The backup script can be integrated with AWS Systems Manager maintenance windows to run on a schedule, giving you control and visibility. When combined with Amazon SNS, you can send out notifications on success, failure, or both; a sketch of such a wrapper follows.
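
The sketch below shows one way such a wrapper could look; the database name, storage alias, and SNS topic ARN are placeholders, not part of the setup above.

#!/bin/bash
# db2-backup.sh - hypothetical wrapper, run from a Systems Manager
# maintenance window as the DB2 instance owner.

DB_NAME="SBX"                                                     # placeholder database name
TARGET="DB2REMOTE://<alias>//SBX/data"                            # alias created in step 5
TOPIC_ARN="arn:aws:sns:sa-east-1:111122223333:db2-backup-alerts"  # placeholder topic

if db2 "BACKUP DATABASE ${DB_NAME} TO ${TARGET} compress without prompting"; then
    aws sns publish --topic-arn "${TOPIC_ARN}" \
        --subject "DB2 backup succeeded" \
        --message "Backup of ${DB_NAME} to Amazon S3 completed at $(date)."
else
    aws sns publish --topic-arn "${TOPIC_ARN}" \
        --subject "DB2 backup FAILED" \
        --message "Backup of ${DB_NAME} failed on $(hostname); check db2diag.log."
    exit 1
fi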

Set log and DB2 config backup to Amazon S3

There are different options when it comes to storing the database logs in Amazon S3. In this example, we’re using a very simple script initiated by AWS Systems Manager to sync the logs from the staging disk to Amazon S3 (a minimal version is sketched after the figures below). This, combined with CRR, increases the durability of the backup by replicating the logs to another Region of your choice. The same backup method is applied to the IBM DB2 configuration files (parameters and variables) and the keystore. Figure 3 shows the CRR configured on the target bucket, which automatically replicates the data to a secondary Region (us-east-1).

Figure 3. Example buckets for IBM DB2 backup and disaster recovery, respectively

Figure 4. Amazon S3 Replication rules configured from sa-east-1 to us-east-1 (São Paulo to N. Virginia)

Figure 5. IBM DB2 logs backed up in São Paulo (sa-east-1) and replicated to N. Virginia (us-east-1)
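
For reference, the log and configuration sync described above can be as simple as the following; the bucket name and local staging paths are placeholders that mirror the folder layout shown in Figure 2.

# Sync archived DB2 logs and configuration files to S3; CRR then
# replicates the objects to the secondary Region automatically.
aws s3 sync /backup/staging/logs s3://<bucket_name>/SBX/logs/ --only-show-errors
aws s3 sync /backup/staging/config s3://<bucket_name>/SBX/config/ --only-show-errors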

Amazon S3 Lifecycle policy

For this use case, we have defined a lifecycle policy that keeps the objects (full and log backups) in Amazon S3 Standard for 30 days, after which they are moved to Amazon S3 Standard-IA. After another 30 days, the objects in Amazon S3 Standard-IA are deleted. When used in the context of a database, this allows you to automatically manage the lifecycle of your backups. If you have compliance requirements to store specific backups with longer retention, you can back up to a separate folder (prefix) with a different lifecycle rule. A CLI sketch of this rule follows Figure 6.

Figure 6. Amazon S3 Lifecycle policy configured for buckets in São Paulo (sa-east-1) and N. Virginia (us-east-1)
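
Expressed with the AWS CLI, the rule described above could look like the following sketch; the bucket name and prefix are placeholders, and the expiration at day 60 assumes deletion 30 days after the transition to S3 Standard-IA.

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "db2-backup-lifecycle",
      "Status": "Enabled",
      "Filter": { "Prefix": "SBX/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" }
      ],
      "Expiration": { "Days": 60 }
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration \
    --bucket <bucket_name> \
    --lifecycle-configuration file://lifecycle.json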

AMI to aid with automation

Up to this point, this blog post has covered how you can manage the backups for a better Recovery Point Objective (RPO). However, let’s consider what happens in case of a disaster or if you have issues with the server running the IBM DB2 database. The Recovery Time Objective (RTO) will be higher because you will have to launch an EC2 instance, prepare the server, install the IBM DB2 database, and restore the full data and log backups.

To reduce your RTO, we recommend using automated AMI backups for your EC2 instance. AWS Backup helps you generate automated AMIs based on tags and resource IDs. AWS Backup can ship the AMI backup generated from your instance to another Region, for a multi-Region disaster recovery strategy.

In this example, we have created an AWS Backup plan that runs twice a day and ships a copy of the AMI from São Paulo (sa-east-1) to N. Virginia (us-east-1). A sketch of such a plan follows Figure 7.

Figure 7. Automated AMIs copied from São Paulo (sa-east-1) to N. Virginia (us-east-1) by AWS Backup

Performance considerations

It is important to discuss the factors that impact overall backup and restore performance, and ultimately the RTO.

We recommend using VPC endpoints to ensure that the traffic from your EC2 instance to Amazon S3 does not traverse the internet, and to provide improved throughput for data upload; see the sketch after this paragraph. Another important factor is the type of EBS volumes used for storing the IBM DB2 data files. In this example, covering a 170 GB database, a single gp2 volume was used, not striped with Logical Volume Manager (LVM). Because the degree of parallelism (the number of tablespaces read in parallel by the IBM DB2 backup process) can increase CPU usage, caution is warranted when running online backups so as not to put too much overhead on your database server. When considering optimization for EBS volumes, note the maximum throughput and IOPS that can be reached by instance type.
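
For reference, a gateway VPC endpoint for Amazon S3 can be created as follows; the VPC and route table IDs are placeholders.

# Keep S3 backup traffic on the AWS network instead of the internet
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.sa-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0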

A test was run using the AWS Command Line Interface (AWS CLI) to sync 100 GB of logs (100 files of 1 GB each) from Amazon S3 to a newly created instance; it took 16 minutes. The amount of logs will vary depending on the backup schedule implemented, and the Amazon S3 costs will vary depending on the lifecycle policies implemented. For further details, refer to Amazon S3 pricing.

Results

In our tests, the backup time for a 170 GB database took 38 minutes, with a restore time of 14 minutes.

The restore time can vary depending on the backup size, the amount of logs to roll forward, and disk type (mentioned previously in the Performance considerations section).

With the results of this test, the RTO was the restore time plus the time taken to launch the new server from the AMI backup.

Table 1. Recovery test
Disk Type DB Size Instance Type (Backup) Parallel Channels (Backup) Backup Time Instance Type (Restore) Parallel Channels (Restore) Restore Time
GP2 170 GB m5.4xlarge 12 38 Minutes m5.4xlarge 12 14 Minutes

Conclusion

To summarize, in this blog post we described how to configure IBM DB2 backups to Amazon S3, to build an on-demand strategy for backup and disaster recovery. By following these architecture design principles, you will continue to develop resilient business continuity. Let us know if you have any comments or questions. We value your feedback!

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.