Tag Archives: Amazon Elastic Block Storage (EBS)

Amazon EBS Fast Snapshot Restore for Shared EBS Snapshots

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-ebs-fast-snapshot-restore-for-shared-ebs-snapshots/

Snapshots are an integral part of Amazon Elastic Block Store (EBS). Snapshots allow you to create a block-level, point-in-time copy of your volumes for backup or disaster-recovery purposes. Snapshots are incremental: only the data modified since the last snapshot is copied again. You can copy snapshots between AWS Regions and share them between AWS Accounts. Once you have a snapshot, you can create a new Amazon Elastic Block Store (EBS) volume based on it. The new volume begins as an exact replica of the original volume that was used to create the snapshot.

When you restore volumes from snapshots, they are available for use almost instantaneously. In the background, EBS lazily loads the data from the snapshot as the operating system accesses the blocks; this reduces the I/O performance of the volume until it is fully initialized. Some I/O-demanding workloads, however, need the volume to operate at full capacity as soon as it is available. This is why we introduced Fast Snapshot Restore (FSR). Once enabled, FSR allows you to create volumes that deliver their maximum performance and do not need to be initialized.

Many AWS customers share their snapshots with other AWS Accounts, and there are many reasons to do this. You might want to centrally prepare and manage golden AMIs, with your applications, monitoring, or management tools pre-baked. In the context of Disaster Recovery (DR), your company policies might require you to store all backups in one dedicated account. Until today, only the AWS Account owning the snapshot could enable FSR.

Today, you can enable Fast Snapshot Restore (FSR) on snapshots shared with you.

To enable FSR on a shared snapshot, I first create a snapshot on the source AWS Account. Once the snapshot is created, I share it with another account of mine. To do so, I click Actions, then Modify Permissions. I enter the destination AWS Account Number, click Add Permission, and Save.

EBS Share Snapshots
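
If you prefer to script this step, you can share the snapshot from the source account with the AWS CLI as well. Here is a minimal sketch; the snapshot ID matches the example later in this post, and the destination account number is a placeholder.

# Share the snapshot with the destination AWS Account
aws ec2 modify-snapshot-attribute           \
         --snapshot-id snap-0b00000000d9    \
         --attribute createVolumePermission \
         --operation-type add               \
         --user-ids 111122223333            \
         --region us-west-1

For encrypted snapshots, remember that the destination account also needs access to the KMS key used to encrypt the snapshot.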

I connect to the destination account and navigate to the EC2 console. If the snapshot is not visible, I check that the Private Snapshots option is selected.

EBS Private Snapshots

I select the snapshot I want to be available for FSR and select Actions, then Manage Fast Snapshot Restore.

EBS Enable fast snapshot restore

I select the Availability Zones where I want to be able to fast restore my snapshot and click Save.

Enable EBS Fast Snapshot Restore

After the settings are saved, I receive a confirmation:

FSR Restore Confirmation

The snapshot stays in the enabling state for a couple of minutes and then becomes enabled. Once that is done, you can create Amazon Elastic Block Store (EBS) volumes from it. The volumes are fully initialized.

You can also do all this from the API or the AWS Command Line Interface (CLI).

aws ec2 enable-fast-snapshot-restores            \
         --source-snapshot-ids snap-0b00000000d9 \
         --availability-zones us-west-1a         \
         --region us-west-1

{
    "Successful": [
        {
            "SnapshotId": "snap-0b00000000d9",
            "AvailabilityZone": "us-west-1a",
            "State": "enabling",
            "StateTransitionReason": "Client.UserInitiated",
            "OwnerId": "00123456789",
            "EnablingTime": "2020-06-26T16:40:19.720000+00:00"
        }
    ],
    "Unsuccessful": []
}
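
I can also check the Fast Snapshot Restore state of this snapshot from the CLI. A sketch using the same snapshot ID:

aws ec2 describe-fast-snapshot-restores                      \
         --filters Name=snapshot-id,Values=snap-0b00000000d9 \
         --region us-west-1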

At any moment, I can check which volumes I restored from an FSR-enabled snapshot.

aws ec2 describe-volumes --filters Name=fast-restored,Values=true

{
    "Volumes": [
        {
            "Attachments": [],
            "AvailabilityZone": "us-west-1a",
            "CreateTime": "2020-01-26T00:34:11.093Z",
            "Encrypted": true,
            "KmsKeyId": "arn:aws:kms:us-west-2:123456789012:key/8c5b2c63-0000-0000-0000-5513e232e843",
            "Size": 20,
            "SnapshotId": "snap-0b00000000d9",
            "State": "available",
            "VolumeId": "vol-0d000000000000b0",
            "Iops": 100,
            "VolumeType": "gp2",
            "FastRestored": true
        }
    ]
}

The AWS Account where you enable Fast Snapshot Restore is charged an hourly price. The owner of the snapshot is not charged for enabling FSR in another AWS Account. When the owner of your shared snapshot deletes the snapshot or stops sharing the snapshot with you, the FSR for your shared snapshot is automatically disabled and FSR billing for the snapshot is terminated.

You can enable Fast Snapshot Restore in all commercial AWS Regions today.

As usual, let us know your feedback by posting messages on the AWS Forum, or leave a comment on this post.

— seb

Must-know best practices for Amazon EBS encryption

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/must-know-best-practices-for-amazon-ebs-encryption/

This blog post covers common encryption workflows on Amazon EBS. Examples of these workflows include setting up permissions policies, creating encrypted EBS volumes, running Amazon EC2 instances, taking snapshots, and sharing your encrypted data using a customer-managed CMK.

Introduction

The Amazon Elastic Block Store (Amazon EBS) service provides high-performance block-level storage volumes for Amazon EC2 instances. Customers have been using Amazon EBS for over a decade to support a broad range of applications, including relational and non-relational databases, containerized applications, big data analytics engines, and many more. For Amazon EBS, security is always our top priority. One of the most powerful mechanisms we provide to secure your data against unauthorized access is encryption.

Amazon EBS offers a straightforward encryption solution for data at rest, data in transit, and all volume backups. Amazon EBS encryption is supported by all volume types, and includes built-in key management infrastructure, so you don't have to build, maintain, and secure your own keys. We use AWS Key Management Service (AWS KMS) envelope encryption with customer master keys (CMK) for your encrypted volumes and snapshots. We also offer an easy way to ensure all your newly created Amazon EBS resources are always encrypted: simply select encryption by default. This means you no longer need to write IAM policies to require the use of encrypted volumes. All your new Amazon EBS volumes are automatically encrypted at creation.

You can choose from two types of CMKs: AWS managed and customer managed. The AWS managed CMK is the default on Amazon EBS (unless you explicitly override it), and does not require you to create a key or manage any policies related to the key. Any user with EC2 permissions in your account can encrypt and decrypt EBS resources encrypted with that key. If your compliance and security goals require more granular control over who can access your encrypted data, a customer-managed CMK is the way to go.

In the following section, I dive into some best practices with your customer-managed CMK to accomplish your encryption workflows.

Defining permissions policies

To get started with encryption using your own customer-managed CMK, you first need to create the CMK and set up the required policies. For simplicity, I use a fictitious account ID 111111111111 and an AWS KMS customer master key (CMK) named with the alias cmk1 in Region us-east-1.
As you go through this post, be sure to change the account ID and the AWS KMS CMK to match your own.

  1. Log on to the AWS Management Console as an admin user. Navigate to the AWS KMS service, and create a new KMS key in the desired Region.

kms console screenshot

  2. Go to the AWS Identity and Access Management (IAM) console and navigate to the Policies page. In the Create policy wizard, click the JSON tab, and add the following policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKeyWithoutPlaintext",
                "kms:ReEncrypt*",
                "kms:CreateGrant"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:<111111111111>:key/<key-id of cmk1>"
            ]
        }
    ]
}
  3. Go to IAM Users, click Add permissions, and then Attach existing policies directly. Select the preceding policy you created, along with the AmazonEC2FullAccess policy.

You now have all the necessary policies to start encrypting data with your own CMK on Amazon EBS.
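
If you prefer to script step 1, a minimal AWS CLI sketch for creating the key and its alias looks like this; the alias and Region match the fictitious setup above, and the key ID placeholder must be replaced with the value returned by the first call.

# Create a symmetric customer managed CMK in us-east-1
aws kms create-key --description "CMK for EBS encryption" --region us-east-1

# Give the new key the friendly alias cmk1
aws kms create-alias --alias-name alias/cmk1 \
    --target-key-id <key-id of cmk1> --region us-east-1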

Enabling encryption by default

Encryption by default allows you to ensure that all new EBS volumes created in your account are always encrypted, even if you don't specify the encrypted=true request parameter. You have the option to choose the default key to be AWS managed or a key that you create. If you use IAM policies that require the use of encrypted volumes, you can use this feature to avoid launch failures that would occur if unencrypted volumes were inadvertently referenced when an instance is launched. Before turning on encryption by default, make sure to go through some of the limitations in the Considerations section at the end of this blog.

Use the following steps to opt in to encryption by default:

  1. Log on to the EC2 console in the AWS Management Console.
  2. Click Settings, then Amazon EBS encryption on the right side of the Dashboard console (note: settings are specific to individual AWS Regions in your account).
  3. Check the box Always encrypt new EBS volumes.
  4. By default, the AWS managed key is used for Amazon EBS encryption. Click Change the default key and select your desired key. In this blog, the desired key is cmk1.
  5. You're done! Any new volume created from now on will be encrypted with the KMS key selected in the previous step.
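
The same opt-in can be scripted with the AWS CLI. A sketch; remember that the setting is per Region:

# Turn on encryption by default for new EBS volumes in us-east-1
aws ec2 enable-ebs-encryption-by-default --region us-east-1

# Use cmk1 instead of the AWS managed key as the default
aws ec2 modify-ebs-default-kms-key-id --kms-key-id alias/cmk1 --region us-east-1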

Creating encrypted Amazon EBS volumes

To create an encrypted volume, simply go to Volumes under Amazon EBS in your EC2 console, and click Create Volume.

Then, select your preferred volume attributes and mark the encryption flag. Choose your designated master key (CMK) and voilà, your volume is encrypted!

If you turned on encryption by default in the previous section, the encryption option is already selected and grayed out. Similarly, in the AWS CLI, your volume is always encrypted regardless of whether you set encrypted=true, and you can override the default encryption key by specifying a different one, as the following image shows:

encryption and master key
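
The equivalent CLI call is a one-liner. A sketch, with a placeholder size and Availability Zone:

aws ec2 create-volume               \
    --availability-zone us-east-1a  \
    --size 100                      \
    --volume-type gp2               \
    --encrypted                     \
    --kms-key-id alias/cmk1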

Launching instances with encrypted volumes

When launching an EC2 instance, you can easily specify encryption with your CMK even if the Amazon Machine Image (AMI) you selected is not encrypted.

Follow the steps in the Launch Wizard under EC2 console, and select your CMK in the Add Storage section. If you previously set encryption by default, you see your selected default key, which can be changed to any other key of your choice as the following image shows:

adding encrypted storage to instance

Alternatively, using the RunInstances API/CLI, you can provide the kmsKeyID for encrypting the volumes that are created from the AMI by specifying encryption in the block device mapping (BDM) object. If you don't specify the kmsKeyID in the BDM but set the encryption flag to "true", then your default encryption key is used to encrypt the volume. If you turned on encryption by default, any RunInstances call results in encrypted volumes, even if you haven't set the encryption flag to "true."

For more detailed information on launching encrypted EBS-backed EC2 instances, see this blog.
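
For reference, here is a hedged CLI sketch of such a launch; the AMI ID and device name are placeholders, and the KmsKeyId can be a key ID, ARN, or alias:

# Launch an instance with its root volume encrypted under cmk1
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":20,"VolumeType":"gp2","Encrypted":true,"KmsKeyId":"alias/cmk1"}}]'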

Auto Scaling Groups and Spot Instances

When you specify a customer-managed CMK, you must give the appropriate service-linked role access to the CMK so that EC2 Auto Scaling / Spot Instances can launch instances on your behalf (AWSServiceRoleForEC2Spot / AWSServiceRoleForAutoScaling). To do this, you must modify the CMK’s key policy. For more information, click here.
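
Besides editing the key policy as described above, another way to give the service-linked role access from the CLI is to create a KMS grant. A sketch for EC2 Auto Scaling; the account ID and key ID are placeholders:

aws kms create-grant \
    --key-id arn:aws:kms:us-east-1:111111111111:key/<key-id of cmk1> \
    --grantee-principal arn:aws:iam::111111111111:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling \
    --operations Decrypt GenerateDataKeyWithoutPlaintext ReEncryptFrom ReEncryptTo CreateGrant DescribeKey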

Creating and sharing encrypted snapshots

Now that you've launched an instance and have some encrypted EBS volumes, you may want to create snapshots to back up the data on your volumes. Whenever you create a snapshot from an encrypted volume, the snapshot is always encrypted with the same key you provided for the volume. Other than the create-snapshot permission, users do not need any additional key policy settings to create encrypted snapshots.

Sharing encrypted snapshots

If you want another account in your organization to create a volume from that snapshot (for use cases such as test/dev accounts, disaster recovery (DR), and so on), you can take that encrypted snapshot and share it with different accounts. To do that, you need to create policy settings for the source (111111111111) and target (222222222222) accounts.

In the source account, complete the following steps:

  1. Select Snapshots in the EC2 console.
  2. Click Actions, then Modify Permissions.
  3. Add the AWS Account Number of your target account.
  4. Go to the AWS KMS console and select the KMS key associated with your snapshot.
  5. In the Other AWS accounts section, click Add other AWS Account and add the target account.

Target account:
Users in the target account have several options with the shared snapshot. They can launch an instance directly, or copy the snapshot to the target account. They can use the same CMK as in the original account (cmk1), or re-encrypt it with a different CMK.

I recommend that you re-encrypt the snapshot using a CMK owned by the target account. This protects you if the original CMK is compromised, or if the owner revokes permissions, which could cause you to lose access to any encrypted volumes that you created using the snapshot.
When re-encrypting with a different CMK (cmk2 in this example), you only need the ReEncryptFrom permission on cmk1 (the source key). Also, make sure you have the required permissions on your target account for cmk2.

The following JSON policy document shows an example of these permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:ReEncryptFrom"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:<111111111111>:key/<key-id of cmk1>"
            ]
        }
    ]
}

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKeyWithoutPlaintext",
                "kms:ReEncrypt*",
                "kms:CreateGrant"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:<222222222222>:key/<key-id of cmk2>"
            ]
        }
    ]
}

You can now select the snapshot in the EC2 console in the target account. Locate the snapshot by ID or description.

If you want to copy the snapshot, you must also allow the kms:DescribeKey permission. Keep in mind that changing the encryption status of a snapshot during a copy operation results in a full (not incremental) copy, which might incur greater data transfer and storage charges.
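
As a sketch, the copy-and-re-encrypt operation from the target account looks like this; the snapshot ID is a placeholder, and the copy stays in the same Region here:

aws ec2 copy-snapshot \
    --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --encrypted \
    --kms-key-id arn:aws:kms:us-east-1:<222222222222>:key/<key-id of cmk2> \
    --description "Shared snapshot re-encrypted with cmk2" \
    --region us-east-1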


The same sharing capabilities apply to sharing AMIs. Check out this blog for more information.

Considerations

  • A few old instance types don't support Amazon EBS encryption. You won't be able to launch new instances in the C1, M1, M2, or T1 families.
  • You won't be able to share encrypted AMIs publicly, and any AMIs you share across accounts need access to your chosen KMS key.
  • You won't be able to share snapshots or AMIs that are encrypted with the AWS managed CMK.
  • Amazon EBS snapshots are encrypted with the same key that is used by the volume itself.
  • The default encryption settings are per Region, as are the KMS keys.
  • Amazon EBS does not support asymmetric CMKs. For more information, see Using Symmetric and Asymmetric Keys.

Conclusion

In this blog post, I discussed several best practices for using Amazon EBS encryption with your customer-managed CMK, which gives you more granular control to meet your compliance goals. I started with the policies needed, then covered how to create encrypted volumes, launch encrypted instances, create encrypted backups, and share encrypted data. Now that you are an encryption expert, go ahead and turn on encryption by default so that you'll have the peace of mind that your new volumes are always encrypted on Amazon EBS. To learn more, visit the Amazon EBS landing page.
If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon EC2 forum or contact AWS Support.


Create Snapshots From Any Block Storage Using EBS Direct APIs

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-create-snapshots-from-any-block-storage/

I am excited to announce that you can now create Amazon Elastic Block Store (EBS) snapshots from any block storage data, such as on-premises volumes, volumes from another cloud provider, existing block data stored on Amazon Simple Storage Service (S3), or even your own laptop 🙂

AWS customers using the cloud for disaster recovery of on-premises infrastructure all have the same question: how can I transfer my on-premises volume data to the cloud efficiently and at low cost? You usually create temporary Amazon Elastic Compute Cloud (EC2) instances, attach Amazon Elastic Block Store (EBS) volumes, transfer the data at the block level from on-premises to these new Amazon Elastic Block Store (EBS) volumes, take a snapshot of every EBS volume created, and tear down the temporary infrastructure. Some of you choose to use CloudEndure to simplify this process. Or maybe you just gave up and did not copy your on-premises volumes to the cloud because of the complexity.

To simplify this, today we are announcing three new APIs that are part of the EBS direct APIs, a set of APIs we announced at re:Invent 2019. We initially launched read and diff APIs; today, we extend them with write capabilities. These three new APIs allow you to create Amazon Elastic Block Store (EBS) snapshots from your on-premises volumes, or from any block storage data that you want to be able to store and recover in AWS.

With the addition of write capabilities to the EBS direct APIs, you can now create new snapshots from your on-premises volumes, create incremental snapshots, and delete them. Once a snapshot is created, it has all the benefits of snapshots created from Amazon Elastic Block Store (EBS) volumes. You can copy them, share them between AWS Accounts, keep them available for Fast Snapshot Restore, or create Amazon Elastic Block Store (EBS) volumes from them.

Being able to create Amazon Elastic Block Store (EBS) snapshots from any volume, without the need to spin up Amazon Elastic Compute Cloud (EC2) instances and Amazon Elastic Block Store (EBS) volumes, allows you to simplify and lower the cost of creating and managing your disaster recovery copy in the cloud.

Let’s have a closer look at the API
You first call StartSnapshot to create a new snapshot. When the snapshot is incremental, you pass the ID of the parent snapshot. You can also pass additional tags to apply to the snapshot, or encrypt these snapshots and manage the key, just like usual. If you choose to encrypt snapshots, be sure to check our technical documentation to understand the nuances and options.

Then, for each block of data, you call PutSnapshotBlock. This API has six mandatory parameters: snapshot-id, block-index, block-data, block-length, checksum, and checksum-algorithm. The API supports block lengths of 512 KB. You can send your blocks in any order, and in parallel; block-index keeps the order correct.

After you send all the blocks, you call CompleteSnapshot with the changed-blocks-count parameter set to the number of blocks you sent.

Let’s put all these together
Here is the pseudo code you must write to create a snapshot.

// Build the EBS direct APIs client (endpointName, awsRegion, and
// credentialsProvider are defined elsewhere)
AmazonEBS amazonEBS = AmazonEBSClientBuilder.standard()
   .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpointName, awsRegion))
   .withCredentials(credentialsProvider)
   .build();

// Start the snapshot and keep its ID for the block writes that follow
response = amazonEBS.startSnapshot(startSnapshotRequest)
snapshotId = response.getSnapshotId();

// Write each changed block; blocks may be sent in any order, and in parallel
for each (block in changeset) {
    putResponse = amazonEBS.putSnapshotBlock(putSnapshotBlockRequest);
}

// Seal the snapshot once all blocks have been sent
amazonEBS.completeSnapshot(completeSnapshotRequest);

As usual, when using this code, you must have appropriate IAM policies allowing you to call the new APIs. For example:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ebs:StartSnapshot",
            "ebs:PutSnapshotBlock",
            "ebs:CompleteSnapshot"
        ],
        "Resource": "arn:aws:ec2:<Region>::snapshot/*"
    }]
}

Also include the relevant AWS KMS permissions when creating encrypted snapshots.

In addition to the storage cost for snapshots, there is a charge per API call when you call PutSnapshotBlock.
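
If you want to experiment without writing SDK code, the same flow can be driven from the AWS CLI. A rough sketch; the volume size, snapshot ID, block file, and index are placeholder values, and the checksum must be the Base64-encoded SHA256 digest of the block:

# Start a snapshot for a 1 GiB volume
aws ebs start-snapshot --volume-size 1 --description "on-premises volume copy"

# Upload one 512 KB block (repeat for every block you want to write)
CHECKSUM=$(openssl dgst -sha256 -binary block-0.bin | base64)
aws ebs put-snapshot-block --snapshot-id snap-0123456789abcdef0 \
    --block-index 0 --block-data block-0.bin --data-length 524288 \
    --checksum "$CHECKSUM" --checksum-algorithm SHA256

# Seal the snapshot once all blocks have been sent
aws ebs complete-snapshot --snapshot-id snap-0123456789abcdef0 --changed-blocks-count 1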

These new snapshot APIs are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), China (Beijing), China (Ningxia), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (São Paulo).

You can start to use them today.

— seb

re:Invent 2019: Introducing the Amazon Builders’ Library (Part I)

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/reinvent-2019-introducing-the-amazon-builders-library-part-i/

Today, I’m going to tell you about a new site we launched at re:Invent, the Amazon Builders’ Library, a collection of living articles covering topics across architecture, software delivery, and operations. You get to peek under the hood of how Amazon architects, releases, and operates the software underpinning Amazon.com and AWS.

Want to know how Amazon.com does what it does? This is for you. In this two-part series (the next one coming December 23), I’ll highlight some of the best architecture articles written by Amazon’s senior technical leaders and engineers.

Avoiding insurmountable queue backlogs


In queueing theory, the behavior of queues when they are short is relatively uninteresting. After all, when a queue is short, everyone is happy. It’s only when the queue is backlogged, when the line to an event goes out the door and around the corner, that people start thinking about throughput and prioritization.

In this article, I discuss strategies we use at Amazon to deal with queue backlog scenarios – design approaches we take to drain queues quickly and to prioritize workloads. Most importantly, I describe how to prevent queue backlogs from building up in the first place. In the first half, I describe scenarios that lead to backlogs, and in the second half, I describe many approaches used at Amazon to avoid backlogs or deal with them gracefully.

Read the full article by David Yanacek – Principal Engineer

Timeouts, retries, and backoff with jitter


Whenever one service or system calls another, failures can happen. These failures can come from a variety of factors. They include servers, networks, load balancers, software, operating systems, or even mistakes from system operators. We design our systems to reduce the probability of failure, but it is impossible to build systems that never fail. So at Amazon, we design our systems to tolerate and reduce the probability of failure, and avoid magnifying a small percentage of failures into a complete outage. To build resilient systems, we employ three essential tools: timeouts, retries, and backoff.

Read the full article by Marc Brooker, Senior Principal Engineer

Challenges with distributed systems


The moment we added our second server, distributed systems became the way of life at Amazon. When I started at Amazon in 1999, we had so few servers that we could give some of them recognizable names like “fishy” or “online-01”. However, even in 1999, distributed computing was not easy. Then as now, challenges with distributed systems involved latency, scaling, understanding networking APIs, marshalling and unmarshalling data, and the complexity of algorithms such as Paxos. As the systems quickly grew larger and more distributed, what had been theoretical edge cases turned into regular occurrences.

Developing distributed utility computing services, such as reliable long-distance telephone networks, or Amazon Web Services (AWS) services, is hard. Distributed computing is also weirder and less intuitive than other forms of computing because of two interrelated problems. Independent failures and nondeterminism cause the most impactful issues in distributed systems. In addition to the typical computing failures most engineers are used to, failures in distributed systems can occur in many other ways. What’s worse, it’s impossible always to know whether something failed.

Read the full article by Jacob Gabrielson, Senior Principal Engineer

Static stability using Availability Zones


At Amazon, the services we build must meet extremely high availability targets. This means that we need to think carefully about the dependencies that our systems take. We design our systems to stay resilient even when those dependencies are impaired. In this article, we’ll define a pattern that we use called static stability to achieve this level of resilience. We’ll show you how we apply this concept to Availability Zones, a key infrastructure building block in AWS and therefore a bedrock dependency on which all of our services are built.

Read the full article by Becky Weiss, Senior Principal Engineer, and Mike Furr, Principal Engineer

Check back in two weeks to read about some other architecture-based expert articles that let you in on how Amazon does what it does.

New – Programmatic Access to EBS Snapshot Content

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-programmatic-access-to-ebs-snapshot-content/

EBS Snapshots are really cool! You can create them interactively from the AWS Management Console:

You can create them from the Command Line (create-snapshot) or by making a call to the CreateSnapshot function, and you can use the Data Lifecycle Manager (DLM) to set up automated snapshot management.

All About Snapshots
The snapshots are stored in Amazon Simple Storage Service (S3), and can be used to quickly create fresh EBS volumes as needed. The first snapshot of a volume contains a copy of every 512K block on the volume. Subsequent snapshots contain only the blocks that have changed since the previous snapshot. The incremental nature of the snapshots makes them very cost-effective, since (statistically speaking) many of the blocks on an EBS volume do not change all that often.

Let’s look at a quick example. Suppose that I create and format an EBS volume with 8 blocks (this is smaller than the allowable minimum size, but bear with me), copy some files to it, and then create my first snapshot (Snap1). The snapshot contains all of the blocks, and looks like this:

Then I add a few more files, delete one, and create my second snapshot (Snap2). The snapshot contains only the blocks that were modified after I created the first one, and looks like this:

I make a few more changes, and create a third snapshot (Snap3):

Keep in mind that the relationship between directories, files, and the underlying blocks is controlled by the file system, and is generally quite complex in real-world situations.

Ok, so now I have three snapshots, and want to use them to create a new volume. Each time I create a snapshot of an EBS volume, an internal reference to the previous snapshot is created. This allows CreateVolume to find the most recent copy of each block, like this:

EBS manages all of the details for me behind the scenes. For example, if I delete Snap2, the copy of Block 0 in that snapshot is also deleted since the copy in Snap3 is newer, but the copy of Block 4 in Snap2 becomes part of Snap3:

By the way, the chain of backward references (Snap3 to Snap1, or Snap3 to Snap2 to Snap1) is referred to as the lineage of the set of snapshots.

Now that I have explained all this, I should also tell you that you generally don’t need to know this, and can focus on creating, using, and deleting snapshots!

However…

Access to Snapshot Content
Today we are introducing a new set of functions that provide you with access to the snapshot content, as described above. These functions are designed for developers of backup/recovery, disaster recovery, and data management products & services, and will allow them to make their offerings faster and more cost-effective.

The new functions use a block index (0, 1, 2, and so forth) to identify a particular 512K block within a snapshot. The index is returned in the form of an encrypted token, which is meaningful only to the GetSnapshotBlock function. I have represented these tokens as T0, T1, and so forth below. The functions currently work on blocks of 512K bytes, with plans to support more block sizes in the future.

Here are the functions:

ListSnapshotBlocks – Identifies all of the blocks in a given snapshot as encrypted tokens. For Snap1, it would return [T0, T1, T2, T3, T4, T5, T6, T7] and for Snap2 it would return [T0, T4].

GetSnapshotBlock – Returns the content of a block. If the block is part of an encrypted snapshot, it will be returned in decrypted form.

ListChangedBlocks – Returns the list of blocks that have changed between two snapshots in a lineage, again as encrypted tokens. For Snap2 it would return [T0, T4] and for Snap3 it would return [T0, T5].
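
Here is a hedged sketch of the corresponding AWS CLI commands; the snapshot IDs, block token, and output file are placeholders:

# List the blocks in a snapshot (returned as encrypted tokens)
aws ebs list-snapshot-blocks --snapshot-id snap-1111111111111111a

# List the blocks that changed between two snapshots in the same lineage
aws ebs list-changed-blocks --first-snapshot-id snap-1111111111111111a \
    --second-snapshot-id snap-2222222222222222b

# Download the content of a single block to a local file
aws ebs get-snapshot-block --snapshot-id snap-2222222222222222b \
    --block-index 0 --block-token AAABBBCCC block-0.bin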

Like I said, these functions were built to address one specialized yet very important use case. Having said that, I am now highly confident that new and unexpected ones will pop up within 48 hours (feel free to share them with me)!

Available Now
The new functions are available now and you can start using them today in the US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Singapore), and Asia Pacific (Tokyo) Regions; they will become available in the remaining regions in the next few weeks. There is a charge for calls to the List and Get functions, and the usual KMS charges will apply when you call GetSnapshotBlock to access a block that is part of an encrypted snapshot.

Jeff;

AWS Now Available from a Local Zone in Los Angeles

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-now-available-from-a-local-zone-in-los-angeles/

AWS customers are always asking for more features, more bandwidth, more compute power, and more memory, while also asking for lower latency and lower prices. We do our best to meet these competing demands: we launch new EC2 instance types, EBS volume types, and S3 storage classes at a rapid pace, and we also reduce prices regularly.

AWS in Los Angeles
Today we are launching a Local Zone in Los Angeles, California. The Local Zone is a new type of AWS infrastructure deployment that brings select AWS services very close to a particular geographic area. This Local Zone is designed to provide very low latency (single-digit milliseconds) to applications that are accessed from Los Angeles and other locations in Southern California. It will be of particular interest to demanding applications that are especially sensitive to latency. This includes:

Media & Entertainment – Gaming, 3D modeling & rendering, video processing (including real-time color correction), video streaming, and media production pipelines.

Electronic Design Automation – Interactive design & layout, simulation, and verification.

Ad-Tech – Rapid decision making & ad serving.

Machine Learning – Fast, continuous model training; high-performance low-latency inferencing.

All About Local Zones
The new Local Zone in Los Angeles is a logical part of the US West (Oregon) Region (which I will refer to as the parent region), and has some unique and interesting characteristics:

Naming – The Local Zone can be accessed programmatically as us-west-2-lax-1a. All API, CLI, and Console access takes place through the us-west-2 API endpoint and the US West (Oregon) Console.

Opt-In – You will need to opt in to the Local Zone in order to use it. After opting in, you can create a new VPC subnet in the Local Zone, taking advantage of all relevant VPC features including Security Groups, Network ACLs, and Route Tables. You can target the Local Zone when you launch EC2 instances and other resources, or you can create a default subnet in the VPC and have it happen automatically.

Networking – The Local Zone in Los Angeles is connected to US West (Oregon) over Amazon’s private backbone network. Connections to the public internet take place across an Internet Gateway, giving you local ingress and egress to reduce latency. Elastic IP Addresses can be shared by a group of Local Zones in a particular geographic location, but they do not move between a Local Zone and the parent region. The Local Zone also supports AWS Direct Connect, giving you the opportunity to route your traffic over a private network connection.

Services – We are launching with support for seven EC2 instance types (T3, C5, M5, R5, R5d, I3en, and G4), two EBS volume types (io1 and gp2), Amazon FSx for Windows File Server, Amazon FSx for Lustre, Application Load Balancer, and Amazon Virtual Private Cloud. Single-Zone RDS is on the near-term roadmap, and other services will come later based on customer demand. Applications running in a Local Zone can also make use of services in the parent region.

Parent Region – As I mentioned earlier, the new Local Zone is a logical extension of the US West (Oregon) region, and is managed by the “control plane” in the region. API calls, CLI commands, and the AWS Management Console should use “us-west-2” or US West (Oregon).

AWS – Other parts of AWS will continue to work as expected after you start to use this Local Zone. Your IAM resources, CloudFormation templates, and Organizations are still relevant and applicable, as are your tools and (perhaps most important) your investment in AWS training.

Pricing & Billing – Instances and other AWS resources in Local Zones will have different prices than in the parent region. Billing reports will include a prefix that is specific to a group of Local Zones that share a physical location. EC2 instances are available in On Demand & Spot form, and you can also purchase Savings Plans.

Using a Local Zone
The first Local Zone is available today, and you can request access here:

In early 2020, you will be able to opt in using the console, the CLI, or an API call.

After opting in, I can list my AZs and see that the Local Zone is included:

Then I create a new VPC subnet for the Local Zone. This gives me transparent, seamless connectivity between the parent zone in Oregon and the Local Zone in Los Angeles, all within the VPC:
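
Once programmatic opt-in becomes available, the flow could be scripted roughly like this; the VPC ID and CIDR block are placeholders:

# Opt in to the Los Angeles Local Zone group
aws ec2 modify-availability-zone-group --group-name us-west-2-lax-1 \
    --opt-in-status opted-in --region us-west-2

# Confirm that the Local Zone now appears in the list of zones
aws ec2 describe-availability-zones --all-availability-zones --region us-west-2

# Create a subnet for the Local Zone inside an existing VPC
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
    --cidr-block 10.0.16.0/20 --availability-zone us-west-2-lax-1a \
    --region us-west-2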

I can create EBS volumes:

They are, as usual, ready within seconds:

I can also see and use the Local Zone from within the AWS Management Console:

I can also use the AWS APIs, CloudFormation templates, and so forth.

Thinking Ahead
Local Zones give you even more architectural flexibility. You can think big, and you can think different! You now have the components, tools, and services at your fingertips to build applications that make use of any conceivable combination of legacy on-premises resources, modern on-premises cloud resources via AWS Outposts, resources in a Local Zone, and resources in one or more AWS regions.

In the fullness of time (as Andy Jassy often says), there could very well be more than one Local Zone in any given geographic area. In 2020, we will open a second one in Los Angeles (us-west-2-lax-1b), and are giving consideration to other locations. We would love to get your advice on locations, so feel free to leave me a comment or two!

Now Available
The Local Zone in Los Angeles is available now and you can start using it today. Learn more about Local Zones.

Jeff;


New – Amazon EBS Fast Snapshot Restore (FSR)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-amazon-ebs-fast-snapshot-restore-fsr/

Amazon Elastic Block Store (EBS) has been around for more than a decade and is a fundamental AWS building block. You can use it to create persistent storage volumes that can store up to 16 TiB and supply up to 64,000 IOPS (Input/Output Operations per Second). You can choose between four types of volumes, making the choice that best addresses your data transfer throughput, IOPS, and pricing requirements. If your requirements change, you can modify the type of a volume, expand it, or change the performance while the volume remains online and active. EBS snapshots allow you to capture the state of a volume for backup, disaster recovery, and other purposes. Once created, a snapshot can be used to create a fresh EBS volume. Snapshots are stored in Amazon Simple Storage Service (S3) for high durability.

Our ever-creative customers are using EBS snapshots in many interesting ways. In addition to the backup and disaster recovery use cases that I just mentioned, they are using snapshots to quickly create analytical or test environments using data drawn from production, and to support Virtual Desktop Infrastructure (VDI) environments. As you probably know, the AMIs (Amazon Machine Images) that you use to launch EC2 instances are also stored as one or more snapshots.

Fast Snapshot Restore
Today we are launching Fast Snapshot Restore (FSR) for EBS. You can enable it for new and existing snapshots on a per-AZ (Availability Zone) basis, and then create new EBS volumes that deliver their maximum performance and do not need to be initialized.

This performance enhancement will allow you to build AWS-based systems that are even faster and more responsive than before. Faster boot times will speed up your VDI environments and allow your Auto Scaling Groups to come online and start processing traffic more quickly, even if you use large and/or custom AMIs. I am sure that you will soon dream up new applications that can take advantage of this new level of speed and predictability.

Fast Snapshot Restore can be enabled on a snapshot even while the snapshot is being created. If you create nightly backup snapshots, enabling them for FSR will allow you to do fast restores the following day regardless of the size of the volume or the snapshot.

Enabling & Using Fast Snapshot Restore
I can get started in minutes! I open the EC2 Console and find the first snapshot that I want to set up for fast restore:

I select the snapshot and choose Manage Fast Snapshot Restore from the Actions menu:

Then I select the Availability Zones where I plan to create EBS volumes, and click Save:

After the settings are saved, I receive a confirmation:

The console shows me that my snapshot is being enabled for Fast Snapshot Restore:

The status progresses from enabling to optimizing, and then to enabled. Behind the scenes and with no extra effort on my part, the optimization process provisions extra resources to deliver the fast restores, proceeding at a rate of one TiB per hour. By contrast, non-optimized volumes retrieve data directly from the S3-stored snapshot on an incremental, on-demand basis.

Once the optimization is complete, I can create volumes from the snapshot in the usual way, confident that they will be ready in seconds and pre-initialized for full performance! Each FSR-enabled snapshot supports creation of up to 10 initialized volumes per hour per Availability Zone; additional volume creations will be non-initialized. As my needs change, I can enable Fast Snapshot Restore in additional Availability Zones and I can disable it in Zones where I had previously enabled it.

When Fast Snapshot Restore is enabled for a snapshot in a particular Availability Zone, a bucket-based credit system governs the acceleration process. Creating a volume consumes a credit; the credits refill over time, and the maximum number of credits is a function of the FSR-enabled snapshot size. Here are some guidelines:

  • A 100 GiB FSR-enabled snapshot will have a maximum credit balance of 10, and a fill rate of 10 credits per hour.
  • A 4 TiB FSR-enabled snapshot will have a maximum credit balance of 1, and a fill rate of 1 credit every 4 hours.

In other words, you can do 1 TiB of restores per hour for a given FSR-enabled snapshot within an AZ.

Things to Know
Here are some things to know about Fast Snapshot Restore:

Regions & AZs – Fast Snapshot Restore is available in all Availability Zones of the US East (N. Virginia), US West (Oregon), US West (N. California), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.

Pricing – You pay $0.75 for each hour that Fast Snapshot Restore is enabled for a snapshot in a particular Availability Zone, pro-rated and with a minimum of one hour.

Monitoring – You can use the following per-minute CloudWatch metrics to track the state of the credit bucket for each FSR-enabled snapshot (see the CLI sketch after this list):

  • FastSnapshotRestoreCreditsBalance – The number of volume creation credits that are available.
  • FastSnapshotRestoreCreditsBucketSize – The maximum number of volume creation credits that can be accumulated.
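
For example, here is a hedged CLI sketch for polling the credit balance; the snapshot ID, Availability Zone, and time range are placeholders, and the metrics are assumed to live in the AWS/EBS namespace with SnapshotId and AvailabilityZone dimensions:

aws cloudwatch get-metric-statistics --namespace AWS/EBS \
    --metric-name FastSnapshotRestoreCreditsBalance \
    --dimensions Name=SnapshotId,Value=snap-0123456789abcdef0 \
                 Name=AvailabilityZone,Value=us-east-1a \
    --start-time 2019-11-20T00:00:00Z --end-time 2019-11-20T01:00:00Z \
    --period 60 --statistics Average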

CLI & Programmatic Access – You can use the enable-fast-snapshot-restores, describe-fast-snapshot-restores, and disable-fast-snapshot-restores commands to create and manage your accelerated snapshots from the command line. You can also use the EnableFastSnapshotRestores, DescribeFastSnapshotRestores, and DisableFastSnapshotRestores API functions from your application code.

CloudWatch Events – You can use the EBS Fast Snapshot Restore State-change Notification event type to invoke Lambda functions or other targets when the state of a snapshot/AZ pair changes. Events are emitted on successful and unsuccessful transitions to the enabling, optimizing, enabled, disabling, and disabled states.

Data Lifecycle Manager – You can enable FSR on snapshots created by your DLM lifecycle policies, specify AZs, and specify the number of snapshots to be FSR-enabled. You can use an existing CloudFormation template to integrate FSR into your DLM policies (read about the AWS::DLM::LifecyclePolicy to learn more).

In the Works
We are launching with support for snapshots that you own. Over time, we intend to expand coverage and allow you to enable Fast Snapshot Restore for snapshots that you have been granted access to.

Available Now
Fast Snapshot Restore is available now and you can start using it today!

Jeff;


Secure your data on Amazon EMR using native EBS and per bucket S3 encryption options

Post Syndicated from Duncan Chan original https://aws.amazon.com/blogs/big-data/secure-your-data-on-amazon-emr-using-native-ebs-and-per-bucket-s3-encryption-options/

Data encryption is an effective solution to bolster data security. You can make sure that only authorized users or applications read your sensitive data by encrypting your data and managing access to the encryption key. One of the main reasons that customers from regulated industries such as healthcare and finance choose Amazon EMR is because it provides them with a compliant environment to store and access data securely.

This post provides a detailed walkthrough of two new encryption options to help you secure your EMR cluster that handles sensitive data. The first option is native EBS encryption to encrypt volumes attached to EMR clusters. The second option is an Amazon S3 encryption option that allows you to use different encryption modes and customer master keys (CMKs) for individual S3 buckets with Amazon EMR.

Local disk encryption on Amazon EMR

Previously you could only choose Linux Unified Key Setup (LUKS) for at-rest encryption. You now have a choice of using LUKS or native EBS encryption to encrypt EBS volumes attached to an EMR cluster. EBS encryption provides the following benefits:

  • End-to-end encryption – When you enable EBS encryption for Amazon EMR, all data on EBS volumes, including intermediate disk spills from applications and disk I/O between the nodes and EBS volumes, is encrypted. The snapshots that you take of an encrypted EBS volume are also encrypted, and you can move them between AWS Regions as needed.
  • Amazon EMR root volumes encryption – There is no need to create a custom Amazon Linux Image for encrypting root volumes.
  • Easy auditing for encryption – When you use LUKS encryption, though your EBS volumes are encrypted along with any instance store volumes, you still see an EBS status of Not Encrypted when you use an Amazon EC2 API or the EC2 console to check on the encryption status. This is because the API doesn't look into the EMR cluster to check the disk status; your auditors would need to SSH into the cluster to check for disk encryption compliance. However, with EBS encryption, you can check the encryption status from the EC2 console or through an EC2 API call.
  • Transparent Encryption – EBS encryption is transparent to any applications running on Amazon EMR and doesn’t require you to modify any code.

Amazon EBS encryption integrates with AWS KMS to provide the encryption keys that protect your data. To use this feature, you have to use a CMK in your account and Region. A CMK gives you control to create and manage the key, including enabling and disabling the key, controlling access, rotating the key, and deleting it. For more information, see Customer Master Keys.

Enabling EBS encryption on Amazon EMR

To enable EBS encryption on Amazon EMR, complete the following steps:

  1. Create your CMK in AWS KMS.
    You can do this either through the AWS KMS console, AWS CLI, or the AWS KMS CreateKey API. Create keys in the same Region as your EMR cluster. For more information, see Creating Keys.
  2. Give the Amazon EMR service role and EC2 instance profile permission to use your CMK on your behalf.
    If you are using the EMR_DefaultRole, add the policy with the following steps:

    • Open the AWS KMS console.
    • Choose your AWS Region.
    • Choose the key ID or alias of the CMK you created.
    • On the key details page, under Key Users, choose Add.
    • Choose the Amazon EMR service role. The name of the default role is EMR_DefaultRole.
    • Choose Attach.
    • Choose the Amazon EC2 instance profile. The name of the default role for the instance profile is EMR_EC2_DefaultRole.
    • Choose Attach.
      If you are using a customized policy, add the following code to the service role to allow Amazon EMR to create and use the CMK, with the resource being the CMK ARN:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "EmrDiskEncryptionPolicy",
            "Effect": "Allow",
            "Action": [
              "kms:Encrypt",
              "kms:Decrypt",
              "kms:ReEncrypt*",
              "kms:CreateGrant",
              "kms:GenerateDataKeyWithoutPlaintext",
              "kms:DescribeKey"
            ],
            "Resource": [
              "arn:aws:kms:region:account-id:key/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
            ]
          }
        ]
      }

  3. Create and configure the Amazon EMR security configuration template. Do this either through the console, or using the CLI or an SDK, with the following steps (a CLI sketch follows this list):
    • Open the Amazon EMR console.
    • Choose Security Configuration.
    • Under Local disk encryption, choose Enable at-rest encryption for local disks.
    • For Key provider type, choose AWS KMS.
    • For AWS KMS customer master key, choose the key ARN of your CMK. This post uses the key ebsEncryption_emr_default_role.
    • Select Encrypt EBS volumes with EBS encryption.
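
For automation, you can create the same security configuration with the CLI. The following is a sketch only; verify the field names against the EMR security configuration reference, and replace the placeholder key ARN. In-transit encryption is left disabled here, and S3 at-rest encryption uses SSE-S3 for simplicity (the S3 options are covered later in this post).

security-config.json:

{
  "EncryptionConfiguration": {
    "EnableInTransitEncryption": false,
    "EnableAtRestEncryption": true,
    "AtRestEncryptionConfiguration": {
      "S3EncryptionConfiguration": {
        "EncryptionMode": "SSE-S3"
      },
      "LocalDiskEncryptionConfiguration": {
        "EncryptionKeyProviderType": "AwsKms",
        "AwsKmsKey": "arn:aws:kms:region:account-id:key/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
        "EnableEbsEncryption": true
      }
    }
  }
}

aws emr create-security-configuration --name ebs-encrypted \
    --security-configuration file://security-config.json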

Default encryption with EC2 vs. Amazon EMR EBS encryption

EC2 has a similar feature called default encryption. With this feature, all EBS volumes in your account are encrypted without exception, using a single CMK that you specify per Region. With EBS encryption from Amazon EMR, you can use a different KMS key per EMR cluster to secure your EBS volumes. You can use both EBS encryption provided by Amazon EMR and default encryption provided by EC2.

For this post, the EBS encryption provided by Amazon EMR takes precedence, and you encrypt the EBS volumes attached to the cluster with the CMK that you selected in the security configuration.

S3 encryption

Amazon S3 encryption also works with Amazon EMR File System (EMRFS) objects read from and written to S3. You can use either server-side encryption (SSE) or client-side encryption (CSE) mode to encrypt objects in S3 buckets. The following table summarizes the different encryption modes available for S3 encryption in Amazon EMR.

Mode         Encryption location              Key storage   Key management
SSE-S3       Server side on S3                S3            S3
SSE-KMS      Server side on S3                KMS           The AWS managed CMK for Amazon S3 with the alias aws/s3, or a custom CMK that you create
CSE-KMS      Client side on the EMR cluster   KMS           A custom CMK that you create
CSE-Custom   Client side on the EMR cluster   You           Your own key provider

The encryption choice you make depends on your specific workload requirements. Though SSE-S3 is the most straightforward option that allows you to fully delegate the encryption of S3 objects to Amazon S3 by selecting a check box, SSE-KMS or CSE-KMS are better options that give you granular control over CMKs in KMS by using policies. With AWS KMS, you can see when, where, and by whom your customer managed keys (CMK) were used, because AWS CloudTrail logs API calls for key access and key management. These logs provide you with full audit capabilities for your keys. For more information, see Encryption at Rest for EMRFS Data in Amazon S3.

Encrypting your S3 buckets with different encryption modes and keys

With S3 encryption on Amazon EMR, all the encryption modes use a single CMK by default to encrypt objects in S3. If you have highly sensitive content in specific S3 buckets, you may want to manage the encryption of these buckets separately by using different CMKs or encryption modes for individual buckets. You can accomplish this using the per bucket encryption overrides option in Amazon EMR. To do so, complete the following steps:

  1. Open the Amazon EMR console.
  2. Choose Security Configuration.
  3. Under S3 encryption, select Enable at-rest encryption for EMRFS data in Amazon S3.
  4. For Default encryption mode, choose your encryption mode. This post uses SSE-KMS.
  5. For AWS KMS customer master key, choose your key. The key you provide here encrypts all S3 buckets used with Amazon EMR. This post uses ebsEncryption_emr_default_role.
  6. Choose Per bucket encryption overrides. You can set different encryption modes for different buckets.
  7. For S3 bucket, add your S3 bucket that you want to encrypt differently.
  8. For Encryption mode, choose an encryption mode.
  9. For Encryption materials, enter your CMK.

If you have already enabled default encryption for S3 buckets directly in Amazon S3, you can also choose to bypass the S3 encryption options in the security configuration setting in Amazon EMR. This allows Amazon EMR to delegate encrypting objects in the buckets to Amazon S3, which uses the encryption key specified in the bucket's default encryption settings to encrypt objects before persisting them in S3.

Summary

This post walked through the native EBS and S3 encryption options available with Amazon EMR to encrypt and secure your data. Please share your feedback on how these optimizations benefit your real-world workloads.



About the Author

Duncan Chan is a software development engineer for Amazon EMR. He enjoys learning and working on big data technologies. When he is not working, he will be playing with his dogs.


New – Opt-in to Default Encryption for New EBS Volumes

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-opt-in-to-default-encryption-for-new-ebs-volumes/

My colleagues on the AWS team are always looking for ways to make it easier and simpler for you to protect your data from unauthorized access. This work is visible in many different ways, and includes the AWS Cloud Security page, the AWS Security Blog, a rich collection of AWS security white papers, an equally rich set of AWS security, identity, and compliance services, and a wide range of security features within individual services. As you might recall from reading this blog, many AWS services support encryption at rest & in transit, logging, IAM roles & policies, and so forth.

Default Encryption
Today I would like to tell you about a new feature that makes the use of encrypted Amazon EBS (Elastic Block Store) volumes even easier. This launch builds on some earlier EBS security launches.

You can now specify that you want all newly created EBS volumes to be created in encrypted form, with the option to use the default key provided by AWS, or a key that you create. Because keys and EC2 settings are specific to individual AWS regions, you must opt-in on a region-by-region basis.

This new feature will let you reach your protection and compliance goals by making it simpler and easier for you to ensure that newly created volumes are created in encrypted form. It will not affect existing unencrypted volumes.

If you use IAM policies that require the use of encrypted volumes, you can use this feature to avoid launch failures that would occur if unencrypted volumes were inadvertently referenced when an instance is launched. Your security team can enable encryption by default without having to coordinate with your development team, and with no other code or operational changes.

Encrypted EBS volumes deliver the specified instance throughput, volume performance, and latency, at no extra charge. I open the EC2 Console, make sure that I am in the region of interest, and click Settings to get started:

Then I select Always encrypt new EBS volumes:

I can click Change the default key and choose one of my keys as the default:

Either way, I click Update to proceed. One thing to note here: This setting applies to a single AWS region; I will need to repeat the steps above for each region of interest, checking the option and choosing the key.

Going forward, all EBS volumes that I create in this region will be encrypted, with no additional effort on my part. When I create a volume, I can use the key that I selected in the EC2 Settings, or I can select a different one:

Any snapshots that I create are encrypted with the key that was used to encrypt the volume:

If I use the volume to create a snapshot, I can use the original key or I can choose another one:

Things to Know
Here are some important things that you should know about this important new AWS feature:

Older Instance Types – After you enable this feature, you will not be able to launch any more C1, M1, M2, or T1 instances or attach newly encrypted EBS volumes to existing instances of these types. We recommend that you migrate to newer instance types.

AMI Sharing – As I noted above, we recently gave you the ability to share encrypted AMIs with other AWS accounts. However, you cannot share them publicly, and you should use a separate account to create community AMIs, Marketplace AMIs, and public snapshots. To learn more, read How to Share Encrypted AMIs Across Accounts to Launch Encrypted EC2 Instances.

Other AWS Services – AWS services such as Amazon Relational Database Service (RDS) and Amazon WorkSpaces that use EBS for storage perform their own encryption and key management and are not affected by this launch. Services such as Amazon EMR that create volumes within your account will automatically respect the encryption setting, and will use encrypted volumes if the always-encrypt feature is enabled.

API / CLI Access – You can also access this feature from the EC2 CLI and the API (see the sketch after this list).

No Charge – There is no charge to enable or use encryption. If you are using encrypted AMIs and create a separate one for each AWS account, you can now share the AMI with other accounts, leading to a reduction in storage utilization and charges.

Per-Region – As noted above, you can opt-in to default encryption on a region-by-region basis.
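
Because the setting and the default key are per-Region, a quick way to audit them across the Regions you use is a small script like the following sketch (the Region list is illustrative):

for region in us-east-1 us-west-2 eu-west-1; do
  echo "== $region =="
  aws ec2 get-ebs-encryption-by-default --region "$region"
  aws ec2 get-ebs-default-kms-key-id    --region "$region"
done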

Available Now
This feature is available now and you can start using it today in all public AWS regions and in GovCloud. It is not available in the AWS regions in China.

Jeff;


Learn about AWS Services & Solutions – April AWS Online Tech Talks

Post Syndicated from Robin Park original https://aws.amazon.com/blogs/aws/learn-about-aws-services-solutions-april-aws-online-tech-talks/

AWS Tech Talks

Join us this April to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Blockchain

May 2, 2019 | 11:00 AM – 12:00 PM PT – How to Build an Application with Amazon Managed Blockchain – Learn how to build an application on Amazon Managed Blockchain with the help of demo applications and sample code.

Compute

April 29, 2019 | 1:00 PM – 2:00 PM PT – How to Optimize Amazon Elastic Block Store (EBS) for Higher Performance – Learn how to optimize performance and spend on your Amazon Elastic Block Store (EBS) volumes.

May 1, 2019 | 11:00 AM – 12:00 PM PT – Introducing New Amazon EC2 Instances Featuring AMD EPYC and AWS Graviton Processors – See how new Amazon EC2 instance offerings that feature AMD EPYC processors and AWS Graviton processors enable you to optimize performance and cost for your workloads.

Containers

April 23, 2019 | 11:00 AM – 12:00 PM PT – Deep Dive on AWS App Mesh – Learn how AWS App Mesh makes it easy to monitor and control communications for services running on AWS.

March 22, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive Into Container Networking – Dive deep into microservices networking and how you can build, secure, and manage the communications into, out of, and between the various microservices that make up your application.

Databases

April 23, 2019 | 1:00 PM – 2:00 PM PT – Selecting the Right Database for Your Application – Learn how to develop a purpose-built strategy for databases, where you choose the right tool for the job.

April 25, 2019 | 9:00 AM – 10:00 AM PT – Mastering Amazon DynamoDB ACID Transactions: When and How to Use the New Transactional APIs – Learn how Amazon DynamoDB’s new transactional APIs simplify the developer experience of making coordinated, all-or-nothing changes to multiple items both within and across tables.

DevOps

April 24, 2019 | 9:00 AM – 10:00 AM PT – Running .NET applications with AWS Elastic Beanstalk Windows Server Platform V2 – Learn about the easiest way to get your .NET applications up and running on AWS Elastic Beanstalk.

Enterprise & Hybrid

April 30, 2019 | 11:00 AM – 12:00 PM PT – Business Case Teardown: Identify Your Real-World On-Premises and Projected AWS Costs – Discover tools and strategies to help you as you build your value-based business case.

IoT

April 30, 2019 | 9:00 AM – 10:00 AM PT – Building the Edge of Connected Home – Learn how AWS IoT edge services are enabling smarter products for the connected home.

Machine Learning

April 24, 2019 | 11:00 AM – 12:00 PM PT – Start Your Engines and Get Ready to Race in the AWS DeepRacer League – Learn more about reinforcement learning, how to build a model, and compete in the AWS DeepRacer League.

April 30, 2019 | 1:00 PM – 2:00 PM PT – Deploying Machine Learning Models in Production – Learn best practices for training and deploying machine learning models.

May 2, 2019 | 9:00 AM – 10:00 AM PT – Accelerate Machine Learning Projects with Hundreds of Algorithms and Models in AWS Marketplace – Learn how to use third party algorithms and model packages to accelerate machine learning projects and solve business problems.

Networking & Content Delivery

April 23, 2019 | 9:00 AM – 10:00 AM PT – Smart Tips on Application Load Balancers: Advanced Request Routing, Lambda as a Target, and User Authentication – Learn tips and tricks about important Application Load Balancers (ALBs) features that were recently launched.

Productivity & Business Solutions

April 29, 2019 | 11:00 AM – 12:00 PM PT – Learn How to Set up Business Calling and Voice Connector in Minutes with Amazon Chime – Learn how Amazon Chime Business Calling and Voice Connector can help you with your business communication needs.

May 1, 2019 | 1:00 PM – 2:00 PM PT – Bring Voice to Your Workplace – Learn how you can bring voice to your workplace with Alexa for Business.

Serverless

April 25, 2019 | 11:00 AM – 12:00 PM PT – Modernizing .NET Applications Using the Latest Features on AWS Development Tools for .NET – Get a deep dive and demonstration of the latest updates to the AWS SDK and tools for .NET to make development even easier, more powerful, and more productive.

May 1, 2019 | 9:00 AM – 10:00 AM PT – Customer Showcase: Improving Data Processing Workloads with AWS Step Functions’ Service Integrations – Learn how innovative customers like SkyWatch are coordinating AWS services using AWS Step Functions to improve productivity.

Storage

April 24, 2019 | 1:00 PM – 2:00 PM PT – Amazon S3 Glacier Deep Archive: The Cheapest Storage in the Cloud – See how Amazon S3 Glacier Deep Archive offers the lowest cost storage in the cloud, at prices significantly lower than storing and maintaining data in on-premises magnetic tape libraries or archiving data offsite.

Recovering files from an Amazon EBS volume backup

Post Syndicated from Josh Rad original https://aws.amazon.com/blogs/compute/recovering-files-from-an-amazon-ebs-volume-backup/

Contributed by Jeff Bartley, Storage Solutions Architect, AWS

Amazon Elastic Block Store (Amazon EBS) enables you to back up volumes at any time using EBS snapshots. Volume backups can be triggered manually or they can be scheduled using Amazon Data Lifecycle Manager (Amazon DLM) or AWS Backup.

Each backup creates a unique EBS snapshot. The snapshot has all of the data necessary to restore the volume to the exact state that it was in when the backup was made. You can then attach that volume to an Amazon EC2 instance.

A common use case for restoring volumes is to re-create production workloads in test and development environments. For example, you might take a snapshot of your production database and copy that snapshot to your test/dev account. Then you can restore a volume from the snapshot and attach it to one of your test EC2 instances.

You can also use EBS snapshots to recover files and folders from volume backups where the volume was formatted with a Windows or Linux-compatible file system. This commonly occurs when users accidentally modify or delete one or more files and must go back to a previous version.

This post guides you through the process of restoring an EBS volume for the purpose of recovering files or folders from either a Windows or Linux volume. You learn how to restore a volume from an EBS snapshot, attach the volume to a running EC2 instance, and copy the files or folders to be recovered.

NOTE: This process can only be used to recover files or folders from EBS data volumes. To recover an EBS root volume, see the Amazon EBS-backed Instances section in the Amazon EC2 Root Device Volume topic.

Restore a volume from an EBS snapshot
The first step to recovering your files is to identify the EBS snapshot that contains the needed data and then create a volume from it.

On the Create Volume page, you are prompted to choose the volume type, volume size, and the Availability Zone in which the volume should be created. The default volume type is either gp2 or standard, depending on the AWS Region, and is applicable to most use cases.

The default volume size is the size of the volume from which the EBS snapshot was created. For recovering files and folders, the size should not be modified. Create a new volume that is an exact copy of the original volume.

For the Availability Zone, select the same zone as the EC2 instance to be used for recovery. EBS volumes can only be attached to EC2 instances in the same zone in which they were created. Tag the new volume for identification.

Select the new volume to monitor the status until the state is set to available.
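If you prefer the CLI, a sketch of the same step looks like this (the snapshot ID, Availability Zone, and tag value are placeholders):

aws ec2 create-volume \
    --snapshot-id snap-0123456789abcdef0 \
    --availability-zone us-east-1a \
    --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=file-recovery}]'

# check the state until it reports "available"
aws ec2 describe-volumes \
    --filters Name=tag:Name,Values=file-recovery \
    --query 'Volumes[].State'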

Attach the volume to an EC2 instance
To access the files or folders to be recovered, the volume must be attached to an EC2 instance. The instance should be running the same version of Windows or Linux that was running when the volume backup was made. The instance does not need to be stopped, as an EBS volume can be attached to a running EC2 instance.
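A hedged CLI sketch of the attach step (the volume ID, instance ID, and device name are placeholders; Nitro-based instances expose the device under an NVMe name such as /dev/nvme1n1):

aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf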

Recover your files on Windows
If your files were originally created on Windows, connect to the Windows EC2 instance using a desktop viewer that supports RDP. Then, make the EBS volume available for use.

Open Windows Explorer, navigate to the files or folders to be recovered and copy them to the desired destination.

When the recovery effort is complete, you can unmount the volume and detach it from the EC2 instance.

Recover your files on Linux
If your files were originally created on Linux, begin by logging in to the Linux EC2 instance using SSH. Then, make the EBS volume available for use.

The new volume can be identified by the name that corresponds to the device ID specified when the volume was attached to the instance.
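As a rough sketch (the device name, mount point, and file paths are assumptions and will differ on your instance), locating, mounting, and copying from the volume looks like this:

lsblk                                      # identify the new device, for example /dev/xvdf or /dev/nvme1n1
sudo mkdir -p /mnt/recovery
sudo mount -o ro /dev/xvdf /mnt/recovery   # mount read-only; you only need to copy files out
cp -r /mnt/recovery/path/to/files ~/recovered/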

When the recovery effort is complete, you can unmount the volume and detach it from the EC2 instance.

Clean up
With the files or folders recovered and the volume detached, you are free to delete the volume. You can always re-create it from the EBS snapshot as needed.
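For example, once the volume has been detached, the cleanup can also be scripted (the volume ID is a placeholder):

aws ec2 wait volume-available --volume-ids vol-0123456789abcdef0
aws ec2 delete-volume --volume-id vol-0123456789abcdef0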

Summary
Maintaining regular backups of your EBS volumes helps protect you against events like the accidental deletion or unintentional modification of files and folders. AWS provides you with a number of tools to schedule and manage your EBS volume backups. These tools include Amazon DLM, AWS Backup, and manual backups using the AWS CLI or SDKs.

In this post, you learned how to recover files or folders from a backup of a volume that was formatted with a Windows or Linux file system. Now you can quickly get access to your files and folders and expedite your recovery process.

For more information, see Restoring an Amazon EBS Volume from a Snapshot.

Optimizing a Lift-and-Shift for Security

Post Syndicated from Jonathan Shapiro-Ward original https://aws.amazon.com/blogs/architecture/optimizing-a-lift-and-shift-for-security/

This is the third and final blog within a three-part series that examines how to optimize lift-and-shift workloads. A lift-and-shift is a common approach for migrating to AWS, whereby you move a workload from on-prem with little or no modification. This third blog examines how lift-and-shift workloads can benefit from an improved security posture with no modification to the application codebase. (Read about optimizing a lift-and-shift for performance and for cost effectiveness.)

Moving to AWS can help strengthen your security posture by eliminating many of the risks present in on-premises deployments. It is still essential to consider how best to use AWS security controls and mechanisms to ensure the security of your workload. Security can often be a significant concern in lift-and-shift workloads, especially for legacy workloads where modern encryption and security features may not be present. By making use of AWS security features, you can significantly improve the security posture of a lift-and-shift workload, even if it lacks native support for modern security best practices.

Adding TLS with Application Load Balancers

Legacy applications are often the subject of a lift-and-shift. Such migrations can help reduce risk by moving away from out-of-date hardware, but application-level security risks are often harder to manage. Many legacy applications leverage HTTP or other plaintext protocols that are vulnerable to all manner of attacks. Often, modifying a legacy application’s codebase to implement TLS is untenable, necessitating other options.

One comparatively simple approach is to leverage an Application Load Balancer or a Classic Load Balancer to provide SSL offloading. In this scenario, the load balancer is exposed to users, while the application servers that only support plaintext protocols reside within a subnet that can only be accessed by the load balancer. The load balancer performs the decryption of all traffic destined for the application instances, forwarding the plaintext traffic to them. This allows you to use encryption on traffic between the client and the load balancer, leaving only internal communication between the load balancer and the application in plaintext. Often this approach is sufficient to meet security requirements; however, in more stringent scenarios it is never acceptable for traffic to be transmitted in plaintext, even within a secured subnet. In this scenario, a sidecar can be used to prevent plaintext traffic from ever traversing the network.

Improving Security and Configuration Management with Sidecars

One approach to providing encryption to legacy applications is to leverage what’s often termed the “sidecar pattern.” The sidecar pattern entails a second process acting as a proxy to the legacy application. The legacy application only exposes its services via the local loopback adapter and is thus accessible only to the sidecar. In turn, the sidecar acts as an encrypted proxy, exposing the legacy application’s API to external consumers via TLS. Because unencrypted traffic between the sidecar and the legacy application traverses the loopback adapter, it never traverses the network. This approach can help add encryption (or stronger encryption) to legacy applications when it’s not feasible to modify the original codebase. A common approach to implementing sidecars is through container groups, such as a pod in Amazon EKS or a task in Amazon ECS.

Implementing the Sidecar Pattern With Containers

Figure 1: Implementing the Sidecar Pattern With Containers
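As an illustrative sketch only (the image URIs, names, and ports are placeholders, and the nginx container would still need its TLS certificate and proxy configuration mounted in), an ECS task definition fragment for this pattern might pair the legacy container with a TLS proxy in the same task:

{
    "family": "legacy-app-with-tls-sidecar",
    "networkMode": "awsvpc",
    "containerDefinitions": [
        {
            "name": "legacy-app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/legacy-app:latest",
            "essential": true
        },
        {
            "name": "tls-proxy",
            "image": "nginx:stable",
            "essential": true,
            "portMappings": [
                { "containerPort": 443, "protocol": "tcp" }
            ]
        }
    ]
}

Because containers in an awsvpc-mode task share a network namespace, the proxy can reach the legacy application over 127.0.0.1, so unencrypted traffic never leaves the task.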

Another use of the sidecar pattern is to help legacy applications leverage modern cloud services. A common example of this is using a sidecar to manage files pertaining to the legacy application. This could entail a number of options including:

  • Having the sidecar dynamically modify the configuration for a legacy application based upon some external factor, such as the output of a Lambda function, an SNS event, or a DynamoDB write.
  • Having the sidecar write application state to a cache or database. Often applications will write state to the local disk. This can be problematic for autoscaling or disaster recovery, where having the state easily accessible to other instances is advantageous. To facilitate this, the sidecar can write state to Amazon S3, Amazon DynamoDB, Amazon ElastiCache, or Amazon RDS.

A sidecar requires custom development, but it doesn’t require any modification of the lift-and-shifted application. A sidecar treats the application as a black box and interacts with it via its API, configuration file, or other standard mechanism.

Automating Security

A lift-and-shift can achieve a significantly stronger security posture by incorporating elements of DevSecOps. DevSecOps is a philosophy that argues that everyone is responsible for security and advocates for automating all parts of the security process. AWS has a number of services which can help implement a DevSecOps strategy. These services include:

  • Amazon GuardDuty: a continuous monitoring service that analyzes AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs. GuardDuty can detect threats and trigger an automated response.
  • AWS Shield: a managed DDoS protection service
  • AWS WAF: a managed Web Application Firewall
  • AWS Config: a service for assessing, tracking, and auditing changes to AWS configuration

These services can help detect security problems and implement a response in real time, achieving a significantly stronger posture than traditional security strategies. You can build a DevSecOps strategy around a lift-and-shift workload using these services, without having to modify the lift-and-shift application.

Conclusion

There are many opportunities for taking advantage of AWS services and features to improve a lift-and-shift workload. Without any alteration to the application you can strengthen your security posture by utilizing AWS security services and by making small environmental and architectural changes that can help alleviate the challenges of legacy workloads.

About the author

Dr. Jonathan Shapiro-Ward is an AWS Solutions Architect based in Toronto. He helps customers across Canada to transform their businesses and build industry leading cloud solutions. He has a background in distributed systems and big data and holds a PhD from the University of St Andrews.

Optimizing a Lift-and-Shift for Cost Effectiveness and Ease of Management

Post Syndicated from Jonathan Shapiro-Ward original https://aws.amazon.com/blogs/architecture/optimizing-a-lift-and-shift-for-cost/

Lift-and-shift is the process of migrating a workload from on premise to AWS with little or no modification. A lift-and-shift is a common route for enterprises to move to the cloud, and can be a transitionary state to a more cloud native approach. This is the second blog post in a three-part series which investigates how to optimize a lift-and-shift workload. The first post is about performance.

A key concern that many customers have with a lift-and-shift is cost. If you move an application as-is from on-prem to AWS, is there any possibility for meaningful cost savings? By employing AWS services in lieu of self-managed EC2 instances, and by leveraging cloud capabilities such as auto scaling, there is potential for significant cost savings. In this blog post, we will discuss a number of AWS services and solutions that you can leverage with minimal or no change to your application codebase in order to significantly reduce management costs and overall Total Cost of Ownership (TCO).

Automate

Even if you can’t modify your application, you can change the way you deploy it. Adopting an infrastructure-as-code approach can vastly improve the ease of management of your application, thereby reducing cost. By templating your application through AWS CloudFormation, AWS OpsWorks, or open source tools, you can make deploying and managing your workloads a simple and repeatable process.

As part of the lift-and-shift process, rationalizing the workload into a set of templates means less time spent in the future deploying and modifying the workload. It enables the easy creation of dev/test environments, facilitates blue-green testing, opens up options for DR, and gives you the option to roll back in the event of error. Automation is the single step that is most conducive to improving ease of management.

Reserved Instances and Spot Instances

An initial consideration around cost should be the purchasing model for any EC2 instances. Reserved Instances (RIs) represent a 1-year or 3-year commitment to EC2 instances and can enable up to 75% cost reduction (over On-Demand) for steady-state EC2 workloads. They are ideal for 24/7 workloads that must be continually in operation. An application requires no modification to make use of RIs.

An alternative purchasing model is EC2 spot. Spot instances offer unused capacity available at a significant discount – up to 90%. Spot instances receive a two-minute warning when the capacity is required back by EC2 and can be suspended and resumed. Workloads which are architected for batch runs – such as analytics and big data workloads – often require little or no modification to make use of spot instances. Other burstable workloads such as web apps may require some modification around how they are deployed.

A final alternative is on-demand. For workloads that are not running in perpetuity, on-demand is ideal. Workloads can be deployed, used for as long as required, and then terminated. By leveraging some simple automation (such as AWS Lambda and CloudWatch alarms), you can schedule workloads to start and stop at the open and close of business (or at other meaningful intervals). This typically requires no modification to the application itself. For workloads that are not 24/7 steady state, this can provide greater cost effectiveness compared to RIs and more certainty and ease of use when compared to spot.
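As a rough sketch (the Schedule tag key and value are assumptions, and in practice you would run these commands from a scheduled Lambda function or cron job rather than by hand), stopping and starting tagged instances around business hours might look like this:

# stop tagged instances at the close of business
aws ec2 stop-instances --instance-ids $(aws ec2 describe-instances \
    --filters "Name=tag:Schedule,Values=business-hours" "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].InstanceId" --output text)

# start them again at the open of business
aws ec2 start-instances --instance-ids $(aws ec2 describe-instances \
    --filters "Name=tag:Schedule,Values=business-hours" "Name=instance-state-name,Values=stopped" \
    --query "Reservations[].Instances[].InstanceId" --output text)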

Amazon FSx for Windows File Server

Amazon FSx for Windows File Server provides a fully managed Windows filesystem that has full compatibility with SMB and DFS and full AD integration. Amazon FSx is an ideal choice for lift-and-shift architectures as it requires no modification to the application codebase in order to enable compatibility. Windows based applications can continue to leverage standard, Windows-native protocols to access storage with Amazon FSx. It enables users to avoid having to deploy and manage their own fileservers – eliminating the need for patching, automating, and managing EC2 instances. Moreover, it’s easy to scale and minimize costs, since Amazon FSx offers a pay-as-you-go pricing model.

Amazon EFS

Amazon Elastic File System (EFS) provides high performance, highly available multi-attach storage via NFS. EFS offers a drop-in replacement for existing NFS deployments. This is ideal for a range of Linux and Unix use cases as well as cross-platform solutions such as Enterprise Java applications. EFS eliminates the need to manage NFS infrastructure and simplifies storage concerns. Moreover, EFS provides high availability out of the box, which helps to reduce single points of failure and avoids the need to manually configure storage replication. Much like Amazon FSx, EFS enables customers to realize cost improvements by moving to a pay-as-you-go pricing model and requires no modification of the application.

Amazon MQ

Amazon MQ is a managed message broker service that provides compatibility with JMS, AMQP, MQTT, OpenWire, and STOMP. These are amongst the most extensively used middleware and messaging protocols and are a key foundation of enterprise applications. Rather than having to manually maintain a message broker, Amazon MQ provides a performant, highly available managed message broker service that is compatible with existing applications.

Applications that leverage a standard messaging protocol can adopt Amazon MQ with little or no modification. In most cases, all you need to do is update the broker endpoint in the application’s configuration. Subsequently, the Amazon MQ service handles the heavy lifting of operating a message broker, configuring HA, fault detection, failure recovery, software updates, and so forth. This offers a simple option for reducing management overhead and improving the reliability of a lift-and-shift architecture. What’s more is that applications can migrate to Amazon MQ without the need for any downtime, making this an easy and effective way to improve a lift-and-shift.

You can also use Amazon MQ to integrate legacy applications with modern serverless applications. Lambda functions can subscribe to MQ topics and trigger serverless workflows, enabling compatibility between legacy and new workloads.

Integrating Lift-and-Shift Workloads with Lambda via Amazon MQ

Figure 1: Integrating Lift-and-Shift Workloads with Lambda via Amazon MQ

Amazon Managed Streaming Kafka

Lift-and-shift workloads that include a streaming data component are often built around Apache Kafka. There is a certain amount of complexity involved in operating a Kafka cluster, which incurs management and operational expense. Amazon Kinesis is a managed alternative to Apache Kafka, but it is not a drop-in replacement. At re:Invent 2018, we announced the launch of Amazon Managed Streaming for Kafka (MSK) in public preview. MSK provides a managed Kafka deployment with pay-as-you-go pricing and acts as a drop-in replacement for existing Kafka workloads. MSK can help reduce management costs and improve cost efficiency and is ideal for lift-and-shift workloads.

Leveraging S3 for Static Web Hosting

A significant portion of any web application is static content. This includes videos, image, text, and other content that changes seldom, if ever. In many lift-and-shifted applications, web servers are migrated to EC2 instances and host all content – static and dynamic. Hosting static content from an EC2 instance incurs a number of costs including the instance, EBS volumes, and likely, a load balancer. By moving static content to S3, you can significantly reduce the amount of compute required to host your web applications. In many cases, this change is non-disruptive and can be done at the DNS or CDN layer, requiring no change to your application.
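As a minimal sketch (the bucket name and local path are placeholders, and you would still need a bucket policy or a CloudFront distribution in front of the bucket to serve the content publicly), offloading static assets might look like:

aws s3 mb s3://example-static-assets
aws s3 sync ./static s3://example-static-assets
aws s3 website s3://example-static-assets --index-document index.html --error-document error.html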

Reducing Web Hosting Costs with S3 Static Web Hosting

Figure 2: Reducing Web Hosting Costs with S3 Static Web Hosting

Conclusion

There are numerous opportunities for reducing the cost of a lift-and-shift. Without any modification to the application, lift-and-shift workloads can benefit from cloud-native features. By using AWS services and features, you can significantly reduce the undifferentiated heavy lifting inherent in on-prem workloads and reduce resources and management overheads.

About the author

Dr. Jonathan Shapiro-Ward is an AWS Solutions Architect based in Toronto. He helps customers across Canada to transform their businesses and build industry leading cloud solutions. He has a background in distributed systems and big data and holds a PhD from the University of St Andrews.

Optimizing a Lift-and-Shift for Performance

Post Syndicated from Jonathan Shapiro-Ward original https://aws.amazon.com/blogs/architecture/optimizing-a-lift-and-shift-for-performance/

Many organizations begin their cloud journey with a lift-and-shift of applications from on-premises to AWS. This approach involves migrating software deployments with little, or no, modification. A lift-and-shift avoids a potentially expensive application rewrite but can result in a less optimal workload than a cloud-native solution. For many organizations, a lift-and-shift is a transitional stage to an eventual cloud-native solution, but there are some applications that can’t feasibly be made cloud-native, such as legacy systems or proprietary third-party solutions. There are still clear benefits of moving these workloads to AWS, but how can they be best optimized?

In this blog series, we’ll look at different approaches for optimizing a black box lift-and-shift. We’ll consider how we can significantly improve a lift-and-shift application across three perspectives: performance, cost, and security. We’ll show that without modifying the application we can integrate services and features that will make a lift-and-shift workload cheaper, faster, more secure, and more reliable. In this first blog, we’ll investigate how a lift-and-shift workload can achieve improved performance by leveraging AWS features and services.

Performance gains are often a motivating factor behind a cloud migration. On-premises systems may suffer from performance bottlenecks owing to legacy infrastructure or capacity constraints. When performing a lift-and-shift, how can you improve performance? Cloud computing is famous for enabling horizontally scalable architectures, but many legacy applications don’t support this mode of operation. Traditional business applications are often architected around a fixed number of servers and are unable to take advantage of horizontal scalability. Even if a lift-and-shift can’t make use of auto scaling groups and horizontal scalability, you can achieve significant performance gains by moving to AWS.

Scaling Up

The easiest way to scale up compute is vertical scaling. AWS provides the widest selection of virtual machine types and the largest machine types. Instances range from the small, burstable T3 series all the way to the memory-optimized X1 series. By leveraging the appropriate instance type, lift-and-shifts can benefit from significant performance gains. Depending on your workload, you can also swap out the instances used to power your workload to better meet demand. For example, on days when you anticipate high load, you could move to more powerful instances. This could easily be automated via a Lambda function.
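As a hedged sketch (the instance ID and target instance type are placeholders; the instance must be EBS-backed, and a brief stop/start is required), resizing an instance from the CLI looks like this:

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --instance-type "{\"Value\": \"r5.4xlarge\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0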

The x1 family of instances offers considerable CPU, memory, storage, and network performance and can be used to accelerate applications that are designed to maximize single machine performance. The x1e.32xlarge instance, for example, offers 128 vCPUs, 4TB RAM, and 14,000 Mbps EBS bandwidth. This instance is ideal for high performance in-memory workloads such as real time financial risk processing or SAP Hana.

Through selecting the appropriate instance type and scaling that instance up and down to meet demand, you can achieve superior performance and cost effectiveness compared to running a single static instance. This affords lift-and-shift workloads far greater efficiency than their on-prem counterparts.

Placement Groups and C5n Instances

EC2 placement groups determine how instances are deployed to underlying hardware. You can either cluster instances into a low-latency group within a single Availability Zone or spread instances across distinct underlying hardware. Both types of placement groups are useful for optimizing lift-and-shifts.

The spread placement group is valuable in applications that rely on a small number of critical instances. If you can’t modify your application  to leverage auto scaling, liveness probes, or failover, then spread placement groups can help reduce the risk of simultaneous failure while improving the overall reliability of the application.

Cluster placement groups help improve network QoS between instances. When used in conjunction with enhanced networking, cluster placement groups help to ensure low latency, high throughput, and high network packets per second. This is beneficial for chatty applications and any application that leveraged physical co-location for performance on-prem.

There is no additional charge for using placement groups.

You can extend this approach further with C5n instances. These instances offer 100 Gbps networking and can be used in a cluster placement group for the most demanding network-intensive workloads. Using placement groups and C5n instances requires no modification to your application, only to how it is deployed – making them a strong option for providing network performance to lift-and-shift workloads.
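A minimal sketch (the AMI ID and group name are placeholders) of creating a cluster placement group and launching C5n instances into it:

aws ec2 create-placement-group --group-name low-latency-cluster --strategy cluster
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --count 2 \
    --instance-type c5n.18xlarge \
    --placement "GroupName=low-latency-cluster"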

Leverage Tiered Storage to Optimize for Price and Performance

AWS offers a range of storage options, each with its own performance characteristics and price point. Through leveraging a combination of storage types, lift-and-shifts can achieve the performance and availability requirements in a price effective manner. The range of storage options include:

Amazon EBS is the most common storage service involved with lift-and-shifts. EBS provides block storage that can be attached to EC2 instances and formatted with a typical file system such as NTFS or ext4. There are several different EBS types, ranging from inexpensive magnetic storage to highly performant provisioned IOPS SSDs. There are also storage-optimized instances that offer high performance EBS access and NVMe storage. By utilizing the appropriate type of EBS volume and instance, a balance of performance and price can be achieved. RAID offers a further option to optimize EBS. EBS volumes are automatically replicated within their Availability Zone at no additional cost; however, an EC2 instance can apply RAID levels on top of EBS. For instance, you can apply RAID 0 across a number of EBS volumes in order to improve storage performance.
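For example, a rough sketch of striping two attached EBS volumes with RAID 0 on Linux (the device names and mount point are assumptions):

sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdf /dev/xvdg
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /data
sudo mount /dev/md0 /data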

In addition to EBS, EC2 instances can utilize the EC2 instance store. The instance store provides ephemeral, directly attached storage to EC2 instances. The instance store is included with the EC2 instance and provides a facility to store non-persistent data. This makes it ideal for temporary files that an application produces, which require performant storage. Both EBS and the instance store are exposed to the EC2 instance as block-level devices, and the OS can use its native management tools to format and mount these volumes as it would a traditional disk – requiring no significant departure from the on-prem configuration. Several instance types, including the C5d and P3dn, are equipped with local NVMe storage, which can support extremely I/O-intensive workloads.

Not all workloads require high performance storage. In many cases finding a compromise between price and performance is top priority. Amazon S3 provides highly durable, object storage at a significantly lower price point than block storage. S3 is ideal for a large number of use cases including content distribution, data ingestion, analytics, and backup. S3, however, is accessible via a RESTful API and does not provide conventional file system semantics as per EBS. This may make S3 less viable for applications that you can’t easily modify, but there are still options for using S3 in such a scenario.

An option for leveraging S3 is AWS Storage Gateway. Storage Gateway is a virtual appliance that can be run on-prem or on EC2. The Storage Gateway appliance can operate in three configurations: file gateway, volume gateway, and tape gateway. The file gateway provides an NFS interface, the volume gateway provides an iSCSI interface, and the tape gateway provides an iSCSI virtual tape library interface. This allows files, volumes, and tapes to be exposed to an application host through conventional protocols, with the Storage Gateway appliance persisting data to S3. This allows an application to be agnostic to S3 while leveraging typical enterprise storage protocols.

Using S3 Storage via Storage Gateway

Figure 1: Using S3 Storage via Storage Gateway

Conclusion

A lift-and-shift can achieve significant performance gains on AWS by making use of a range of instance types, storage services, and other features. Even without any modification to the application, lift-and-shift workloads can benefit from cutting edge compute, network, and IO which can help realize significant, meaningful performance gains.

About the author

Dr. Jonathan Shapiro-Ward is an AWS Solutions Architect based in Toronto. He helps customers across Canada to transform their businesses and build industry leading cloud solutions. He has a background in distributed systems and big data and holds a PhD from the University of St Andrews.

Check it Out – New AWS Pricing Calculator for EC2 and EBS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/check-it-out-new-aws-pricing-calculator-for-ec2-and-ebs/

The blog post that we published over a decade ago to launch the Simple Monthly Calculator still shows up on our internal top-10 lists from time to time! Since that post was published, we have extended, redesigned, and even rebuilt the calculator a time or two.

New Calculator
Starting with a blank screen, an empty code repo, and plenty of customer feedback, we are building a brand-new AWS Pricing Calculator. The new calculator is designed to help you estimate and understand your eventual AWS costs. We did our best to avoid excessive jargon and to make the calculations obvious, transparent, and accessible. You can see the options that are available to you, explore the associated costs, and make high-quality data-driven decisions.

We’re starting out with support for EC2 instances, EBS volumes, and a very wide variety of purchasing models, with plans to add support for more services as quickly as possible.

A Quick Tour
The new calculator lives at https://calculator.aws. Each estimate consists of one or more groups and the first one is created automatically:

Each group has a name, and has pricing for services in a particular AWS region. I click Edit group to change the name and pick a region, and click Apply:

Back at the main page of the calculator, I click Add service and choose to configure some EC2 instances. The group can contain multiple types and configurations of instances; I click Configure to move ahead:

At this point I can make a Quick estimate (the default), or supply more details as part of an Advanced estimate. I’ll start with a Quick estimate:

Here are a couple of things to keep in mind when you make a quick estimate:

Instance Type – I have two options for choosing EC2 instance types; I can enter my resource requirements (vCPU count, memory size, and GPU count) and have the calculator choose the option with the lowest price, or I can pick an EC2 instance type by name.

Pricing Strategy – I can choose to use On-Demand Instances, Convertible Reserved Instances, or Standard Reserved Instances, and can choose payment terms and options for RIs.

EBS Volumes – I can choose the type and size of an EBS volume for the instance. Right now, the calculator allows you to associate one volume with each EC2 instance. If you need more than one, specify the total amount of storage you need across all volumes.

Details – I can expand the Show calculation section to see the math:

After I have made my choices, I click Add to my estimate to move ahead. My selections, along with their costs (annual, upfront, and monthly), are displayed:

I can go back and add another service, or create another group. I’ll add another EC2 instance, using an Advanced estimate this time around. Here’s where I start:

I have very fine-grained control over each aspect of my estimate. For example, I can characterize my workload in great detail. I click on Workload, and have the ability to select the graph that best represents my monthly workload:

I can even model workloads that have two or more independent daily (in this case) spike patterns. As I refine my model, the calculator figures out the most economical combination of On-Demand and Reserved Instances, and shows me the results:

The calculations are driven by the selection in the Pricing strategy. The default value, and the one that I used for the previous screen shot, is Cost optimized. I have other choices as well:

I can also model my data transfer in, out, and to other AWS regions:

Once I am happy with the results I click Add to my estimate, and take a look at my selections and their prices:

I can click Export to capture my estimate in spreadsheet form:

Here’s the data (I hid a few columns for clarity):

As you can see, the new calculator will quickly become a useful part of your planning and decision-making process.

One important thing to keep in mind: your estimates are stored in state that is local to the browser tab, and will be lost if you close the tab. The team is already hard at work on features that will allow you to save and even share your estimates, for launch in early 2019.

Stay Tuned
We will be adding more services and more features to the calculator in the months to come, and I’ll share some updates with you from time to time, either in this blog or via Twitter. If you have ideas, complaints, or other feedback, don’t hesitate to click on the Feedback link at the top of the page.

Jeff;

 

How to import split disks into AWS

Post Syndicated from Josh Rad original https://aws.amazon.com/blogs/compute/how-to-import-split-disks-into-aws/

Contributed by Raphael Sack

In an amusing coincidence, I was recently asked by two separate customers a nearly identical question: how to import split disks in the form of VMDK to AWS. This post covers one way to do this.

There are quite a few names, official and unofficial, for split disks. These include: non-monolithic, split disks, multi-part disks, delta parts, and more. For consistency, I refer to them as split disks in this post. The Image Formats page of VM Import/Export’s documentation specifies the officially supported formats when importing disks and virtual machines.

How to import split disks:

  1. Set up a conversion instance.
  2. Convert and build out the new disk image.
  3. Import the image.

After the image is imported, you can validate the import and start using the image.

Setting up a conversion instance

First things first—you need an instance to be used for the conversion job. You can use any machine that is able to run the QEMU image utility.

For convenience, spin up an Amazon EC2 instance running Amazon Linux. The only modification that you apply to the instance is installing the qemu-img disk image utility. To install the utility on Amazon Linux, use YUM:

sudo yum install -y qemu-img

The instance requires the ec2:ImportImage and ec2:DescribeImportImageTasks permissions to be able to run the commands detailed in the steps below.

Converting and building out the new disk image

Next, you need your source VMDK files to be available on the machine. In this case, store them in Amazon S3 and copy them over to the EC2 instance.
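For example, assuming the parts were uploaded to an S3 bucket (the bucket name and prefix are placeholders), you might copy them down like this:

aws s3 cp s3://my-source-bucket/AppServer/ . --recursive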

If you examine your source machine’s main .vmdk file, you find the “Extent description” section that specifies the disk parts. Make sure that you’ve got all of them.

As mentioned previously, you use qemu-img to convert the disks. Use the following command format:

qemu-img convert -f [SOURCE FORMAT] [SOURCE DISK FILENAME] -O [OUTPUT FORMAT] [OUTPUT FILENAME]

For example:

qemu-img convert -f vmdk MyDisk.vmdk -O raw MyDisk.raw

Because there are multiple parts, you can use a quick Bash one-liner loop:

for i in *-*.vmdk; do qemu-img convert -f vmdk $i -O raw $i.raw; done

A quick note about this command: it creates files such as AppServer-s001.vmdk.raw. You could use some command line tinkering to improve the naming convention, but because the conversion in this post is only temporary, you can leave the names as they are.

Now, concatenate all these newly converted RAW files into one large file, using the following command:

for i in *.vmdk.raw; do cat "$i" >> AppServer.raw; done

Again, you could optimize this step and merge it into the previous loop. For the sake of clarity, I separated them out.

Running qemu-img info on the result confirms that you now have a single RAW disk image:
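qemu-img info AppServer.raw

The exact output depends on your image, but it should report a single file in raw format with the combined size of the original parts.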

Importing the disk image

The final step is to import the disk, either as a snapshot or as an image. In this example, use the image option so that you can launch several instances from these disks.

To import an image or snapshot, the source has to be in an S3 bucket. Create a bucket named “import-blog-post”. Using the AWS CLI, upload the source to the bucket.
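A sketch of those two steps with the AWS CLI:

aws s3 mb s3://import-blog-post
aws s3 cp AppServer.raw s3://import-blog-post/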

The last step is to create an import job. Jobs can represent a physical import (where you ship a physical device containing the data), or an upload of such data.

The corresponding CLI command for importing images is import-image and supports specifying parameters using command line arguments, or using a JSON file. For this post, use a combination of both for readability:

aws ec2 import-image --description <DESCRIPTION FOR THE IMPORT TASK> --license-type <LICENSE TYPE> --disk-containers <INFORMATION ABOUT THE DISKS>

Your command:

aws ec2 import-image --description "Blog import job" --license-type BYOL --disk-containers file://import-task.json

The contents of the JSON file:

[
        {
                "Description": "AppServer Ubuntu Image",
                "Format": "raw",
                "UserBucket": {
                        "S3Bucket": "import-blog-post",
                        "S3Key": "AppServer.raw"
                }
        }
]

You are specifying the source image format, location (in S3), and a description to be used for the AMI created from it.

Important: VM Import requires a service role to perform certain operations (such as downloading from S3, creating an image). If you do not have a vmimport role, follow the steps from the VM Import Service Role section in the Import Your VM as an Image topic. Additionally, before running the commands, I recommend that you verify that the role has the required access to the S3 bucket used for storing the image.

After you issue the command, a JSON response is displayed indicating that a task has been created:

{
    "Status": "active", 
    "SnapshotDetails": [
        {
            "UserBucket": {
                "S3Bucket": "import-blog-post", 
                "S3Key": "AppServer.raw"
            }, 
            "DiskImageSize": 0.0, 
            "Format": "RAW"
        }
    ], 
    "Progress": "2", 
    "StatusMessage": "pending", 
    "ImportTaskId": "import-ami-ffffffff"
}

The ImportTaskId value can be used to track or cancel the task:

aws ec2 describe-import-image-tasks --import-task-ids import-ami-ffffffff

Looking at the task’s status, you can clearly see that it is active, with a percentage progress to help you understand where you are in the process:

{
    "ImportImageTasks": [
        {
            "Status": "active", 
            "Progress": "7", 
            "SnapshotDetails": [
                {
                    "UserBucket": {
                        "S3Bucket": "import-blog-post", 
                        "S3Key": "AppServer.raw"
                    }, 
                    "DiskImageSize": 26843545600.0, 
                    "Format": "RAW"
                }
            ], 
            "StatusMessage": "validated", 
            "ImportTaskId": "import-ami-ffffffff"
        }
    ]
}

The task progresses through the various import stages:

  • Converting
  • Updating
  • Updated
  • Preparing to boot
  • Booting
  • Booted
  • Preparing AMI
  • Completed

After the status transitions to Completed, you can find a value for ImageId in the response JSON. Start using your newly imported image, via the console, API, or CLI.

Conclusion

In this post, I presented a process for converting VMDK files, single or multiple, and importing them into AWS. This same process could be used for other source formats, such as QCOW, QCOW2, VDI, and more.

For those of you interested in other options, I would strongly recommend looking at AWS Server Migration Service, which makes it easier and faster to migrate workloads. For information about both partners and software solutions to assist you in such migrations, see Migration Partner Solutions.

Amazon ECS and Docker volume drivers, part 1: Amazon EBS

Post Syndicated from tiffany jernigan (@tiffanyfayj) original https://aws.amazon.com/blogs/compute/amazon-ecs-and-docker-volume-drivers-amazon-ebs/

→ Part 2: Amazon EFS

 

Post by: Jeremy Cowan, Ronnie Eichler, and Tiffany Jernigan

Introduction

Containers are emerging as the default compute primitive for building cloud-native applications.  They facilitate the adoption of continuous delivery, and help increase infrastructure use.

However, deploying stateful applications as containers has been challenging because containers have short life-spans, get re-deployed frequently, are scaled up and down dynamically, and often share the same host with other containers. All of these factors make it challenging for you to appropriately align the lifecycles of storage volumes and containers.

Before Docker volume driver support was added to Amazon ECS, you had to manage storage volumes manually using custom tooling such as bash scripts, Lambda functions, or manual configuration of Docker volumes. Now, you can take full advantage of the Docker plugin ecosystem by using popular plugins such as REX-Ray or Portworx.

ECS support for Docker volumes means that you can now deploy stateful and storage-intensive use cases. These include:

  • Machine learning and data processing workloads
  • Applications such as GitLab or Jenkins that share a filesystem across multiple tasks
  • Databases such as Cassandra or RocksDB
  • Streaming tools such as Kafka
  • Additional scratch space added to containers that process large workloads and are storage-intensive

To support this broad array of use cases, ECS offers you the flexibility to configure the lifecycle of the Docker volume. For example, you can specify whether it is a scratch space volume specific to a single instantiation of a task, or a persistent volume that persists beyond the lifecycle of a unique instantiation of the task. You can also choose to use a Docker volume that you’ve created before launching your task.

In addition to managing the Docker volume configuration and lifecycle, the ECS scheduler is now plugin-aware. ECS takes the availability of the requested driver into account in its placement decisions, so that tasks that require a certain driver are only placed on container instances that have the driver installed.

Docker and Docker volumes

Docker volumes are a way to persist data outside of the lifecycle of a container. Containers themselves are made up of multiple immutable layers of storage with an ephemeral layer, which is read/write. If your application writes files to the ephemeral layer, these changes are lost when the container stops.

Volumes are managed outside of the container lifecycle—stopping or removing the container does not remove the volume. Docker also supports volume drivers that allow you to use volumes as an abstraction between containers and persistent storage such as Amazon EBS or Amazon EFS. By default, Docker provides a driver called ‘local’ that provides local storage volumes to containers. With Docker plugins, you can now add volume drivers to provision and manage EBS and EFS storage, such as REX-Ray, Portworx, and NetShare.

To deploy a stateful application such as Cassandra, MongoDB, Zookeeper, or Kafka, you likely need high-performance persistent storage like EBS. Docker volumes allow you to present an EBS volume to your application as a Docker volume.

There are other applications such as Jenkins and GitLab, where multiple copies of the application need access to the same data. With volume drivers and EFS, you can present EFS as a shared volume to multiple instances of your container so that you can scale your application yet still retain and persist shared data on EFS.

Another overlooked use case involves applications that need scratch space. When you define a task in ECS and your application writes to the filesystem inside of the container (not on a Docker volume), the task consumes space on the underlying EC2 instance that is shared by all other running tasks. This can lead to issues of ‘noisy neighbors’ if a task were to write a bunch of data to /tmp on its local filesystem.

Now with Docker volume support in ECS, you can map an EBS volume to /tmp (or whatever your scratch space directory you prefer). You can ensure good performance while limiting the size of the underlying EBS volume using arguments in your ECS task to the volume driver.

What is REX-Ray?

REX-Ray is just one example of a Docker volume driver plugin that provides an abstraction between Docker volumes and the underlying storage. Built on top of the libStorage framework, REX-Ray’s simplified architecture consists of a single binary. It runs as a stateless service on every host, using a configuration file to orchestrate multiple storage platforms. REX-Ray supports multiple storage backends. For this post, we focus on EBS as a storage backend. Part two of this series focuses on EFS.

Using a plugin such as REX-Ray, your Docker container is able to persist data outside of the lifespan of a running container. You don’t have to worry about the underlying storage. Instead, you simply reference a Docker volume in your task definition and let REX-Ray provide the abstraction. While this post is specific to REX-Ray, ECS is designed to be open and pass through the volume driver arguments from your task definition to Docker. You can use any volume driver (such as Portworx) that is supported by Docker.

Putting it all together

Before you can get started using Docker volumes with ECS, there are a few things you need to do.

First, you need a suitable volume driver plugin, such as REX-Ray, to provide an abstraction between the Docker volume and the underlying storage, for example, EBS or EFS. Docker designed volumes and the associated driver mechanism to be pluggable to support a variety of storage backends. Although we’ve chosen to highlight REX-Ray for this post, there are several others to choose from, including Portworx and NetShare.

Because the volume plugin interacts with the AWS storage services on your behalf, an IAM role has to be assigned to the ECS container instances. This allows REX-Ray to issue the appropriate AWS API calls and perform actions such as attaching and detaching EBS volumes, and so on.

Using REX-Ray with Amazon EBS

To help you get started, we’ve created an AWS CloudFormation template that builds a two-node ECS cluster.  The template bootstraps the rexray/ebs volume driver onto each node and assigns them an IAM role with an inline policy that allows them to call the API actions that REX-Ray needs.  The template also creates a Network Load Balancer, which is used to expose an ECS service to the internet.

Finally, you create a task definition for a stateful service—MySQL—that uses the rexray/ebs driver. Observe how the volume where MySQL stores its data is moved when the MySQL task is scheduled on another instance in the cluster.

Set up the environment

Here’s how to set up the environment for this walkthrough.

Step 1: Instantiate the AWS CloudFormation template

aws cloudformation create-stack --stack-name rexray-demo \
--capabilities CAPABILITY_NAMED_IAM \
--template-url http://s3.amazonaws.com/ecs-refarch-volume-plugins/rexray-demo.json \
--parameters ParameterKey=KeyName,ParameterValue=<keypair-name>

The ECS container instances are bootstrapped using the following script, which is given as user data in rexray-demo.json.

#open file descriptor for stderr
exec 2>>/var/log/ecs/ecs-agent-install.log
set -x
#verify that the agent is running
until curl -s http://localhost:51678/v1/metadata
do
	sleep 1
done
#install the Docker volume plugin
docker plugin install rexray/ebs REXRAY_PREEMPT=true EBS_REGION=<AWS_REGION> --grant-all-permissions
#restart the ECS agent
stop ecs 
start ecs

Step 2: Export output parameters as environment variables

This shell script exports the output parameters from the CloudFormation template and imports them as OS environment variables.  You use these variables later to create task and service definitions.

cat > get-outputs.sh << 'EOF'
#!/bin/bash
function usage {
  echo "usage: source <(./get-outputs.sh <stackname-or-stackid> <region>)"
  echo "stack name or ID must be provided or exported as the CloudFormationStack environment variable"
  echo "region must be provided or set with aws configure"
}

function main {
    #Get stack
    if [ -z "$1" ]; then
        if [ -z "$CloudFormationStack" ]; then
            echo "please provide stack name or ID"
            usage
            exit 1
        fi
    else
        CloudFormationStack="$1"
    fi
    #Get region
    if [ -z "$2" ]; then
        region=$(aws configure get region)
        if [ -z $region ]; then
            echo "please provide region"
            usage
            exit 1
        fi
    else
        region="$2"
    fi
    
    echo "#Region: $region"
    echo "#Stack: $CloudFormationStack"
    echo "#---"
    
    echo "#Checking if stack exists..."
    aws cloudformation wait stack-exists \
    --region $region \
    --stack-name $CloudFormationStack
    
    echo "#Checking if stack creation is complete..."
    aws cloudformation wait stack-create-complete \
    --region $region \
    --stack-name $CloudFormationStack
     
    echo "#Getting output keys and values..."
    echo "#---"
    aws cloudformation describe-stacks \
    --region $region \
    --stack-name $CloudFormationStack \
    --query 'Stacks[].Outputs[].[OutputKey, OutputValue]' \
    --output text | awk '{print "export", $1"="$2}'
}
main "$@"
EOF

#Add executable permissions
chmod +x get-outputs.sh

Export the output parameters. The region parameter is only needed if your Region configuration is not us-west-2, as defined in the CloudFormation template.

./get-outputs.sh && source <(./get-outputs.sh)

Step 3: Create the task definition

In this step, you create a task definition for MySQL.  MySQL is considered a stateful service because the data stored in the database has to persist beyond the life of the task.

When the MySQL task is restarted on another instance in the cluster, the scheduler and the rexray/ebs plugin ensure that the task is launched on an instance that can re-establish a connection to the EBS volume where the database is stored.

The placement constraint in the task definition informs the ECS service scheduler to launch the task in a specific Availability Zone: the zone where the EBS volume was originally created. Such a constraint is necessary because an instance cannot attach a volume that resides in a different Availability Zone.

cat > mysql-taskdef.json << EOF 
{
    "containerDefinitions": [
        {
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "${CWLogGroupName}",
                    "awslogs-region": "${AWSRegion}",
                    "awslogs-stream-prefix": "ecs"
                }
            },
            "portMappings": [
                {
                    "containerPort": 3306,
                    "protocol": "tcp"
                }
            ],
            "environment": [
                {
                    "name": "MYSQL_ROOT_PASSWORD",
                    "value": "my-secret-pw"
                }
            ],
            "mountPoints": [
                {
                    "containerPath": "/var/lib/mysql",
                    "sourceVolume": "rexray-vol"
                }
            ],
            "image": "mysql",
            "essential": true,
            "name": "mysql"
        }
    ],
    "placementConstraints": [
        {
            "type": "memberOf",
            "expression": "attribute:ecs.availability-zone==${AvailabilityZone}"
        }
    ],
    "memory": "512",
    "family": "mysql",
    "networkMode": "awsvpc",
    "requiresCompatibilities": [
        "EC2"
    ],
    "cpu": "512",
    "volumes": [
        {
            "name": "rexray-vol",
            "dockerVolumeConfiguration": {
                "autoprovision": true,
                "scope": "shared",
                "driver": "rexray/ebs",
                "driverOpts": {
                    "volumetype": "gp2",
                    "size": "5"
                }
            }
        }
    ]
}
EOF

Docker volume support adds several new parameters to the ECS task definition. These include the volume type, scope, driver, and Docker options and labels. A volume can either be scoped to a single, specific task or it can be shared among multiple tasks.

When a volume is scoped to a task, it is not meant to be shared across different running tasks.  In contrast, a shared volume is for use cases where the volume lifecycle is independent of the ECS task. The volume can be used by different tasks concurrently or at different times. It is primarily intended for use cases such as single-task applications where the volume persists after the task dies and is re-used when the task starts again. Another use case is when multiple tasks on the same EC2 container instance access the volume concurrently.

The autoprovision parameter is used to specify whether ECS manages the lifecycle of the volume.  When this is set to true, ECS automatically provisions the volume for you, which is what you are doing in the above example.  When it’s set to false, ECS assumes that the volume already exists.  For this example, you could instead set autoprovision to false and run the following command to create a volume:

aws ec2 create-volume --size 5 --volume-type gp2 \
--availability-zone $AvailabilityZone \
--tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=rexray-vol}]'

The driver options configure the type of EBS storage used (for example, gp2, standard, or io1), the size of the volume to provision, IOPS, and encryption.  The specific options vary depending on the volume plugin that you are using.
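As a sketch of what this might look like for a larger, encrypted io1 volume, the driverOpts block below swaps in different option values. The option names (volumetype, size, iops, encrypted) are REX-Ray EBS driver options as I understand them; confirm them against the plugin documentation for your version before relying on this:

"driverOpts": {
    "volumetype": "io1",
    "size": "100",
    "iops": "1000",
    "encrypted": "true"
}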

Register the task definition and export the task definition ARN from the result (exporting the ARN makes it visible to the drain-instance.sh script that you run in Step 6):

export TaskDefinitionArn=$(aws ecs register-task-definition \
--cli-input-json 'file://mysql-taskdef.json' \
| jq -r .taskDefinition.taskDefinitionArn)

Step 4: Create a service definition

In this step, you create a service definition for MySQL.  An ECS service is a long-running task that is monitored by the service scheduler.  If the task dies or becomes unhealthy, the scheduler automatically attempts to restart the task.

The MySQL service is fronted by a Network Load Balancer that is configured to forward traffic on port 3306 to the tasks registered with a specific target group.  The desired count is the number of copies of the task to run.  The minimum and maximum healthy percent parameters tell the scheduler to run exactly the desired number of copies of this task at a time: it does not try to start a new task until an existing one has stopped.

cat > mysql-svcdef.json << EOF 
{
    "cluster": "${ECSClusterName}",
    "serviceName": "mysql-svc",
    "taskDefinition": "${TaskDefinitionArn}",
    "loadBalancers": [
        {
            "targetGroupArn": "${MySQLTargetGroupArn}",
            "containerName": "mysql",
            "containerPort": 3306
        }
    ],
    "desiredCount": 1,
    "launchType": "EC2",
    "healthCheckGracePeriodSeconds": 60, 
    "deploymentConfiguration": {
        "maximumPercent": 100,
        "minimumHealthyPercent": 0
    },
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": [
                "${SubnetId}"
            ],
            "securityGroups": [
                "${SecurityGroupId}"
            ],
            "assignPublicIp": "DISABLED"
        }
    }
}
EOF

Create the MySQL service:

SvcDefinitionArn=$(aws ecs create-service \
--cli-input-json file://mysql-svcdef.json \
| jq -r .service.serviceArn)
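Optionally, you can wait for the service to reach a steady state before moving on. This is a convenience step rather than part of the original walkthrough; it uses the cluster name exported earlier and the service name from mysql-svcdef.json:

aws ecs wait services-stable \
--cluster $ECSClusterName \
--services mysql-svc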

Step 5: Connect to the MySQL service

After the service is running, configure a MySQL client, such as MySQL Workbench, to connect to the service:

  1. For Connection Name, type “rexray-demo”.
  2. For Hostname, copy and paste the DNS name of the Network Load Balancer.
  3. For Password, type the default password found in the mysql-taskdef.json file.
  4. Choose Test Connection, Close.
  5. Under MySQL Connections, open the rexray-demo connection.

MySQL Workbench
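If you prefer the command line, you can also connect with the mysql client instead of MySQL Workbench. This is a sketch that assumes the client is installed locally; replace the placeholder with the DNS name of the Network Load Balancer shown in the EC2 console:

mysql -h <nlb-dns-name> -P 3306 -u root -p
# When prompted, enter the MYSQL_ROOT_PASSWORD value from mysql-taskdef.json

The SQL statements below can then be run either from MySQL Workbench or from this prompt.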

In the Query window, paste the following:

CREATE DATABASE rexraydb;
USE rexraydb;
CREATE TABLE pets (name VARCHAR(20), breed VARCHAR(20));
SHOW TABLES;
DESCRIBE pets;
INSERT INTO pets VALUES ('Fluffy', 'Poodle');
SELECT * FROM pets;

You can execute each line separately by placing the cursor on a line and clicking the execute statement button.

Execute MySQL commands

Step 6: Drain the instance

Now that you have a MySQL database server running in a container and persisting its data, make sure that it will survive a container replacement.

Docker containers by their nature are designed to be ephemeral. If you upgrade the underlying host operating system, you must drain the tasks off of the instance and let them be re-scheduled onto another ECS host. Below, I show the behavior of persisting the MySQL instance’s data to an EBS volume and allowing the task to be re-scheduled.

The following script identifies the instance that is currently running the task and puts it in a draining state.  This forces the task to be rescheduled onto the other EC2 container instance in the cluster.

cat > drain-instance.sh << 'EOF'
#!/bin/bash

echo "Region [$AWSRegion]"
echo "Cluster [$ECSClusterName]"
echo "Task Definition [$TaskDefinitionArn]"

TaskArns=$(aws ecs list-tasks --region $AWSRegion \
--cluster $ECSClusterName --query taskArns --output text)
echo "Task ARNs [$TaskArns]"

ContainerInstanceArns=$(aws ecs describe-tasks \
--region $AWSRegion --cluster $ECSClusterName \
--tasks $TaskArns \
--query 'tasks[?taskDefinitionArn==`'$TaskDefinitionArn'`].containerInstanceArn' \
--output text)
echo "Container Instance ARNs [$ContainerInstanceArns]"

echo "DRAINING Instances"
aws ecs update-container-instances-state --region $AWSRegion \
--cluster $ECSClusterName --container-instances $ContainerInstanceArns \
--status "DRAINING"

EOF

In the ECS console, if you click on the cluster and then the tab for the cluster’s tasks, you see the container instance ID for the MySQL task:

Clicking the link of the container instance ID takes you to another page that shows the EC2 instance ID of the instance where the MySQL task is running:

Now run the script:

chmod +x drain-instance.sh
./drain-instance.sh

When you run the script, the tasks on the draining instance are stopped. Because you have an ECS service definition for MySQL, ECS launches new tasks on other ECS instances in the cluster that meet the placement constraints. In this example, you placed a constraint on the Availability Zone of the EBS volume as it’s not possible to detach and re-attach volumes across Availability Zones. Because the volume already exists, REX-Ray attaches the existing volume to the new task. When MySQL starts, it sees this as its data volume and you have access to the recently stored data.
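To confirm this behavior from the CLI, you can check that a replacement task is running and see which instance the rexray-vol volume is now attached to. This is an optional verification sketch that reuses the variables exported earlier:

# List the running tasks in the cluster; a replacement task appears once rescheduling completes
aws ecs list-tasks --region $AWSRegion \
--cluster $ECSClusterName --desired-status RUNNING

# Show the state, Availability Zone, and attached instance of the rexray-vol volume
aws ec2 describe-volumes --region $AWSRegion \
--filters Name=tag:Name,Values=rexray-vol \
--query 'Volumes[].{State:State,AZ:AvailabilityZone,InstanceId:Attachments[0].InstanceId}'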

Step 7: Re-connect to the MySQL service

After you see that a new task has been provisioned on the ECS cluster, you can return to MySQL Workbench and attempt to run the following query:

USE rexraydb;
SELECT * FROM pets;

You may get an error message stating “The MySQL server has gone away.” This usually means that the new ECS task has not completed starting or hasn’t been registered yet as a healthy target behind the Network Load Balancer. If you wait a little longer and try again, you should see the same results in the query grid as before.
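To check from the CLI whether the new task has registered as a healthy target, you can query the target group using the MySQLTargetGroupArn value exported from the CloudFormation outputs:

aws elbv2 describe-target-health \
--region $AWSRegion \
--target-group-arn $MySQLTargetGroupArn \
--query 'TargetHealthDescriptions[].TargetHealth.State'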

This environment is meant as a demonstration of how to use Docker volume plugins with ECS to support persistent workloads. For an actual production implementation, I recommend scoping the VPC and security groups to only allow network access from trusted resources. This post creates a MySQL server that is accessible from the internet. In addition, you should set your own strong MySQL root password, among other things.

To clean up this demo, take the following steps.

Delete the service.

aws ecs update-service --cluster $ECSClusterName \
--service $SvcDefinitionArn \
--desired-count 0
aws ecs delete-service --cluster $ECSClusterName \
--service $SvcDefinitionArn

Delete the volume.

Even though you deleted the task and the service, you still need to clean up the EBS volume that you created. You created this volume and referenced it in the ECS task definition. ECS passed this information along to Docker running on the host, which in turn handed it to REX-Ray (your volume driver), which knew how to attach the EBS volume and map it to the container.

The easiest way to delete this volume is from the EC2 console. In the list of volumes, you should see a volume named rexray-vol that is unattached (state=available). Delete this volume as it is no longer needed.

 

REX-Ray Volume

Otherwise, you can run the following command, which grabs the volume ID and deletes it:

rexrayVolumeID=$(aws ec2 describe-volumes --filters Name="tag:Name",Values=rexray-vol \
--query "Volumes[].VolumeId" --output text)
aws ec2 delete-volume --volume-id $rexrayVolumeID

Delete the CloudFormation template.

Lastly, delete the CloudFormation template. This removes the rest of the environment that was pre-created for this exercise.

aws cloudformation delete-stack --stack-name rexray-demo

Summary

While it was possible to use Docker volume plugins with ECS previously, doing so required you to create volumes out of band, that is, outside of ECS, and create placement constraints to restrict where tasks could be run. With native support for Docker volumes, volumes can now be provisioned simply by adding a handful of parameters to an ECS task definition.

Moreover, the ECS scheduler is now volume plugin aware.  Instances that have a volume driver installed on them automatically get annotated with attributes that inform the scheduler where to place tasks that use a particular driver.  Together, these features help you to run stateful, storage-intensive applications such as databases, machine learning, and data processing applications, streaming applications like Kafka, as well as applications that need additional scratch space.  We look forward to hearing about the use cases that this new feature enables.

– Jeremy, Ronnie, and Tiffany

Improving application performance and reducing costs with Amazon EBS-Optimized Instance burst capability

Post Syndicated from Geoff Murase original https://aws.amazon.com/blogs/compute/improving-application-performance-and-reducing-costs-with-amazon-ebs-optimized-instance-burst-capability/

Contributed by Sooraj Prasannan, Senior Product Manager, Amazon Elastic Block Store

In November 2017, Amazon EC2 introduced C5 compute-intensive instances and M5 general-purpose instances. In the first half of 2018, we released EC2 C5d instances and M5d instances by adding high-speed, ultra-low latency local NVMe storage to the EC2 C5 and M5 instance families. EC2 C5/C5d and M5/M5d instances are built on the Nitro system. This collection of AWS-built hardware and software components enables high performance, high availability, high security, and bare metal capabilities to reduce virtualization overhead.

During the design of the Nitro system, we analyzed real-world workloads and recognized the need for smaller instance sizes to drive higher performance from their Amazon EBS volumes. We found that the majority of application storage needs are bursty, with short, intense periods of high I/O and plenty of idle time between bursts. To improve the experience for these workloads, we developed burst capability for smaller instance sizes. Available on EC2 C5/C5d and M5/M5d instances, this feature enables large, xlarge, and 2xlarge instance sizes to drive the same performance as the 4xlarge instance for 30 minutes each day.

For applications with spiky Amazon EBS demand, you can right-size your instances based on your CPU and memory requirements and still meet your EBS-optimized instance performance requirements. This higher performance also enables you to speed up sections of your workflow dependent on EBS-optimized instance performance. Faster workflows result in quicker job completions and improved resource utilization. The burst capability ultimately enables you to reduce costs by right-sizing your instance and improving total resource usage.

With this performance increase, you will be able to handle unplanned spikes in demand without any impact to your application performance. You can now size your instances based on historical average trends. This burst capability gives you more performance to absorb spikes without affecting your customer experience.

Using Amazon CloudWatch metrics to monitor burst usage

For better visibility into your performance, instances based on the Nitro system provide Amazon CloudWatch metrics to help profile your usage. Based on the usage profile, you can decide if smaller instances meet your requirements.

These instances give you the ability to monitor your usage via instance-level CloudWatch metrics for operations (EBSReadOps and EBSWriteOps) and bytes transferred (EBSReadBytes and EBSWriteBytes). For more information on these metrics, see List of available CloudWatch metrics for your instances. These metrics support basic monitoring (five-minute frequency) by default, but you can enable detailed monitoring (one-minute frequency) for an additional cost. For more information, see Amazon CloudWatch pricing.
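For example, you can enable detailed monitoring on an instance from the CLI (the instance ID below is a placeholder):

aws ec2 monitor-instances --instance-ids i-0123456789abcdef0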

For large, xlarge, and 2xlarge instances, we also provide burst balance metrics. EBSIOBalance% monitors the instance I/O burst bucket, and EBSByteBalance% monitors the instance byte burst bucket. These metrics give information about the percentage of I/O or bytes credits remaining in the respective burst buckets. The metrics are expressed as a percentage, where 100% means that the instance has accumulated the maximum number of credits. You can set up an alarm that triggers if the balance gets too low.
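As a sketch of such an alarm, the following CLI command alarms when EBSIOBalance% averages below 20% for 15 minutes. The instance ID, SNS topic ARN, and threshold are illustrative placeholders; these per-instance metrics are published in the AWS/EC2 namespace:

aws cloudwatch put-metric-alarm \
--alarm-name my-ebs-io-balance-low \
--namespace AWS/EC2 \
--metric-name EBSIOBalance% \
--dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
--statistic Average --period 300 --evaluation-periods 3 \
--threshold 20 --comparison-operator LessThanThreshold \
--alarm-actions arn:aws:sns:us-west-1:123456789012:my-alerts-topic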

To demonstrate these metrics, we launched an m5.large instance. We then attached a 500GB io1 Amazon EBS volume with 32,000 provisioned IOPS to the instance. Amazon EBS volumes attached to instances based on the Nitro system are exposed as NVMe devices.

First, we ran a large block (128 KiB) test using fio to /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol02f2f9a66c2ebfd66 and monitored both EBSIOBalance% and EBSByteBalance%.

$ sudo fio --filename=/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol02f2f9a66c2ebfd66 \
--rw=randread --bs=128k --runtime=2400 --time_based=1 \
--iodepth=32 --ioengine=libaio --direct=1 --name=large-block-test

Because this is a large block workload, it’s not driving enough IOPS to deplete EBSIOBalance%. It depletes EBSByteBalance% instead, as shown in the following image.

Then we ran a small block test to understand how it affects EBSIOBalance% and EBSByteBalance%.

$ sudo fio --filename=/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol02f2f9a66c2ebfd66 \
--rw=randread --bs=16k --runtime=2400 --time_based=1 \
--iodepth=32 --ioengine=libaio --direct=1 --name=small-block-test

Because this is a small block test, it drives higher IOPS than bytes/second. Hence, EBSIOBalance% drops faster than EBSByteBalance%, as shown in the following image.

As long as EBSIOBalance% and EBSByteBalance% are above 0%, the instance can drive the burst performance. When the instance I/O activity is below the baseline rate, the burst buckets refill. After the tests finished, we paused all I/O from the instance. This period of inactivity allows the instance burst buckets to refill, as EBSIOBalance% and EBSByteBalance% show in the following image.

The refill rate for a burst bucket is the difference between the baseline rate and the instance I/O activity. For example, m5.large has a baseline throughput rate of 60 MB/s and a baseline IOPS rate of 3600 IOPS. Suppose the instance I/O activity is 10 MB/s and 1000 IOPS. The byte bucket fills at the rate of 50 MB/s (60 MB/s minus 10 MB/s). The IOPS bucket fills at the rate of 2600 IOPS (3600 IOPS minus 1000 IOPS). For the baseline rates for the different instances, see Amazon EBS–optimized instances. In addition, we top off the burst buckets every 24 hours, which means that the instance has burst performance available for 30 minutes each day.

Performance enhancements

We have continued to make enhancements to the Nitro system. With the latest set of enhancements, we have increased the maximum burst bandwidth on the large, xlarge, and 2xlarge EC2 C5/C5d and M5/M5d instances to 3.5 Gbps, up from 2.25 Gbps and 2.12 Gbps, respectively. We have also increased the maximum burst IOPS for EC2 C5/C5d to 20,000 IOPS and to 18,750 IOPS for M5/M5d, up from 16,000 IOPS for both. All new EC2 C5/C5d and M5/M5d smaller instances can take advantage of this performance increase at no additional cost.

For the latest list of instances based on the Nitro system that support this burst feature and their corresponding performance numbers, see Amazon EBS–optimized instances.

New – Lifecycle Management for Amazon EBS Snapshots

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-lifecycle-management-for-amazon-ebs-snapshots/

It is always interesting to zoom in on the history of a single AWS service or feature and watch how it has evolved over time in response to customer feedback. For example, Amazon Elastic Block Store (EBS) launched a decade ago and has been gaining more features and functionality ever since. Here are a few of the most significant announcements:

Several of the items that I chose to highlight above make EBS snapshots more useful and more flexible. As you may already know, it is easy to create snapshots. Each snapshot is a point-in-time copy of the blocks that have changed since the previous snapshot, with automatic management to ensure that only the data unique to a snapshot is removed when it is deleted. This incremental model reduces your costs and minimizes the time needed to create a snapshot.

Because snapshots are so easy to create and use, our customers create a lot of them, and make great use of tags to categorize, organize, and manage them. Going back to my list, you can see that we have added multiple tagging features over the years.

Lifecycle Management – The Amazon Data Lifecycle Manager
We want to make it even easier for you to create, use, and benefit from EBS snapshots! Today we are launching Amazon Data Lifecycle Manager to automate the creation, retention, and deletion of Amazon EBS volume snapshots. Instead of creating snapshots manually and deleting them in the same way (or building a tool to do it for you), you simply create a policy, indicating (via tags) which volumes are to be snapshotted, set a retention model, fill in a few other details, and let Data Lifecycle Manager do the rest. Data Lifecycle Manager is powered by tags, so you should start by setting up a clear and comprehensive tagging model for your organization (refer to the links above to learn more).

It turns out that many of our customers have invested in tools to automate the creation of snapshots, but have skimped on the retention and deletion. Sooner or later they receive a surprisingly large AWS bill and find that their scripts are not working as expected. Data Lifecycle Manager should help them save money and rest assured that their snapshots are being managed as expected.

Creating and Using a Lifecycle Policy
Data Lifecycle Manager uses lifecycle policies to figure out when to run, which volumes to snapshot, and how long to keep the snapshots around. You can create the policies in the AWS Management Console, from the AWS Command Line Interface (CLI) or via the Data Lifecycle Manager APIs; I’ll use the Console today. Here are my EBS volumes, all suitably tagged with a department:

I access the Lifecycle Manager from the Elastic Block Store section of the menu:

Then I click Create Snapshot Lifecycle Policy to proceed:

Then I create my first policy:

I use tags to specify the volumes that the policy applies to. If I specify multiple tags, then the policy applies to volumes that have any of the tags:

I can create snapshots at 12 or 24 hour intervals, and I can specify the desired snapshot time. Snapshot creation will start no more than an hour after this time, with completion based on the size of the volume and the degree of change since the last snapshot.

I can use the built-in default IAM role or I can create one of my own. If I use my own role, I need to enable the EC2 snapshot operations and all of the DLM (Data Lifecycle Manager) operations; read the docs to learn more.

Newly created snapshots are automatically tagged with aws:dlm:lifecycle-policy-id and aws:dlm:lifecycle-schedule-name; I can also specify up to 50 additional key/value pairs for each policy:

I can see all of my policies at a glance:

I took a short break and came back to find that the first set of snapshots had been created, as expected (I configured the console to show the two tags created on the snapshots):

Things to Know
Here are a couple of things to keep in mind when you start to use Data Lifecycle Manager to automate your snapshot management:

Data Consistency – Snapshots contain the data from all completed I/O operations, also known as crash-consistent.

Pricing – You can create and use Data Lifecycle Manager policies at no charge; you pay the usual storage charges for the EBS snapshots that it creates.

Availability – Data Lifecycle Manager is available in the US East (N. Virginia), US West (Oregon), and EU (Ireland) Regions.

Tags and Policies – If a volume has more than one tag and the tags match multiple policies, each policy will create a separate snapshot and both policies will govern the retention. No two policies can specify the same key/value pair for a tag.

Programmatic Access – You can create and manage policies programmatically! Take a look at the CreateLifecyclePolicy, GetLifecyclePolicies, and UpdateLifeCyclePolicy functions to get started. You can also write an AWS Lambda function that runs in response to the createSnapshot event.
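As a rough sketch of CreateLifecyclePolicy from the CLI, the following creates a daily policy for volumes tagged Dept=Eng. The role ARN, tag, schedule time, and retention count are illustrative placeholders:

aws dlm create-lifecycle-policy \
--description "Daily snapshots for Dept=Eng volumes" \
--state ENABLED \
--execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
--policy-details '{
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{"Key": "Dept", "Value": "Eng"}],
    "Schedules": [{
        "Name": "DailySnapshots",
        "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["09:00"]},
        "RetainRule": {"Count": 5},
        "CopyTags": true
    }]
}'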

Error Handling – Data Lifecycle Manager generates a “DLM Policy State Change” event if a policy enters the error state.

In the Works – As you might have guessed from the name, we plan to add support for additional AWS data sources over time. We also plan to support policies that will let you do weekly and monthly snapshots, and also expect to give you additional scheduling flexibility.

Jeff;

Now You Can Create Encrypted Amazon EBS Volumes by Using Your Custom Encryption Keys When You Launch an Amazon EC2 Instance

Post Syndicated from Nishit Nagar original https://aws.amazon.com/blogs/security/create-encrypted-amazon-ebs-volumes-custom-encryption-keys-launch-amazon-ec2-instance-2/

Amazon Elastic Block Store (EBS) offers an encryption solution for your Amazon EBS volumes so you don’t have to build, maintain, and secure your own infrastructure for managing encryption keys for block storage. Amazon EBS encryption uses AWS Key Management Service (AWS KMS) customer master keys (CMKs) when creating encrypted Amazon EBS volumes, providing you all the benefits associated with using AWS KMS. You can specify either an AWS managed CMK or a customer-managed CMK to encrypt your Amazon EBS volume. If you use a customer-managed CMK, you retain granular control over your encryption keys, such as having AWS KMS rotate your CMK every year. To learn more about creating CMKs, see Creating Keys.

In this post, we demonstrate how to create an encrypted Amazon EBS volume using a customer-managed CMK when you launch an EC2 instance from the EC2 console, AWS CLI, and AWS SDK.

Creating an encrypted Amazon EBS volume from the EC2 console

Follow these steps to launch an EC2 instance from the EC2 console with Amazon EBS volumes that are encrypted by customer-managed CMKs:

  1. Sign in to the AWS Management Console and open the EC2 console.
  2. Select Launch instance, and then, in Step 1 of the wizard, select an Amazon Machine Image (AMI).
  3. In Step 2 of the wizard, select an instance type, and then provide additional configuration details in Step 3. For details about configuring your instances, see Launching an Instance.
  4. In Step 4 of the wizard, specify additional EBS volumes that you want to attach to your instances.
  5. To create an encrypted Amazon EBS volume, first add a new volume by selecting Add new volume. Leave the Snapshot column blank.
  6. In the Encrypted column, select your CMK from the drop-down menu. You can also paste the full Amazon Resource Name (ARN) of your custom CMK key ID in this box. To learn more about finding the ARN of a CMK, see Working with Keys.
  7. Select Review and Launch. Your instance will launch with an additional Amazon EBS volume with the key that you selected. To learn more about the launch wizard, see Launching an Instance with Launch Wizard.

Creating Amazon EBS encrypted volumes from the AWS CLI or SDK

You also can use RunInstances to launch an instance with additional encrypted Amazon EBS volumes by setting Encrypted to true and adding KmsKeyId with the full ARN (or key ID) of your CMK in the Ebs block of the block device mapping, as shown in the following command:

$> aws ec2 run-instances --image-id ami-b42209de --count 1 --instance-type m4.large --region us-east-1 --block-device-mappings file://mapping.json

In this example, mapping.json contains a single block device mapping that describes the EBS volume you want to create:


[
    {
        "DeviceName": "/dev/sda1",
        "Ebs": {
            "DeleteOnTermination": true,
            "VolumeSize": 100,
            "VolumeType": "gp2",
            "Encrypted": true,
            "KmsKeyId": "arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef"
        }
    }
]

You can also launch instances with additional encrypted EBS data volumes via an Auto Scaling group or Spot Fleet by creating a launch template that includes the same block device mapping. Note that create-launch-template takes its configuration as launch template data rather than the individual run-instances options. For example:

$> aws ec2 create-launch-template --launch-template-name MyLTName --region us-east-1 \
--launch-template-data '{"ImageId": "ami-b42209de", "InstanceType": "m4.large", "BlockDeviceMappings": [{"DeviceName": "/dev/sda1", "Ebs": {"DeleteOnTermination": true, "VolumeSize": 100, "VolumeType": "gp2", "Encrypted": true, "KmsKeyId": "arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef"}}]}'

To learn more about launching an instance with the AWS CLI or SDK, see the AWS CLI Command Reference.

In this blog post, we demonstrated a single-step, streamlined process for creating Amazon EBS volumes that are encrypted under your CMK when you launch your EC2 instance. To start using this functionality, navigate to the EC2 console.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon EC2 forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.