Tag Archives: launch

Some Unique Sessions at re:Invent 2018

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/some-unique-sessions-at-reinvent-2018/

We recently added three unique breakout sessions to the re:Invent Session Catalog and I want to make sure that you are aware of them.

It’s rare for Distinguished Engineers like Peter Vosshall, Principal Engineers like Colm MacCarthaigh, and Directors and VPs responsible for entire AWS services to all speak within a three-day period. So, you should take this opportunity to hear from Peter and Colm, and from Deepak Singh (AWS Containers), David Richardson (Serverless), and Ken Exner (Developer Tools) at re:Invent 2018.

Grab a seat at How AWS Minimizes the Blast Radius of Failures to hear Peter Vosshall speak candidly about the philosophies that guide operations at AWS and the techniques AWS uses to reduce the blast radius of systems failures.

Join Closing Loops and Opening Minds: How to Take Control of Systems, Big and Small to deep dive into the theories behind AWS control plane design with Colm MacCarthaigh.

Or, sit in while the AWS leaders behind Containers, Serverless, and Developer Tools discuss the changes to architectural patterns, operational models, and software delivery that take place on the journey from monolith to microservices in their joint Leadership Session: Using DevOps, Microservices, and Serverless to Accelerate Innovation.

Seats are still available, so reserve yours before it is too late.

See you in Vegas!

Jeff;

Amazon S3 Block Public Access – Another Layer of Protection for Your Accounts and Buckets

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-s3-block-public-access-another-layer-of-protection-for-your-accounts-and-buckets/

Newly created Amazon S3 buckets and objects are (and always have been) private and protected by default, with the option to use Access Control Lists (ACLs) and bucket policies to grant access to other AWS accounts or to public (anonymous) requests. The ACLs and policies give you lots of flexibility. You can grant permissions to multiple accounts, restrict access to specific IP addresses, require the use of Multi-Factor Authentication (MFA), allow other accounts to upload new objects to a bucket, and much more.

We want you to be able to use public buckets and objects when you need them, while giving you tools to help ensure that you don’t make them publicly accessible due to a simple mistake or misunderstanding. For example, last year we provided you with a Public indicator to let you know at a glance which buckets are publicly accessible:

The bucket view is sorted so that public buckets appear at the top of the page by default.

We also made Trusted Advisor‘s bucket permission check free:

New Amazon S3 Block Public Access
Today we are making it easier for you to protect your buckets and objects with the introduction of Amazon S3 Block Public Access. This is a new level of protection that works at the account level and also on individual buckets, including those that you create in the future. You have the ability to block existing public access (whether it was specified by an ACL or a policy) and to ensure that public access is not granted to newly created items. If an AWS account is used to host a data lake or another business application, blocking public access will serve as an account-level guard against accidental public exposure. Our goal is to make clear that public access is to be used for web hosting!

This feature is designed to be easy to use, and can be accessed from the S3 Console, the CLI, the S3 APIs, and from within CloudFormation templates. Let’s start with the S3 Console and a bucket that is public:

I can exercise control at the account level by clicking Public access settings for this account:

I have two options for managing public ACLs and two for managing public bucket policies. Let’s take a closer look at each one:

Block new public ACLs and uploading public objects – This option disallows the use of new public bucket or object ACLs, and is used to ensure that future PUT requests that include them will fail. It does not affect existing buckets or objects. Use this setting to protect against future attempts to use ACLs to make buckets or objects public. If an application tries to upload an object with a public ACL or if an administrator tries to apply a public access setting to the bucket, this setting will block the public access setting for the bucket or the object.

Remove public access granted through public ACLs – This option tells S3 not to evaluate any public ACL when authorizing a request, ensuring that no bucket or object can be made public by using ACLs. This setting overrides any current or future public access settings for current and future objects in the bucket. If an existing application is currently uploading objects with public ACLs to the bucket, this setting will override the setting on the object.

Block new public bucket policies – This option disallows the use of new public bucket policies, and is used to ensure that future PUT requests that include them will fail. Again, this does not affect existing buckets or objects. This setting ensures that a bucket policy cannot be updated to grant public access.

Block public and cross-account access to buckets that have public policies – If this option is set, access to buckets that are publicly accessible will be limited to the bucket owner and to AWS services. This option can be used to protect buckets that have public policies while you work to remove the policies; it serves to protect information that is logged to a bucket by an AWS service from becoming publicly accessible.

To make changes, I click Edit, check the desired public access settings, and click Save:

I recommend that you use these settings for any account that is used for internal AWS applications!

Then I confirm my intent:

After I do this, I need to test my applications and scripts to ensure that everything still works as expected!

When I make these settings at the account level, they apply to my current buckets, and also to those that I create in the future. However, I can also set these options on individual buckets if I want to take a more fine-grained approach to access control. If I set some options at the account level and others on a bucket, the protections are additive. I select a bucket and click Edit public access settings:

Then I select the desired options:

Since I have already denied all public access at the account level, this is actually redundant, but I want you to know that you have control at the bucket level. One thing to note: I cannot override an account-level setting by changing the options that I set at the bucket level.

I can see the public access status of all of my buckets at a glance:

Programmatic Access
I can also access this feature by making calls to the S3 API. Here are the functions:

GetPublicAccessBlock – Retrieve the public access block options for an account or a bucket.

PutPublicAccessBlock – Set the public access block options for an account or a bucket.

DeletePublicAccessBlock – Remove the public access block options from an account or a bucket.

GetBucketPolicyStatus – See if the bucket access policy is public or not.
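If you prefer the command line, the same operations are exposed there as well. Here’s a rough sketch rather than a copy of my setup (the bucket name and account ID are placeholders):

# Apply all four protections to a single bucket
$ aws s3api put-public-access-block \
    --bucket my-example-bucket \
    --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Check whether the bucket's policy is considered public
$ aws s3api get-bucket-policy-status --bucket my-example-bucket

# Account-level settings go through the S3 Control API
$ aws s3control put-public-access-block \
    --account-id 123456789012 \
    --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true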

I can also set the options for a bucket when I create it via a CloudFormation template:

{
   "Type":"AWS::S3::Bucket",
   "Properties":{
      "PublicAccessBlockConfiguration":{
         "BlockPublicAcls":true,
         "IgnorePublicAcls":false,
         "BlockPublicPolicy":true,
         "RestrictPublicBucket":true
      }
   }
}

Things to Know
Here are a couple of things to keep in mind when you are making use of S3 Block Public Access:

New Buckets – Going forward, buckets that you create using the S3 Console will have all four of the settings enabled, as recommended for any application other than web hosting. You will need to disable one or more of the settings in order to make the bucket public.

Automated Reasoning – The determination of whether a given policy or ACL is considered public is made using our Zelkova Automated Reasoning system (you can read How AWS Uses Automated Reasoning to Help You Achieve Security at Scale to learn more).

Organizations – If you are using AWS Organizations, you can use a Service Control Policy (SCP) to restrict the settings that are available to the AWS accounts within the organization. For example, you can set the desired public access settings for any desired accounts and then use an SCP to ensure that the settings cannot be changed by the account owners.

Charges – There is no charge for the use of this feature; you pay the usual prices for all requests that you make to the S3 API.

Available Now
Amazon S3 Block Public Access is available now in all commercial AWS regions and you can (and should) start using it today!

Jeff;

New – Train Custom Document Classifiers with Amazon Comprehend

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-train-custom-document-classifiers-with-amazon-comprehend/

Amazon Comprehend gives you the power to process natural-language text at scale (read my introductory post, Amazon Comprehend – Continuously Trained Natural Language Processing, to learn more). After launching in late 2017 with support for English and Spanish, we have added customer-driven features including Asynchronous Batch Operations, Syntax Analysis, support for additional languages (French, German, Italian, and Portuguese), and availability in more regions.

Using automatic machine learning (AutoML), Comprehend lets you create custom Natural Language Processing (NLP) models using data that you already have, without the need to learn the ins and outs of ML. Based on your data set and use case, it automatically selects the right algorithm and tuning parameters, then builds and tests the resulting model.

If you already have a collection of tagged documents—support tickets, call center conversations (via Amazon Transcribe), forum posts, and so forth—you can use them as a starting point. In this context, tagged simply means that you have examined each document and assigned a label that characterizes it in the desired way. Custom Classification needs at least 50 documents for each label, but can do an even better job if it has hundreds or thousands.

In this post I will focus on Custom Classification, and will show you how to train a model that separates clean text from text that contains profanities. Then I will show you how to use the model to classify new text.

Using Classifiers
My starting point is a CSV file of training text that looks like this (I blurred all of the text; trust me that there’s plenty of profanity):

The training data must reside in an S3 object, with one label and one document per line:

Next, I navigate to the Amazon Comprehend Console and click Classification. I don’t have any existing classifiers, so I click Create classifier to make one:

I name my classifier and select a language for my documents, choose the S3 bucket where my training data resides, and then create an AWS Identity and Access Management (IAM) role that has permission to access the bucket. Then I click Create classifier to proceed:
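Behind the scenes, the console step above boils down to a single API call. Here’s a rough CLI sketch of it (the classifier name, bucket, region, and role ARN are all placeholders, not the values from my walkthrough):

# Create a custom classifier from labeled training data in S3
$ aws comprehend create-document-classifier \
    --document-classifier-name ProfanityFilter \
    --language-code en \
    --input-data-config S3Uri=s3://my-training-bucket/profanity_training.csv \
    --data-access-role-arn arn:aws:iam::123456789012:role/ComprehendS3AccessRole

# Check on training progress until the status reaches TRAINED
$ aws comprehend describe-document-classifier \
    --document-classifier-arn arn:aws:comprehend:us-east-1:123456789012:document-classifier/ProfanityFilter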

The training process begins right away:

The status changes to Trained within minutes, and now I am ready to create an analysis job to classify some text, some of it also filled with profanity:

I put this text into another S3 bucket, click Analysis in the console, and click Create job. Then I give my job a name, choose Custom classification as the Analysis type, and select the classifier that I just built. I also point to the input bucket (with the file above), and another bucket that will receive the results, classified per my newly built classifier, and click Create job to proceed (important safety tip: if you use the same S3 bucket for the source and the destination, be sure to reference the input document by name):
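For reference, here’s a hedged sketch of the same analysis job via the CLI (all of the names, buckets, and ARNs below are placeholders):

# Run the new classifier against a file of documents, one document per line
$ aws comprehend start-document-classification-job \
    --job-name ProfanityTest \
    --document-classifier-arn arn:aws:comprehend:us-east-1:123456789012:document-classifier/ProfanityFilter \
    --input-data-config S3Uri=s3://my-input-bucket/profanity_test.csv,InputFormat=ONE_DOC_PER_LINE \
    --output-data-config S3Uri=s3://my-output-bucket/results/ \
    --data-access-role-arn arn:aws:iam::123456789012:role/ComprehendS3AccessRole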

The job begins right away, and also takes just minutes to complete:

The results are stored in the S3 bucket that I selected when I created the job:

Each line of output corresponds to a document in the input file:

Here’s a detailed look at one line:

{
   "File":"profanity_test.csv",
   "Line":"0",
   "Classes":[
      {
         "Name":"PROFANITY",
         "Score":1.0
      },
      {
         "Name":"NON_PROFANITY",
         "Score":0.0
      }
   ]
}

As you can see, the new Classification Service is powerful and easy to use. I was able to get useful, high-quality results in minutes without knowing anything about Machine Learning.

By the way, you can also train and test models using the Amazon Comprehend CLI and the Amazon Comprehend APIs.

Available Now
Amazon Comprehend Classification Service is available today, in all regions where Comprehend is available.

Jeff;

New – EC2 Auto Scaling Groups With Multiple Instance Types & Purchase Options

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-ec2-auto-scaling-groups-with-multiple-instance-types-purchase-options/

Earlier this year I told you about EC2 Fleet, an AWS building block that makes it easy for you to create fleets that are built from a combination of EC2 On-Demand, Reserved, and Spot Instances that span multiple EC2 instance types. In that post I showed you how to create a fleet and walked through an example that created a genomics processing pipeline that used a mix of M4 and M5 instances. I also dropped a hint to let you know that we were working on integrating EC2 Fleet with Auto Scaling and other AWS services.

Auto Scaling Across Multiple Instance Types & Purchase Options
Today I am happy to let you know that you can now create Auto Scaling Groups that grow and shrink in response to changing conditions, while also making use of the most economical combination of EC2 instance types and pricing models. You have full control of the instance types that will be used to build your group, along with the ability to control the mix of On-Demand and Spot. You can also update your existing Auto Scaling Groups to take advantage of this new feature.

The Auto Scaling Groups that you create are optimized anew each time a scale-out or scale-in event takes place, always seeking the lowest overall cost while meeting the other requirements set by your configuration. You can modify the configuration as newer instance types become available, allowing you to create a group that evolves in step with EC2.

Creating an Auto Scaling Group
I can create an Auto Scaling Group from the EC2 Console, CLI, or API. The first step is to make sure that I have a suitable Launch Template (it should not specify the use of Spot Instances). Here’s mine:

Then I navigate to my Auto Scaling Groups and click Create Auto Scaling group:

I click Launch Template, select my ProdWebServer template, and click Next Step to proceed:

I name my group and select Combine purchase models and instances to unlock the new functionality:

Now I select the instance types that I want to use. The list is prioritized: instances at the top of the list will be used in preference to those lower down when On-Demand instances are launched. My app will run fine on M4 or M5 instances with 2 or more vCPUs:

I can accept the default settings for my group’s composition or I can set them myself by unchecking Use default:

Here’s what I can do:

Maximum Spot Price – Sets the maximum Spot price that I want to pay. The default setting caps this bid at the On-Demand price.

Spot Allocation Strategy – Controls the amount of per-AZ diversity for the Spot Instances. A larger number of Spot pools adds some flexibility at times when a particular instance type is in high demand within an AZ.

Optional On-Demand Base – Controls how much of the initial capacity is made up of On-Demand Instances. Keeping this set to 0 indicates that I prefer to launch On-Demand Instances as a percentage of the total group capacity that is running at any given time.

On-Demand Percentage Above Base – Controls the percentage of the add-on to the initial group that is made up of On-Demand Instances versus the percentage that is made up of Spot Instances.
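The console settings above map to a single create-auto-scaling-group call. Here’s a hedged CLI sketch; the ProdWebServer template name comes from my walkthrough, but the group name, sizes, subnets, and distribution values are placeholders rather than my exact settings:

$ aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name ProdWebServerGroup \
    --min-size 4 --max-size 12 --desired-capacity 4 \
    --vpc-zone-identifier "subnet-11111111,subnet-22222222" \
    --mixed-instances-policy '{
      "LaunchTemplate": {
        "LaunchTemplateSpecification": {"LaunchTemplateName": "ProdWebServer", "Version": "$Latest"},
        "Overrides": [
          {"InstanceType": "m5.large"},
          {"InstanceType": "m4.large"},
          {"InstanceType": "m5.xlarge"},
          {"InstanceType": "m4.xlarge"}
        ]
      },
      "InstancesDistribution": {
        "OnDemandBaseCapacity": 0,
        "OnDemandPercentageAboveBaseCapacity": 50,
        "SpotAllocationStrategy": "lowest-price",
        "SpotInstancePools": 2
      }
    }'

The Overrides list is in priority order, matching the prioritized list in the console.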

As you can see, I have full control over how my group is built. I leave them all as-is, set my group to start with 4 instances, choose my VPC subnets, and click Next to set up my scaling policies, as usual:

I disable scale-in for demo purposes (you don’t need to do this for your group):

I click past the Configure Notifications, and indicate that I want to tag my group and the EC2 instances in it:

Then I review my settings and click Create Auto Scaling Group to move ahead:

My initial group of four instances is ready to go within minutes:

I can filter by tag in the EC2 Console and display the Lifecycle column to see the mix of On-Demand and Spot Instances:

I can modify my Auto Scaling Group, reducing the On-Demand Percentage to 20% and doubling the Desired Capacity (this is my demo-mode way of showing you what happens when the group scales out):

The changes take effect within minutes; new Spot Instances are launched, some of the existing On-Demand Instances are terminated, and the composition of my group reflects the new settings:

Here are a couple of things to keep in mind when you start to use this cool new feature:

Reserved Instances – We plan to add support for the preferential use of Reserved Instances in the near future. Today, if you own Reserved Instances, specify their instance types as early as possible in the list I showed you earlier. Your discounts will apply to any On-Demand instances that match available Reserved Instances.

Weight – All instance types have the same weight; we plan to give you the ability to specify weights in the near future. This will allow you to specify custom capacity units for each instance using either memory or vCPUs, and to specify the overall desired capacity in the same units.

Cost – The feature itself is available to you at no charge. If you switch part or all of your Auto Scaling Groups over to Spot Instances, you may be able to save up to 90% when compared to On-Demand Instances.

ECS and EKS – If you are running Amazon ECS or Amazon Elastic Container Service for Kubernetes on a cluster that makes use of an Auto Scaling Group, you can update the group to make use of multiple instance types and purchase options.

Available Now
This feature is available now and you can start using it today in all commercial AWS regions!

Jeff;

New – CloudFormation Drift Detection

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-cloudformation-drift-detection/

AWS CloudFormation supports you in your efforts to implement Infrastructure as Code (IaC). You can use a template to define the desired AWS resource configuration, and then use it to launch a CloudFormation stack. The stack contains the set of resources defined in the template, configured as specified. When you need to make a change to the configuration, you update the template and use a CloudFormation Change Set to apply the change. Your template completely and precisely specifies your infrastructure and you can rest assured that you can use it to create a fresh set of resources at any time.

That’s the ideal case! In reality, many organizations are still working to fully implement IaC. They are educating their staff and adjusting their processes, both of which take some time. During this transition period, they sometimes end up making direct changes to the AWS resources (and their properties) without updating the template. They might make a quick out-of-band fix to change an EC2 instance type, fix an Auto Scaling parameter, or update an IAM permission. These unmanaged configuration changes become problematic when it comes time to start fresh. The configuration of the running stack has drifted away from the template and is no longer properly described by it. In severe cases, the change can even thwart attempts to update or delete the stack.

New Drift Detection
Today we are announcing a powerful new drift detection feature that was designed to address the situation that I described above. After you create a stack from a template, you can detect drift from the Console, CLI, or from your own code. You can detect drift on an entire stack or on a particular resource, and see the results in just a few minutes. You then have the information necessary to update the template or to bring the resource back into compliance, as appropriate.

When you initiate a check for drift detection, CloudFormation compares the current stack configuration to the one specified in the template that was used to create or update the stack and reports on any differences, providing you with detailed information on each one.

We are launching with support for a core set of services, resources, and properties, with plans to add more over time. The initial list of resources spans API Gateway, Auto Scaling, CloudTrail, CloudWatch Events, CloudWatch Logs, DynamoDB, Amazon EC2, Elastic Load Balancing, IAM, AWS IoT, Lambda, Amazon RDS, Route 53, Amazon S3, Amazon SNS, Amazon SQS, and more.

You can perform drift detection on stacks that are in the CREATE_COMPLETE, UPDATE_COMPLETE, UPDATE_ROLLBACK_COMPLETE, and UPDATE_ROLLBACK_FAILED states. Drift detection does not extend to stacks that are nested within the one you check; you can run the check on each nested stack individually instead.
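If you would rather script the check than click through the console, a rough CLI sketch looks like this (the stack name is a placeholder):

# Kick off a drift detection run; the call returns a StackDriftDetectionId
$ aws cloudformation detect-stack-drift --stack-name my-efs-stack

# Check whether the run has finished and whether the stack is IN_SYNC or DRIFTED
$ aws cloudformation describe-stack-drift-detection-status \
    --stack-drift-detection-id <detection-id-from-previous-call>

# List per-resource drift details for the stack
$ aws cloudformation describe-stack-resource-drifts --stack-name my-efs-stack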

Drift Detection in Action
I tested this feature on the simple stack that I used when I wrote about Provisioned Throughput for Amazon EFS. I simply select the stack and choose Detect drift from the Action menu:

I confirm my intent and click Yes, detect:

Drift detection starts right away; I can Close the window while it runs:

After it completes I can see that the Drift status of my stack is IN_SYNC:

I can also see the drift status of each checked resource by taking a look at the Resources tab:

Now, I will create a fake change by editing the IAM role, adding a new policy:

I detect drift a second time, and this time I find (no surprise) that my stack has drifted:

I click View details, and I inspect the Resource drift status to learn more:

I can expand the status line for the modified resource to learn more about the drift:

Available Now
This feature is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and South America (São Paulo) Regions. As I noted above, we are launching with support for a strong, initial set of resources, and plan to add many more in the months to come.

Jeff;

 

In the Works – AWS Region in Milan, Italy

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-milan-italy/

Late last month I announced that we are working on an AWS Region in South Africa. Today I would like to let you know that we are also building an AWS Region in Italy and plan to open it up in early 2020.

Milan in 2020
The upcoming Europe (Milan) Region will have three Availability Zones and will be our sixth region in Europe, joining the existing regions in France, Germany, Ireland, the UK, and the new region in Sweden that is set to launch later this year. We currently have 57 Availability Zones in 19 geographic regions worldwide, and another 15 Availability Zones across five regions in the works for launch between now and the first half of 2020 (check out the AWS Global Infrastructure page for more info). Like all of our existing regions, this one is designed and built to meet the most rigorous compliance standards and to provide the highest level of security for AWS customers.

AWS in Italy
AWS customers in Italy have been using our existing regions for more than a decade. Hot startups, enterprises, and public sector organizations in Italy are all running their mission-critical applications on the AWS Cloud. Here’s a tasting menu to give you an idea of what’s already happening:

Ferrero is one of the world’s largest chocolate manufacturers (including the Pocket Coffee that powers my blogging). They have been using AWS since 2010, and use a template-driven model that lets them share features and functions across 250 web sites for 80 countries, giving them the ability to handle traffic surges while reducing costs by 30%.

Mediaset runs multiple broadcast networks and digital channels, as well as a pay-TV service, advertising agencies, and Italian film studio Medusa. The Mediaset Premium Online soccer service now attracts over 600,000 unique monthly visitors, doubling in size since it was launched last year. AWS allows them to meet this demand without adding more hardware, while also scaling up and down on an as-needed basis.

Eataly is the largest online marketplace for Italian food and wine products. After moving from physical stores to the web, they decided to use AWS to ensure scalability. Today, they use a wide range of AWS services, deliver 1.5 to 3 million page views daily, and handle holiday peaks ranging from 100 to 1000 orders per day.

Vodafone Italy has more than 30 million customers for their mobile services. They used AWS to power a new pay-as-you-go service to allow mobile customers to add credit to their accounts, building the service from scratch to be PCI DSS Level 1 compliant and to scale rapidly, all in just 3 months, and with a 30% reduction in capital expenses.

The European Space Agency (ESA) Centre for Earth Observation in Frascati, Italy runs the Data User Element (DUE) program. Although much of the work takes place in Earth-orbiting satellites, the program also takes advantage of EC2 and S3, storing up to 30 terabytes of images and observations at peak times and making them available to a 50,000-person user community.

The new region will give these customers (and many others) a new option with even lower latency for their local customers, and will also open the door to applications that must comply with strict data sovereignty requirements.

Investing in Italy’s Future
The upcoming Europe (Milan) Region is just one step along a long path! Back in 2012 we launched the first Point of Presence (PoP) in Milan and now use it to deliver Amazon CloudFront, Amazon Route 53, AWS Shield, and AWS WAF services to Italy, sharing the load with a PoP in Palermo that we launched in 2017. In 2016 we acquired Asti-based NICE Software (read Amazon Web Services to Acquire NICE).

We are also working to help prepare developers in Italy for the digital future, with programs like AWS Educate, AWS Academy, and AWS Activate. Dozens of universities and business schools across Italy are already participating in our educational programs, as are a plethora of startups and accelerators.

Stay Tuned
I’ll be sure to share additional news about this and other upcoming AWS regions as soon as I have it, so stay tuned!

Jeff;

 

AWS GovCloud (US-East) Now Open

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-govcloud-us-east-now-open/

Last year I told you that we were working on AWS GovCloud (US-East), an eastern US companion to the existing AWS GovCloud (US-West) Region that we launched in 2011. The new region is now open and ready to serve the needs of federal, state, and local government agencies, the IT contractors that serve them, and customers with regulated workloads. It offers added redundancy, data durability, and resiliency, and also provides additional options for disaster recovery. This is an isolated AWS region, subject to FedRAMP High and Moderate baselines, operated by US citizens on US soil. It is accessible only to vetted US entities and root account holders, who must confirm that they are US Persons (citizens or permanent residents) in order to gain access. You can read Achieve FedRAMP High Compliance in the AWS GovCloud (US) Region to learn more.

AWS GovCloud (US) gives vetted government customers and regulated industry customers and their partners the flexibility to architect secure cloud solutions that comply with: the FedRAMP High baseline, the DOJ’s Criminal Justice Information Systems (CJIS) Security Policy, U.S. International Traffic in Arms Regulations (ITAR), Export Administration Regulations (EAR), Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG) for Impact Levels 2, 4 and 5, FIPS 140-2, IRS-1075, and other compliance regimes.

Lots of Services
Applications running in this region can make use of Auto Scaling (EC2 and Application), AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, AWS Elastic Beanstalk, Amazon Elastic Block Store (EBS), Amazon ElastiCache, Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon ECS, Elastic Load Balancing (Application, Network, and Classic), Amazon EMR, Amazon Elasticsearch Service, Amazon Glacier, AWS Identity and Access Management (IAM) (including Access Key Last Used), Amazon Inspector, AWS Key Management Service (KMS), Amazon Kinesis Data Streams, AWS Lambda, Amazon Aurora (MySQL and PostgreSQL), Amazon Redshift, Amazon Relational Database Service (RDS), AWS Server Migration Service, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon EC2 Systems Manager (SSM), AWS Trusted Advisor, Amazon Virtual Private Cloud, VM Import, VPN, Amazon API Gateway, AWS Snowball, AWS Snowball Edge, AWS Server Migration Service, and AWS Step Functions.

Crossing the Regions
Many of the cool cross-region features of AWS can be used to span AWS GovCloud (US-East) and AWS GovCloud (US-West) in order to reduce latency or to increase workload resiliency & availability for mission-critical systems. Here’s what you can do:

We are working to add support for DynamoDB Global Tables and Inter-Region VPC Peering.

AWS GovCloud (US) in Action
Our customers are already hosting many different types of applications in AWS GovCloud (US-West); here’s a small sample:

Enterprise Apps – Oracle, SAP, and Microsoft workloads that were traditionally provisioned for peak demand are now being run on scalable, cloud-based infrastructure.

HPC / Big Data – Organizations with large data sets are spinning up HPC clusters in the cloud in order to extract intelligence and to better serve their constituents.

Storage / DR – The ability to tap into vast amounts of cost-effective, highly durable cloud storage managed by US Persons supports a variety of DR approaches, from simple backups to hot standby. The addition of a second region allows you to make use of the cross-region features that I mentioned earlier.

Learn More
To learn more, check out the AWS GovCloud (US) page. If you are looking forward to making use of AWS GovCloud (US) and need a partner to help you to make it happen, take a look at the list of AWS GovCloud (US) Partners.

Jeff;

New – Redis 5.0 Compatibility for Amazon ElastiCache

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-redis-5-0-compatibility-for-amazon-elasticache/

Earlier this year we announced Redis 4.0 compatibility for Amazon ElastiCache. In that post, Randall explained how ElastiCache for Redis clusters can scale to terabytes of memory and millions of reads and writes per second! Other recent improvements to Amazon ElastiCache for Redis include:

Read Replica Scaling – Support for adding or removing read replica nodes to a Redis Cluster, along with a reduction of up to 40% in cluster creation time.

PCI DSS Compliance – Certification as Payment Card Industry Data Security Standard (PCI DSS) compliant. This allows you to use ElastiCache for Redis (engine versions 4.0.10 and higher) to build low-latency, high-throughput applications that process sensitive payment card data.

FedRAMP Authorized and Available in AWS GovCloud (US) – United States government customers and their partners can use ElastiCache for Redis to process and store their FedRAMP systems and data for mission-critical, high-impact workloads in the AWS GovCloud (US) Region, and at moderate impact level in the other AWS Regions in the US. To learn more, read the ElastiCache for Redis Compliance documentation.

In-Place Upgrades – Support for upgrading a Redis Cluster to a newer engine version in-place and while maintaining availability except for a failover period measured in seconds.

New Instance Types – Support for the use of M5 and R5 instances, with significant performance improvements.

5.0 Compatibility
Today I am happy to announce Redis 5.0 compatibility for Amazon ElastiCache for Redis. This version of Redis includes support for a new Streams data type and new commands (ZPOPMIN and ZPOPMAX) for use on Sorted Sets, and also does a better job of defragmenting memory. To learn more, read What’s New in Redis 5?

As usual, you can use the ElastiCache Console, CLI, APIs, or a CloudFormation template to get started. I’ll use the Console, with the following settings:

My cluster is up and running within minutes:

I can also use the in-place upgrade feature that I mentioned earlier on my existing 4.0-compatible cluster. I select the cluster, click Modify, and the 5.0-compatible engine is already selected. I confirm the other settings and click Modify to proceed:
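The CLI route is similar. Here’s a rough sketch rather than my exact settings (the group IDs, node type, and the precise 5.0.x engine version string are assumptions on my part):

# Create a new 5.0-compatible replication group with two nodes and automatic failover
$ aws elasticache create-replication-group \
    --replication-group-id r5cluster \
    --replication-group-description "Redis 5.0 demo" \
    --engine redis --engine-version 5.0.0 \
    --cache-node-type cache.r5.large \
    --num-cache-clusters 2 \
    --automatic-failover-enabled

# Upgrade an existing 4.0-compatible replication group in place
$ aws elasticache modify-replication-group \
    --replication-group-id mycluster \
    --engine-version 5.0.0 \
    --apply-immediately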

Streams in Action
The new Stream data type is very powerful! Each Stream has a name, and can be created by simply referencing it as part of an XADD command. Let’s say that I have a long-running process that generates files that need to be scanned and validated. For testing purposes, I can add a bunch of files to a stream named Files from the shell like this:

$  find /usr -name 'a*' -exec redis-cli -h r5cluster.seutl3.ng.0001.use1.cache.amazonaws.com \
    XADD Files \* f {} \;

I can retrieve values starting from the beginning of the stream using the command XREAD BLOCK 1000 STREAMS Files 0:

I can also read the values that are after a given ID:

In most cases, I would be doing the reads and the writes from code rather than from the command line, of course. This is a very simple example of the power of Redis 5 Streams and I am sure that you can do better!
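In case the output screenshots don’t come through, here is a minimal redis-cli sketch of the flow above (the endpoint is the one from my XADD example; the entry ID in the last command is a placeholder):

# Append an entry to the Files stream; XADD creates the stream if it does not exist
$ redis-cli -h r5cluster.seutl3.ng.0001.use1.cache.amazonaws.com XADD Files '*' f /usr/bin/awk

# Read from the beginning of the stream
$ redis-cli -h r5cluster.seutl3.ng.0001.use1.cache.amazonaws.com XREAD BLOCK 1000 STREAMS Files 0

# Read only the entries added after a given ID
$ redis-cli -h r5cluster.seutl3.ng.0001.use1.cache.amazonaws.com XREAD BLOCK 1000 STREAMS Files 1540000000000-0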

Available Now
You can upgrade existing 4.0-compatible clusters and create new 5.0-compatible clusters today in all commercial AWS regions.

Jeff;

New Lower-Cost, AMD-Powered M5a and R5a EC2 Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-lower-cost-amd-powered-ec2-instances/

From the start, AWS has focused on choice and economy. Driven by a never-ending torrent of customer requests that power our well-known Virtuous Cycle, I think we have delivered on both over the years:

Choice – AWS gives you choices in a wide range of dimensions including locations (18 operational geographic regions, 4 more in the works, and 1 local region), compute models (instances, containers, and serverless), EC2 instance types, relational and NoSQL database choices, development languages, and pricing/purchase models.

Economy – We have reduced prices 67 times so far, and work non-stop to drive down costs and to make AWS an increasingly better value over time. We study usage patterns, identify areas for innovation and improvement, and deploy updates across the entire AWS Cloud on a very regular and frequent basis.

Today I would like to tell you about our latest development, one that provides you with a choice of EC2 instances that are more economical than ever!

Powered by AMD
The newest EC2 instances are powered by custom AMD EPYC processors running at 2.5 GHz and are priced 10% lower than comparable instances. They are designed to be used for workloads that don’t use all of the compute power available to them, and provide you with a new opportunity to optimize your instance mix based on cost and performance.

Here’s what we are launching:

General Purpose – M5a instances are designed for general purpose workloads: web servers, app servers, dev/test environments, and gaming. The M5a instances are available in 6 sizes.

Memory Optimized – R5a instances are designed for memory-intensive workloads: data mining, in-memory analytics, caching, and so forth. The R5a instances are available in 6 sizes, with lower per-GiB memory pricing in comparison to the R5 instances.

The new instances are built on the AWS Nitro System. They can make use of existing HVM AMIs (as is the case with all other recent EC2 instance types, the AMI must include the ENA and NVMe drivers), and can be used in Cluster Placement Groups.

These new instances should be a great fit for customers who are looking to further cost-optimize their Amazon EC2 compute environment. As always, we recommend that you measure performance and cost on your own workloads when choosing your instance types.
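Launching one of the new instances works just like any other instance type. Here’s a hedged CLI sketch (the AMI, key pair, and security group below are placeholders):

# Launch a single m5a.xlarge instance from an ENA/NVMe-capable HVM AMI
$ aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5a.xlarge \
    --count 1 \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0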

General Purpose Instances
Here are the specs for the M5a instances:

Instance Name    vCPUs    RAM        EBS-Optimized Bandwidth    Network Bandwidth
m5a.large        2        8 GiB      Up to 2.120 Gbps           Up to 10 Gbps
m5a.xlarge       4        16 GiB     Up to 2.120 Gbps           Up to 10 Gbps
m5a.2xlarge      8        32 GiB     Up to 2.120 Gbps           Up to 10 Gbps
m5a.4xlarge      16       64 GiB     2.120 Gbps                 Up to 10 Gbps
m5a.12xlarge     48       192 GiB    5 Gbps                     10 Gbps
m5a.24xlarge     96       384 GiB    10 Gbps                    20 Gbps

Memory Optimized Instances
Here are the specs for the R5a instances:

Instance Name    vCPUs    RAM        EBS-Optimized Bandwidth    Network Bandwidth
r5a.large        2        16 GiB     Up to 2.120 Gbps           Up to 10 Gbps
r5a.xlarge       4        32 GiB     Up to 2.120 Gbps           Up to 10 Gbps
r5a.2xlarge      8        64 GiB     Up to 2.120 Gbps           Up to 10 Gbps
r5a.4xlarge      16       128 GiB    2.120 Gbps                 Up to 10 Gbps
r5a.12xlarge     48       384 GiB    5 Gbps                     10 Gbps
r5a.24xlarge     96       768 GiB    10 Gbps                    20 Gbps

Available Now
These instances are available now and you can start using them today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore) Regions in On-Demand, Spot, and Reserved Instance form. Pricing, as I noted earlier, is 10% lower than the equivalent existing instances. To learn more, visit our new AMD Instances page.

Jeff;

PS – We are also working on T3a instances; stay tuned for more info!

 

In the Works – AWS Region in South Africa

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-south-africa/

Last year we launched new AWS Regions in France and China (Ningxia), and announced that we are working on regions in Bahrain, Hong Kong SAR, Sweden, and a second GovCloud Region in the United States.

South Africa in Early 2020
Today, I am happy to announce that we will be opening an AWS Region in South Africa in the first half of 2020. The new Region will be based in Cape Town, will be comprised of three Availability Zones, and will give AWS customers and partners the ability to run their workloads and store their data in South Africa. The addition of the AWS Africa (Cape Town) Region will also enable organizations to provide lower latency to end users across Sub-Saharan Africa and will enable more African organizations to leverage advanced technologies such as Artificial Intelligence, Machine Learning, Internet of Things (IoT), mobile services, and more to drive innovation.

AWS customers are already making use of 55 Availability Zones across 19 infrastructure regions worldwide. Today’s announcement brings the total number of global regions (operational and in the works) up to 23.

A Growing Presence
The new Region is the latest of a series of investments in South Africa, and is part of our commitment to support South Africa’s transformation. In 2004, Amazon opened a Development Center in Cape Town that focuses on building pioneering networking technologies, next generation software for customer support, and the technology behind Amazon EC2. AWS has also added a number of teams including account managers, customer service reps, partner managers, solutions architects, and more, helping customers of all sizes as they move to the cloud.

In 2015, we continued our expansion, opening an office in Johannesburg, and in 2017 we brought the Amazon Global Network to Africa through AWS Direct Connect. Earlier this year we launched infrastructure on the African continent, introducing Amazon CloudFront to South Africa with two new edge locations in Cape Town and Johannesburg. We also support the growth of technology education with AWS Academy and AWS Educate, and have supported the growth of new businesses through AWS Activate in the country for many years.

The addition of the AWS Region in South Africa will help builders and entrepreneurs in enterprises, the public sector, and startups across Sub-Saharan Africa to innovate and grow their organizations.

Talk to Us
As always, we are looking forward to serving new and existing customers in South Africa and working with partners across the region. Of course, the new Region will also be open to existing AWS customers who would like to serve users in South Africa and across the African continent.

To learn more about the AWS South Africa Region feel free to contact our team at [email protected]. If you are interested in joining the team and would like to learn more about AWS positions in South Africa, take a look at the Amazon Jobs site.

Jeff;

Check it Out – New AWS Pricing Calculator for EC2 and EBS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/check-it-out-new-aws-pricing-calculator-for-ec2-and-ebs/

The blog post that we published over a decade ago to launch the Simple Monthly Calculator still shows up on our internal top-10 lists from time to time! Since that post was published, we have extended, redesigned, and even rebuilt the calculator a time or two.

New Calculator
Starting with a blank screen, an empty code repo, and plenty of customer feedback, we are building a brand-new AWS Pricing Calculator. The new calculator is designed to help you estimate and understand your eventual AWS costs. We did our best to avoid excessive jargon and to make the calculations obvious, transparent, and accessible. You can see the options that are available to you, explore the associated costs, and make high-quality data-driven decisions.

We’re starting out with support for EC2 instances, EBS volumes, and a very wide variety of purchasing models, with plans to add support for more services as quickly as possible.

A Quick Tour
The new calculator lives at https://calculator.aws. Each estimate consists of one or more groups and the first one is created automatically:

Each group has a name, and has pricing for services in a particular AWS region. I click Edit group to change the name and pick a region, and click Apply:

Back at the main page of the calculator, I click Add service and choose to configure some EC2 instances. The group can contain multiple types and configurations of instances; I click Configure to move ahead:

At this point I can make a Quick estimate (the default), or supply more details as part of an Advanced estimate. I’ll start with a Quick estimate:

Here are a couple of things to keep in mind when you make a quick estimate:

Instance Type – I have two options for choosing EC2 instance types; I can enter my resource requirements (vCPU count, memory size, and GPU count) and have the calculator choose the option with the lowest price, or I can pick an EC2 instance type by name.

Pricing Strategy – I can choose to use On-Demand Instances, Convertible Reserved Instances, or Standard Reserved Instances, and can choose payment terms and options for RI’s.

EBS Volumes – I can choose the type and size of an EBS volume for the instance. Right now, the calculator allows you to associate one volume with each EC2 instance. If you need more than one, specify the total amount of storage you need across all volumes.

Details – I can expand the Show calculation section to see the math:

After I have made my choices, I click Add to my estimate to move ahead. My selections, along with their costs (annual, upfront, and monthly), are displayed:

I can go back and add another service, or create another group. I’ll add another EC2 instance, using an Advanced estimate this time around. Here’s where I start:

I have very fine-grained control over each aspect of my estimate. For example, I can characterize my workload in great detail. I click on Workload, and have the ability to select the graph that best represents my monthly workload:

I can even model workloads that have two or more independent daily (in this case) spike patterns. As I refine my model, the calculator figures out the most economical combination of On-Demand and Reserved Instances, and shows me the results:

The calculations are driven by the selection in the Pricing strategy. The default value, and the one that I used for the previous screen shot, is Cost optimized. I have other choices as well:

I can also model my data transfer in, out, and to other AWS regions:

Once I am happy with the results I click Add to my estimate, and take a look at my selections and their prices:

I can click Export to capture my estimate in spreadsheet form:

Here’s the data (I hid a few columns for clarity):

As you can see, the new calculator will quickly become a useful part of your planning and decision-making process.

One important thing to keep in mind: your estimates are stored in state that is local to the browser tab, and will be lost if you close the tab. The team is already hard at work on features that will allow you to save and even share your estimates, for launch in early 2019.

Stay Tuned
We will be adding more services and more features to the calculator in the months to come, and I’ll share some updates with you from time to time, either in this blog or via Twitter. If you have ideas, complaints, or other feedback, don’t hesitate to click on the Feedback link at the top of the page.

Jeff;

 

Amazon RDS Update – Console Update, RDS Recommendations, Performance Insights, M5 Instances, MySQL 8, MariaDB 10.3, and More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-rds-update-console-update-rds-recommendations-performance-insights-m5-instances-mysql-8-mariadb-10-3-and-more/

It is time for a quick Amazon RDS update. I’ve got lots of news to share:

Console Update – The RDS Console has a fresh, new look.

RDS Recommendations – You now get recommendations that will help you to configure your database instances per our best practices.

Performance Insights for MySQL – You can peer deep inside of MySQL and understand more about how your queries are processed.

M5 Instances – You can now use MySQL and MariaDB on M5 instances.

MySQL 8.0 – You can now use MySQL 8.0 in production form.

MariaDB 10.3 – You can now use MariaDB 10.3 in production form.

Let’s take a closer look…

Console Update
The RDS Console took on a fresh, new look earlier this year. We made it available to you in preview form during development, and it is now the standard experience for all AWS users. You can see an overview of your RDS resources at a glance, create a new database, access documentation, and more, all from the home page:

You also get direct access to Performance Insights and to the new RDS Recommendations.

RDS Recommendations
We want to make it easy for you to take our best practices into account when you configure your RDS database instances, even as those practices improve. The new RDS Recommendations feature will periodically check your configuration, usage, and performance data and display recommended changes and improvements, focusing on performance, stability, and security. It works with all of the database engines, and is very easy to use. Open the RDS Console and click Recommendations to get started:

I can see all of the recommendations at a glance:

I can open a recommendation to learn more:

I have four options that I can take with respect to this recommendation:

Fix Immediately – I select some database instances and click Apply now.

Fix Later – I select some database instances and click Schedule for the next maintenance window.

Dismiss – I select some database instances and click Dismiss to indicate that I do not want to make any changes, and to acknowledge that I have seen the recommendation.

Defer – If I do nothing, the recommendations remain active and I can revisit them at another time.

Other recommendations may include other options, or might require me to take some other actions. For example, the procedure for enabling encryption depends on the database engine:

RDS Recommendations are available today at no charge in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), and Asia Pacific (Singapore) Regions. We plan to add additional recommendations over time, and also expect to make the recommendations available via an API.

Performance Insights for MySQL
I can now peek inside of MySQL to see which queries, hosts, and users are consuming the most time, and why:

You can identify expensive SQL queries and other bottlenecks with a couple of clicks, looking back across the timeframe of your choice: an hour, a day, a week, or even longer.

This feature was first made available for PostgreSQL (both RDS and Aurora) and is now available for MySQL (again, both RDS and Aurora). To learn more, read Using Amazon RDS Performance Insights.
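Performance Insights is enabled per DB instance. A hedged CLI sketch of turning it on for an existing MySQL instance looks like this (the instance identifier is a placeholder, and the retention period shown is the 7-day default):

# Enable Performance Insights on an existing DB instance
$ aws rds modify-db-instance \
    --db-instance-identifier my-mysql-instance \
    --enable-performance-insights \
    --performance-insights-retention-period 7 \
    --apply-immediately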

M5 Instances
The M5 instances deliver improved price/performance compared to M4 instances, and offer up to 10 Gbps of dedicated network bandwidth for database storage.

You can now launch M5 instances (including the new high-end m5.24xlarge) when using RDS for MySQL and RDS for MariaDB. You can scale up to these new instance types by modifying your existing DB instances:
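The console modification above has a one-line CLI equivalent; here’s a sketch (the instance identifier and target class are placeholders):

# Scale an existing DB instance up to an M5 instance class
$ aws rds modify-db-instance \
    --db-instance-identifier my-mysql-instance \
    --db-instance-class db.m5.2xlarge \
    --apply-immediately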

MySQL 8
Version 8 of MySQL is now available on Amazon RDS. This version of MySQL offers better InnoDB performance, JSON improvements, better GIS support (new spatial datatypes, indexes, and functions), common table expressions to reduce query complexity, window functions, atomic DDLs for faster online schema modification, and much more (read the documentation to learn more).

MariaDB 10.3
Version 10.3 of MariaDB is now available on Amazon RDS. This version of MariaDB includes a new temporal data processing feature, improved Oracle compatibility, invisible columns, performance enhancements including instant ADD COLUMN operations & fast-fail DDL operations, and much more (read the documentation for a detailed list).

Available Now
All of the new features, engines, and instance types are available now and you can start using them today!

Jeff;

 

 

New – Managed Databases for Amazon Lightsail

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-managed-databases-for-amazon-lightsail/

Amazon Lightsail makes it easy for you to get started with AWS. You choose the operating system (and optional application) that you want to run, pick an instance plan, and create an instance, all in a matter of minutes. Lightsail offers low, predictable pricing, with instance plans that include compute power, storage, and data transfer:

Managed Databases
Today we are making Lightsail even more useful by giving you the ability to create a managed database with a couple of clicks. This has been one of our top customer requests and I am happy to be able to share this news.

This feature is going to be of interest to a very wide range of current and future Lightsail users, including students, independent developers, entrepreneurs, and IT managers. We’ve addressed the most common and complex issues that arise when setting up and running a database. As you will soon see, we have simplified and fine-tuned the process of choosing, launching, securing, accessing, monitoring, and maintaining a database!

Each Lightsail database bundle has a fixed, monthly price that includes the database instance, a generous amount of SSD-backed storage, a terabyte or more of data transfer to the Internet and other AWS regions, and automatic backups that give you point-in-time recovery for a 7-day period. You can also create manual database snapshots that are billed separately.

Creating a Managed Database
Let’s walk through the process of creating a managed database and loading an existing MySQL backup into it. I log in to the Lightsail Console and click Databases to get started. Then I click Create database to move forward:

I can see and edit all of the options at a glance. I choose a location, a database engine and version, and a plan, enter a name, and click Create database (all of these options have good defaults; a single click often suffices):

We are launching with support for MySQL 5.6 and 5.7, and will add support for PostgreSQL 9.6 and 10 very soon. The Standard database plan creates a database in one Availability Zone with no redundancy; the High Availability plan also creates a presence in a second AZ, and is recommended for production use.

Database creation takes just a few minutes, the status turns to Available, and my database is ready to use:

I click on Database-Oregon-1, and I can see the connection details, and have access to other management information & tools:

I’m ready to connect! I create an SSH connection to my Lightsail instance, ensure that the mysql package is installed, and connect using the information above (read Connecting to Your MySQL Database to learn more):
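In case the screenshot doesn’t come through, the connection itself is just the standard MySQL client pointed at the endpoint shown on the database page. A minimal sketch (the package name assumes Ubuntu/Debian, and the endpoint is a placeholder):

# Install the MySQL client on the Lightsail instance
$ sudo apt-get install -y mysql-client

# Connect using the endpoint, user name, and password from the Connect tab
$ mysql -h ls-mydatabase.abcdefghijkl.us-west-2.rds.amazonaws.com -u dbmasteruser -p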

Now I want to import some existing data into my database. Lightsail lets me enable Data import mode in order to defer any backup or maintenance operations:

Enabling data import mode deletes any existing automatic snapshots; you may want to take a manual snapshot before starting your import if you are importing fresh data into an existing database.

I have a large (13 GB), ancient (2013-era) MySQL backup from a long-dead personal project; I download it from S3, uncompress it, and import it:
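Here’s roughly what that looks like from the shell (the bucket, file, and database names are placeholders):

# Pull the backup down from S3 and uncompress it
$ aws s3 cp s3://my-backup-bucket/old-project-2013.sql.gz .
$ gunzip old-project-2013.sql.gz

# Import it into the managed database (enter the master password when prompted)
$ mysql -h ls-mydatabase.abcdefghijkl.us-west-2.rds.amazonaws.com -u dbmasteruser -p old_project < old-project-2013.sql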

I can watch the metrics while the import is underway:

After the import is complete I disable data import mode, and I can run queries against my tables:

To learn more, read Importing Data into Your Database.

Lightsail manages all routine database operations. If I make a mistake and mess up my data, I can use the Emergency Restore to create a fresh database instance from an earlier point in time:

I can rewind by up to 7 days, limited to when I last disabled data import mode.

I can also take snapshots, and use them later to create a fresh database instance:

Things to Know
Here are a couple of things to keep in mind when you use this new feature:

Engine Versions – We plan to support the two latest versions of MySQL, and will do the same for other database engines as we make them available.

High Availability – As is always the case for production AWS systems, you should use the High Availability option in order to maintain a database footprint that spans two Zones. You can switch between Standard and High Availability using snapshots.

Scaling Storage – You can scale to a larger database instance by creating and then restoring a snapshot.

Data Transfer – Data transfer to and from Lightsail instances in the same AWS Region does not count against the usage that is included in your plan.

Amazon RDS – This feature shares core technology with Amazon RDS, and benefits from our operational experience with that family of services.

Available Now
Managed databases are available today in all AWS Regions where Lightsail is available:

Jeff;

re:Invent 2018 – 55 Days to Go….

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/reinvent-2018-55-days-to-go/

As I write this, there are just 55 calendar days until AWS re:Invent 2018. My colleagues and I are working flat-out to bring you the best possible learning experience and I want to give you a quick update on a couple of things…

Transportation – Customer Obsession is the first Amazon Leadership Principle and we take your feedback seriously! The re:Invent 2018 campus is even bigger this year, and our transportation system has been tuned and scaled to match. This includes direct shuttle routes from venue to venue so that you don’t spend time waiting at other venues, access to real-time transportation info from within the re:Invent app, and on-site signage. The mobile app will even help you to navigate to your sessions while letting you know if you are on time. If you are feeling more independent and don’t want to ride the shuttles, we’ll have partnerships with ridesharing companies including Lyft and Uber. Visit the re:Invent Transportation page to learn more about our transportation plans, routes, and options.

Reserved Seating – In order to give you as many opportunities to see the technical content that matters the most to you, we are bringing back reserved seating. You will be able to make reservations starting at 10 AM PT on Thursday, October 11, so mark your calendars. Reserving a seat is the best way to ensure that you will get a seat in your favorite session without waiting in a long line, so be sure to arrive at least 10 minutes before the scheduled start. As I have mentioned before, we have already scheduled repeats of the most popular sessions, and made them available for reservation in the Session Catalog. Repeats will take place all week in all re:Invent venues, along with overflow sessions in our Content Hubs (centralized overflow rooms in every venue). We will also stream live content to the Content Hubs as the sessions fill up.

Trivia Night – Please join me at 7:30 PM on Wednesday in the Venetian Theatre for the first-ever Camp re:Invent Trivia Night. Come and test your re:Invent and AWS knowledge to see if you and your team can beat me at trivia (that should not be too difficult). The last person standing gets bragging rights and an awesome prize.

How to re:Invent – Whether you are a first-time attendee or a veteran re:Invent attendee, please take the time to watch our How to re:Invent videos. We want to make sure that you arrive fully prepared, ready to learn about the latest and greatest AWS services, meet your peers and members of the AWS teams, and to walk away with the knowledge and the skills that will help you to succeed in your career.

See you in Vegas!

Jeff;

Saving Koalas Using Genomics Research and Cloud Computing

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/saving-koalas-using-genomics-research-and-cloud-computing/

Today is Save the Koala Day and a perfect time to tell you about some noteworthy and ground-breaking research that was made possible by AWS Research Credits and the AWS Cloud.

Five years ago, a research team led by Dr. Rebecca Johnson (Director of the Australian Museum Research Institute) set out to learn more about koala populations, genetics, and diseases. Because the koala is a biologically unique animal with a very limited diet, maintaining a healthy and genetically diverse population is a key element of any conservation plan. In addition to characterizing the genetic diversity of koala populations, the team wanted to strengthen Australia’s ability to lead large-scale genome sequencing projects.

Inside the Koala Genome
Last month the team published their results in Nature Genetics. Their paper (Adaptation and Conservation Insights from the Koala Genome) identifies the genomic basis for the koala’s unique biology. Even though I had to look up dozens of concepts as I read the paper, I was able to come away with a decent understanding of what they found. Here’s my lay summary:

Toxic Diet – The eucalyptus leaves favored by koalas contain a myriad of substances that are toxic to other species if ingested. Gene expansions and selection events in genes encoding enzymes with detoxification functions enable koalas to rapidly detoxify these substances, allowing them to subsist on a diet favored by no other animal. The same genetic repertoire that underlies this accelerated metabolism also renders common anti-inflammatory medications and antibiotics ineffective for treating ailing koalas.

Food Choice – Koalas are, as I noted earlier, very picky eaters. Genetically speaking, this comes about because their senses of smell and taste are enhanced, with 6 genes giving them the ability to discriminate between plant metabolites on the basis of smell. The researchers also found that koalas have a gene that helps them to select eucalyptus leaves with a high water content, and another that enhances their ability to perceive bitter and umami flavors.

Reproduction – The researchers identified specific genes that control ovulation and birth. In the interest of frugality, female koalas produce eggs only when needed.

Koala Milk – Newborn koalas are the size of a kidney bean and weigh less than half of a gram! They nurse for about a year, taking milk that changes in composition over time, with a potential genetic correlation. The researchers also identified genes known to have anti-microbial properties.

Immune Systems – The researchers identified genes that formed the basis for resistance, immunity, or susceptibility to certain diseases that affect koalas. They also found evidence of a “genomic invasion” (their words) where the koala retrovirus actually inserts itself into the genome.

Genetic Diversity – The researchers also examined how geological events like habitat barriers and surface temperatures have shaped genetic diversity and population evolution. They found that koalas from some areas had markedly less genetic diversity than those from others, with evidence that allowed them to correlate diversity (or the lack of it) with natural barriers such as the Hunter Valley.

Powered by AWS
Creating a complete genome sequence requires (among many other things) an incredible amount of compute power and a vast amount of storage.

While I don’t fully understand the process, I do know that it works on a bottom-up basis. The DNA samples are broken up into manageable pieces, each one containing several tens of thousands of base pairs. A variety of chemicals are applied to cause the different base constituents (A, T, C, or G) to fluoresce, and the resulting emission is captured, measured, and stored. Since this study generated a koala reference genome, the sequencing reads were assembled using an overlap-layout-consensus assembly algorithm known as FALCON, which was run on AWS. The koala genome comes in at 3.42 billion base pairs, slightly larger than the human genome.

I’m happy to report that this groundbreaking work was performed on AWS. The research team used cfnCluster to create multiple clusters, each with 500 to 1000 vCPUs, and running Falcon from Pacific Biosciences. All in all, the team used 3 million EC2 core hours, most of which were EC2 Spot Instances. Having access to flexible, low-cost compute power allowed the bioinformatics team to experiment with the configuration of the Falcon pipeline as they tuned and adapted it to their workload.
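As a rough back-of-the-envelope check on what 3 million core hours means in wall-clock terms at those cluster sizes (my own illustrative arithmetic, not the team's figures):

# Rough wall-clock estimate: 3 million vCPU-hours spread across clusters
# of 500 to 1,000 vCPUs. In practice multiple clusters ran concurrently,
# so the actual elapsed time was shorter than a single-cluster run.
core_hours = 3_000_000
for vcpus in (500, 1000):
    hours = core_hours / vcpus
    print(f"{vcpus} vCPUs -> {hours:,.0f} hours (~{hours / 24:.0f} days) if run end-to-end")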

We are happy to have done our small part to help with this interesting and valuable research!

Jeff;

Now Available – Amazon EC2 High Memory Instances with 6, 9, and 12 TB of Memory, Perfect for SAP HANA

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-amazon-ec2-high-memory-instances-with-6-9-and-12-tb-of-memory-perfect-for-sap-hana/

The Altair 8800 computer that I built in 1977 had just 4 kilobytes of memory. Today I was able to use an EC2 instance with 12 terabytes (12 tebibytes to be exact) of memory, more than 3 billion times as much!

The new Amazon EC2 High Memory Instances let you take advantage of other AWS services including Amazon Elastic Block Store (EBS), Amazon Simple Storage Service (S3), AWS Identity and Access Management (IAM), Amazon CloudWatch, and AWS Config. They are designed to allow AWS customers to run large-scale SAP HANA installations, and can be used to build production systems that provide enterprise-grade data protection and business continuity.

Here are the specs:

Instance Name     Memory    Logical Processors    Dedicated EBS Bandwidth    Network Bandwidth
u-6tb1.metal      6 TiB     448                   14 Gbps                    25 Gbps
u-9tb1.metal      9 TiB     448                   14 Gbps                    25 Gbps
u-12tb1.metal     12 TiB    448                   14 Gbps                    25 Gbps

Each Logical Processor is a hyperthread on one of the 224 physical CPU cores. All three sizes are powered by the latest generation Intel® Xeon® Platinum 8176M (Skylake) processors running at 2.1 GHz (with Turbo Boost to 3.80 GHz), and are available as EC2 Dedicated Hosts for launch within a new or existing Amazon Virtual Private Cloud (VPC). You can launch them using the AWS Command Line Interface (CLI) or the EC2 API, and manage them there or in the EC2 Console.
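If you are scripting the launch rather than clicking through the console, the flow is the usual Dedicated Host one: locate (or allocate) a host of the right type, then launch onto it with host tenancy. A hedged boto3 sketch; the AMI ID is a placeholder, and I'm assuming a suitable Dedicated Host already exists in the account:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Find the Dedicated Host that was provisioned for the High Memory instance.
hosts = ec2.describe_hosts(Filters=[
    {'Name': 'instance-type', 'Values': ['u-12tb1.metal']}])
host_id = hosts['Hosts'][0]['HostId']

# Launch the instance onto that specific host (AMI ID is a placeholder;
# use an operating system image that is supported for SAP HANA).
ec2.run_instances(
    ImageId='ami-0123456789abcdef0',
    InstanceType='u-12tb1.metal',
    MinCount=1, MaxCount=1,
    Placement={'Tenancy': 'host', 'HostId': host_id})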

The instances are EBS-Optimized by default, and give you low-latency access to encrypted and unencrypted EBS volumes. You can choose between Provisioned IOPS, General Purpose (SSD), and Streaming Magnetic volumes, and can attach multiple volumes, each with a distinct type and size, to each instance.

SAP HANA in Minutes
The EC2 High Memory instances are certified by SAP for OLTP and OLAP workloads such as S/4HANA, Suite on HANA, BW/4HANA, BW on HANA, and Datamart (see the SAP HANA Hardware Directory for more information).

We ran the SAP Standard Application Benchmark and measured the instances at 480,600 SAPS, making them suitable for very large workloads. Here’s an excerpt from the benchmark:

In anticipation of today’s launch, the EC2 team provisioned a u-12tb1.metal instance for my AWS account and I located it in the Dedicated Hosts section of the EC2 Console:

Following the directions in the SAP HANA on AWS Quick Start, I copy the Host Reservation ID, hop over to the CloudFormation Console and click Create Stack to get started. I choose my template, give my stack a name, and enter all of the necessary parameters, including the ID that I copied, and click Next to proceed:

On the next page I indicate that I want to tag my resources, leave everything else as-is, and click Next:

I review my settings, acknowledge that the stack might create IAM resources, and click Next to create the stack:

The AWS resources are created and SAP HANA is installed, all in less than 40 minutes:
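For repeatable deployments, the same stack can be created from code instead of the console. Here's a hedged sketch with boto3; the stack name, template URL, and parameter keys are placeholders, since the real ones are defined by the SAP HANA on AWS Quick Start template:

import boto3

cfn = boto3.client('cloudformation', region_name='us-east-1')

cfn.create_stack(
    StackName='sap-hana-u12tb1',
    TemplateURL='https://example-bucket.s3.amazonaws.com/sap-hana-quickstart.template',  # placeholder
    Parameters=[
        # Parameter keys are illustrative; use the names defined by the Quick Start.
        {'ParameterKey': 'HostReservationId', 'ParameterValue': 'hr-0123456789abcdef0'},
        {'ParameterKey': 'KeyName',           'ParameterValue': 'my-key-pair'},
    ],
    Capabilities=['CAPABILITY_IAM'])   # acknowledge that the stack may create IAM resources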

Using an EC2 instance on the public subnet of my VPC, I can access the new instance. Here’s the memory:

And here’s the CPU info:

I can also run an hdbsql query:

SELECT 
  DISTINCT HOST, CAST(VALUE/1024/1024/1024 AS INTEGER) AS TOTAL_MEMORY_GB 
  FROM SYS.M_MEMORY
  WHERE NAME='SYSTEM_MEMORY_SIZE';

Here’s the output, showing that SAP HANA has access to 12 TiB of memory:

Another option is to have the template create a second EC2 instance, this one running Windows on a public subnet, and accessible via RDP:

I could install HANA Studio on this instance and use its visual interface to run my SAP HANA queries.

The Quick Start implementation uses high performance SSD-based EBS storage volumes for all of your data. This gives you the power to switch to a larger instance in minutes without having to migrate any data.

Available Now
Just like the existing SAP-certified X1 and X1e instances, the EC2 High Memory instances are very cost-effective. For example, the effective hourly rate for the All Upfront 3-Year Reservation for a u-12tb1.metal Dedicated Host in the US East (N. Virginia) Region is $30.539 per hour.

These instances are now available in the US East (N. Virginia) and Asia Pacific (Tokyo) Regions as Dedicated Hosts with a 3-year term, and will be available soon in the US West (Oregon), Europe (Ireland), and AWS GovCloud (US) Regions. If you are ready to get started, contact your AWS account team or use the Contact Us page to make a request.

In the Works
We’re not stopping at 12 TiB, and are planning to launch instances with 18 TiB and 24 TiB of memory in 2019.

Jeff;

PS – If you have applications that might need multiple terabytes in the future but can run comfortably in less memory today, be sure to consider the R5, X1, and X1e instances.

 

New – Parallel Query for Amazon Aurora

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-parallel-query-for-amazon-aurora/

Amazon Aurora is a relational database that was designed to take full advantage of the abundance of networking, processing, and storage resources available in the cloud. While maintaining compatibility with MySQL and PostgreSQL on the user-visible side, Aurora makes use of a modern, purpose-built distributed storage system under the covers. Your data is striped across hundreds of storage nodes distributed over three distinct AWS Availability Zones, with two copies per zone, on fast SSD storage. Here’s what this looks like (extracted from Getting Started with Amazon Aurora):

New Parallel Query
When we launched Aurora we also hinted at our plans to apply the same scale-out design principle to other layers of the database stack. Today I would like to tell you about our next step along that path.

Each node in the storage layer pictured above also includes plenty of processing power. Aurora is now able to make great use of that processing power by taking your analytical queries (generally those that process all or a large part of a good-sized table) and running them in parallel across hundreds or thousands of storage nodes, with speed benefits approaching two orders of magnitude. Because this new model reduces network, CPU, and buffer pool contention, you can run a mix of analytical and transactional queries simultaneously on the same table while maintaining high throughput for both types of queries.

The instance class determines the number of parallel queries that can be active at a given time:

  • db.r*.large – 1 concurrent parallel query session
  • db.r*.xlarge – 2 concurrent parallel query sessions
  • db.r*.2xlarge – 4 concurrent parallel query sessions
  • db.r*.4xlarge – 8 concurrent parallel query sessions
  • db.r*.8xlarge – 16 concurrent parallel query sessions
  • db.r4.16xlarge – 16 concurrent parallel query sessions

You can use the aurora_pq parameter to enable and disable the use of parallel queries at the global and the session level.
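For example, to turn Parallel Query off by default across a cluster, you could set aurora_pq in a custom DB cluster parameter group. A sketch; the parameter group name is a placeholder, and the accepted value format (ON/OFF versus 0/1) is an assumption worth verifying:

import boto3

rds = boto3.client('rds', region_name='us-east-1')

# Disable Parallel Query cluster-wide via a custom DB cluster parameter group.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName='aurora-pq-params',
    Parameters=[{
        'ParameterName': 'aurora_pq',
        'ParameterValue': 'OFF',
        'ApplyMethod': 'immediate',
    }])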

Parallel queries enhance the performance of over 200 types of single-table predicates and hash joins. The Aurora query optimizer will automatically decide whether to use Parallel Query based on the size of the table and the amount of table data that is already in memory; you can also use the aurora_pq_force session variable to override the optimizer for testing purposes.

Parallel Query in Action
You will need to create a fresh cluster in order to make use of the Parallel Query feature. You can create one from scratch, or you can restore a snapshot.
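If you go the snapshot route from code rather than the console, the restore can be scripted. Here's a rough boto3 sketch; the identifiers are placeholders, and I'm assuming that the Parallel Query capacity type corresponds to the parallelquery engine mode in the RDS API:

import boto3

rds = boto3.client('rds', region_name='us-east-1')

# Restore the snapshot into a new cluster with Parallel Query enabled.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier='aurora-pq-test',
    SnapshotIdentifier='tpch-100gb-snapshot',       # placeholder snapshot name
    Engine='aurora',                                # MySQL 5.6-compatible Aurora
    EngineMode='parallelquery')

# Add an instance to the new cluster so there is something to connect to.
rds.create_db_instance(
    DBInstanceIdentifier='aurora-pq-test-1',
    DBClusterIdentifier='aurora-pq-test',
    DBInstanceClass='db.r4.2xlarge',
    Engine='aurora')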

To create a cluster that supports Parallel Query, I simply choose Provisioned with Aurora parallel query enabled as the Capacity type:

I used the CLI to restore a 100 GB snapshot for testing, and then explored one of the queries from the TPC-H benchmark. Here’s the basic query:

SELECT
  l_orderkey,
  SUM(l_extendedprice * (1-l_discount)) AS revenue,
  o_orderdate,
  o_shippriority

FROM customer, orders, lineitem

WHERE
  c_mktsegment='AUTOMOBILE'
  AND c_custkey = o_custkey
  AND l_orderkey = o_orderkey
  AND o_orderdate < date '1995-03-13'
  AND l_shipdate > date '1995-03-13'

GROUP BY
  l_orderkey,
  o_orderdate,
  o_shippriority

ORDER BY
  revenue DESC,
  o_orderdate LIMIT 15;

The EXPLAIN command shows the query plan, including the use of Parallel Query:

+----+-------------+----------+------+-------------------------------+------+---------+------+-----------+--------------------------------------------------------------------------------------------------------------------------------+
| id | select_type | table    | type | possible_keys                 | key  | key_len | ref  | rows      | Extra                                                                                                                          |
+----+-------------+----------+------+-------------------------------+------+---------+------+-----------+--------------------------------------------------------------------------------------------------------------------------------+
|  1 | SIMPLE      | customer | ALL  | PRIMARY                       | NULL | NULL    | NULL |  14354602 | Using where; Using temporary; Using filesort                                                                                   |
|  1 | SIMPLE      | orders   | ALL  | PRIMARY,o_custkey,o_orderdate | NULL | NULL    | NULL | 154545408 | Using where; Using join buffer (Hash Join Outer table orders); Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra)   |
|  1 | SIMPLE      | lineitem | ALL  | PRIMARY,l_shipdate            | NULL | NULL    | NULL | 606119300 | Using where; Using join buffer (Hash Join Outer table lineitem); Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra) |
+----+-------------+----------+------+-------------------------------+------+---------+------+-----------+--------------------------------------------------------------------------------------------------------------------------------+
3 rows in set (0.01 sec)

Here is the relevant part of the Extras column:

Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra)

The query runs in less than 2 minutes when Parallel Query is used:

+------------+-------------+-------------+----------------+
| l_orderkey | revenue     | o_orderdate | o_shippriority |
+------------+-------------+-------------+----------------+
|   92511430 | 514726.4896 | 1995-03-06  |              0 |
|  593851010 | 475390.6058 | 1994-12-21  |              0 |
|  188390981 | 458617.4703 | 1995-03-11  |              0 |
|  241099140 | 457910.6038 | 1995-03-12  |              0 |
|  520521156 | 457157.6905 | 1995-03-07  |              0 |
|  160196293 | 456996.1155 | 1995-02-13  |              0 |
|  324814597 | 456802.9011 | 1995-03-12  |              0 |
|   81011334 | 455300.0146 | 1995-03-07  |              0 |
|   88281862 | 454961.1142 | 1995-03-03  |              0 |
|   28840519 | 454748.2485 | 1995-03-08  |              0 |
|  113920609 | 453897.2223 | 1995-02-06  |              0 |
|  377389669 | 453438.2989 | 1995-03-07  |              0 |
|  367200517 | 453067.7130 | 1995-02-26  |              0 |
|  232404000 | 452010.6506 | 1995-03-08  |              0 |
|   16384100 | 450935.1906 | 1995-03-02  |              0 |
+------------+-------------+-------------+----------------+
15 rows in set (1 min 53.36 sec)

I can disable Parallel Query for the session (I can use an RDS custom cluster parameter group for a longer-lasting effect):

set SESSION aurora_pq=OFF;

The query runs considerably slower without it:

+------------+-------------+-------------+----------------+
| l_orderkey | revenue     | o_orderdate | o_shippriority |
+------------+-------------+-------------+----------------+
|   92511430 | 514726.4896 | 1995-03-06  |              0 |
...
|   16384100 | 450935.1906 | 1995-03-02  |              0 |
+------------+-------------+-------------+----------------+
15 rows in set (1 hour 25 min 51.89 sec)

This was on a db.r4.2xlarge instance; other instance sizes, data sets, access patterns, and queries will perform differently. I can also override the query optimizer and insist on the use of Parallel Query for testing purposes:

set SESSION aurora_pq_force=ON;

Things to Know
Here are a few things to keep in mind when you start to explore Amazon Aurora Parallel Query:

Engine Support – We are launching with support for MySQL 5.6, and are working on support for MySQL 5.7 and PostgreSQL.

Table Formats – The table row format must be COMPACT; partitioned tables are not supported.

Data Types – The TEXT, BLOB, and GEOMETRY data types are not supported.

DDL – The table cannot have any pending fast online DDL operations.

Cost – You can make use of Parallel Query at no extra charge. However, because it reads directly from storage, there is a possibility that your IO cost will increase.

Give it a Shot
This feature is available now and you can start using it today!

Jeff;

 

AWS Data Transfer Price Reductions – Up to 34% (Japan) and 28% (Australia)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-data-transfer-price-reductions-up-to-34-japan-and-28-australia/

I’ve got good news for AWS customers who make use of our Asia Pacific (Tokyo) and Asia Pacific (Sydney) Regions. Effective September 1, 2018 we are reducing prices for data transfer from Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), and Amazon CloudFront by up to 34% in Japan and 28% in Australia.

EC2 and S3 Data Transfer
Here are the new prices for data transfer from EC2 and S3 to the Internet:

EC2 & S3 Data Transfer Out to Internet    Japan                              Australia
                                          Old Rate    New Rate    Change     Old Rate    New Rate    Change
Up to 1 GB / Month                        $0.000      $0.000        0%       $0.000      $0.000        0%
Next 9.999 TB / Month                     $0.140      $0.114      -19%       $0.140      $0.114      -19%
Next 40 TB / Month                        $0.135      $0.089      -34%       $0.135      $0.098      -27%
Next 100 TB / Month                       $0.130      $0.086      -34%       $0.130      $0.094      -28%
Greater than 150 TB / Month               $0.120      $0.084      -30%       $0.120      $0.092      -23%

You can consult the EC2 Pricing and S3 Pricing pages for more information.
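To see what the new tiers mean in practice, here is a small worked example that walks the Japan tiers for a hypothetical 60 TB month. The rates are taken from the table above; it treats 1 TB as 1,000 GB and uses the simple tier boundaries shown, so consider it a rough estimate rather than a bill simulation:

# Tiered data transfer cost for EC2/S3 -> Internet from Japan, new rates (USD per GB).
# Tiers: (size in GB, rate); the final tier catches everything beyond 150 TB.
TIERS_JAPAN_NEW = [
    (1,            0.000),   # first 1 GB free
    (9_999,        0.114),   # next 9.999 TB
    (40_000,       0.089),   # next 40 TB
    (100_000,      0.086),   # next 100 TB
    (float('inf'), 0.084),   # beyond 150 TB
]

def transfer_cost(gb, tiers):
    cost = 0.0
    for size, rate in tiers:
        used = min(gb, size)
        cost += used * rate
        gb -= used
        if gb <= 0:
            break
    return cost

print(f"60 TB out from Japan: ${transfer_cost(60_000, TIERS_JAPAN_NEW):,.2f} per month")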

CloudFront Data Transfer
Here are the new prices for data transfer from CloudFront edge nodes to the Internet:

CloudFront Data Transfer Out to Internet    Japan                              Australia
                                            Old Rate    New Rate    Change     Old Rate    New Rate    Change
Up to 10 TB / Month                         $0.140      $0.114      -19%       $0.140      $0.114      -19%
Next 40 TB / Month                          $0.135      $0.089      -34%       $0.135      $0.098      -27%
Next 100 TB / Month                         $0.120      $0.086      -28%       $0.120      $0.094      -22%
Next 350 TB / Month                         $0.100      $0.084      -16%       $0.100      $0.092       -8%
Next 524 TB / Month                         $0.080      $0.080        0%       $0.095      $0.090       -5%
Next 4 PB / Month                           $0.070      $0.070        0%       $0.090      $0.085       -6%
Over 5 PB / Month                           $0.060      $0.060        0%       $0.085      $0.080       -6%

Visit the CloudFront Pricing page for more information.

We have also reduced the price of data transfer from CloudFront to your origin. The price for CloudFront Data Transfer to Origin from edge locations in Australia has been reduced by 20%, to $0.080 per GB. This price applies to content uploads via POST and PUT.

Things to Know
Here are a few interesting things that you should know about AWS and data transfer:

AWS Free Tier – You can use the AWS Free Tier to get started with, and to learn more about, EC2, S3, CloudFront, and many other AWS services. The AWS Getting Started page contains lots of resources to help you with your first project.

Data Transfer from AWS Origins to CloudFront – There is no charge for data transfers from an AWS origin (S3, EC2, Elastic Load Balancing, and so forth) to any CloudFront edge location.

CloudFront Reserved Capacity Pricing – If you routinely use CloudFront to deliver 10 TB or more of content per month, you should investigate our Reserved Capacity pricing. You can receive a significant discount by committing to transfer 10 TB or more of content from a single region, with additional discounts at higher levels of usage. To learn more or to sign up, simply Contact Us.

Jeff;

 

New – AWS Storage Gateway Hardware Appliance

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-storage-gateway-hardware-appliance/

AWS Storage Gateway connects your on-premises applications to AWS storage services such as Amazon Simple Storage Service (S3), Amazon Elastic Block Store (EBS), and Amazon Glacier. It runs in your existing virtualized environment and is visible to your applications and your client operating systems as a file share, a local block volume, or a virtual tape library. The resulting hybrid storage model gives our customers the ability to use their AWS Storage Gateways for backup, archiving, disaster recovery, cloud data processing, storage tiering, and migration.

New Hardware Appliance
Today we are making Storage Gateway available as a hardware appliance, adding to the existing support for VMware ESXi, Microsoft Hyper-V, and Amazon EC2. This means that you can now make use of Storage Gateway in situations where you do not have a virtualized environment, server-class hardware, or IT staff with the specialized skills needed to manage them. You can order appliances from Amazon.com for delivery to branch offices, warehouses, and “outpost” offices that lack dedicated IT resources. Setup (as you will see in a minute) is quick and easy, and gives you access to three storage solutions:

File Gateway – A file interface to Amazon S3, accessible via NFS or SMB. The files are stored as S3 objects, allowing you to make use of specialized S3 features such as lifecycle management and cross-region replication. You can trigger AWS Lambda functions, run Amazon Athena queries, and use Amazon Macie to discover and classify sensitive data.

Volume Gateway – Cloud-backed storage volumes, accessible as local iSCSI volumes. Gateways can be configured to cache frequently accessed data locally, or to store a full copy of all data locally. You can create EBS snapshots of the volumes and use them for disaster recovery or data migration.

Tape Gateway – A cloud-based virtual tape library (VTL), accessible via iSCSI, so you can replace your on-premises tape infrastructure, without changing your backup workflow.

To learn more about each of these solutions, read What is AWS Storage Gateway.

The AWS Storage Gateway Hardware Appliance is based on a specially configured Dell EMC PowerEdge R640 Rack Server that is pre-loaded with AWS Storage Gateway software. It has 2 Intel® Xeon® processors, 128 GB of memory, 6 TB of usable SSD storage for your locally cached data, and redundant power supplies. You can order one from Amazon.com:

If you have an Amazon Business account (they’re free), you can use a purchase order for the transaction. In addition to simplifying deployment, using this standardized configuration helps to assure consistent performance for your local applications.

Hardware Setup
As you know, I like to go hands-on with new AWS products. My colleagues shipped a pre-release appliance to me; I left it under the watchful eye of my CSO (Canine Security Officer) until I was ready to write this post:

I don’t have a server room or a rack, so I set it up on my hobby table for testing:

In addition to the appliance, I also scrounged up a VGA cable, a USB keyboard, a small monitor, and a power adapter (C13 to NEMA 5-15). The adapter is necessary because the cord included with the appliance is intended to plug into a power distribution jack of the kind commonly found in a data center. I connected it all up, turned it on, watched it boot up, and then entered a new administrative password.

Following the directions in the documentation, I configured an IPV4 address, using DHCP for convenience:

I captured the IP address for use in the next step, selected Back (the UI is keyboard-driven) and then logged out. This is the only step that takes place on the local console.

Gateway Configuration
At this point I will switch from past to present, and walk you through the configuration process. As directed by the Getting Started Guide, I open the Storage Gateway Console on the same network as the appliance, select the region where I want to create my gateway, and click Get started:

I select File gateway and click Next to proceed:

I select Hardware Appliance as my host platform (I can click Buy on Amazon to purchase one if necessary), and click Next:

Then I enter the IP address of my appliance and click Connect:

I enter a name for my gateway (jbgw1), set the time zone, pick ZFS as my RAID Volume Manager, and click Activate to proceed:

My gateway is activated within a second or two and I can see it in the Hardware section of the console:
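The console handles activation for me; for a scripted setup, the Storage Gateway API exposes the same step. Here's a hedged sketch — the activation key, time zone, and gateway name are placeholders, and with the hardware appliance the console normally takes care of obtaining the key from the device:

import boto3

sgw = boto3.client('storagegateway', region_name='us-west-2')

# Activate the appliance as a file gateway. The activation key comes from
# the appliance during setup (placeholder value shown here).
response = sgw.activate_gateway(
    ActivationKey='ABCDE-12345-FGHIJ-67890-KLMNO',
    GatewayName='jbgw1',
    GatewayTimezone='GMT-8:00',
    GatewayRegion='us-west-2',
    GatewayType='FILE_S3')

print(response['GatewayARN'])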

At this point I am free to use a console that is not on the same network, so I’ll switch back to my trusty WorkSpace!

Now that my hardware has been activated, I can launch the actual gateway service on it. I select the appliance, and choose Launch Gateway from the Actions menu:

I choose the desired gateway type, enter a name (fgw1) for it, and click Launch gateway:

The gateway will start off in the Offline status, and transition to Online within 3 to 5 minutes. The next step is to allocate local storage by clicking Edit local disks:

Since I am creating a file gateway, all of the local storage is used for caching:

Now I can create a file share on my appliance! I click Create file share, enter the name of an existing S3 bucket, and choose NFS or SMB, then click Next:

I configure a couple of S3 options, request creation of a new IAM role, and click Next:

I review all of my choices and click Create file share:
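For reference, creating a similar NFS file share from code looks roughly like this sketch; the bucket name, role ARN, and gateway ARN are placeholders, and the IAM role must allow the gateway to access the bucket:

import uuid
import boto3

sgw = boto3.client('storagegateway', region_name='us-west-2')

# Create an NFS file share backed by an existing S3 bucket.
share = sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),   # idempotency token
    GatewayARN='arn:aws:storagegateway:us-west-2:123456789012:gateway/sgw-12345678',
    Role='arn:aws:iam::123456789012:role/StorageGatewayS3Access',
    LocationARN='arn:aws:s3:::example-file-share-bucket')

print(share['FileShareARN'])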

After I create the share I can see the commands that are used to mount it in each client environment:

I mount the share on my Ubuntu desktop (I had to install the nfs-client package first) and copy a bunch of files to it:

Then I visit the S3 bucket and see that the gateway has already uploaded the files:

Finally, I have the option to change the configuration of my appliance. After making sure that all network clients have unmounted the file share, I remove the existing gateway:

And launch a new one:

And there you have it. I installed and configured the appliance, created a file share that was accessible from my on-premises systems, and then copied files to it for replication to the cloud.

Now Available
The Storage Gateway Hardware Appliance is available now and you can purchase one today. Start in the AWS Storage Gateway Console and follow the steps above!

Jeff;

 

 

New – AWS Systems Manager Session Manager for Shell Access to EC2 Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-session-manager/

It is a very interesting time to be a corporate IT administrator. On the one hand, developers are talking about (and implementing) an idyllic future built on infrastructure as code, where servers and other resources are treated as cattle. On the other hand, legacy systems still must be treated as pets, set up and maintained by hand or with the aid of limited automation. Many of the customers that I speak with are making the transition to the future at a rapid pace, but need to work in the world that exists today. For example, they still need shell-level access to their servers on occasion. They might need to kill runaway processes, consult server logs, fine-tune configurations, or install temporary patches, all while maintaining a strong security profile. They want to avoid the hassle that comes with running bastion hosts and the risks that arise when opening up inbound SSH ports on the instances.

We’ve already addressed some of the need for shell-level access with the AWS Systems Manager Run Command. This AWS facility gives administrators secure access to EC2 instances. It allows them to create command documents and run them on any desired set of EC2 instances, with support for both Linux and Microsoft Windows. The commands are run asynchronously, with output captured for review.

New Session Manager
Today we are adding a new option for shell-level access. The new Session Manager makes AWS Systems Manager even more powerful. You can now use a new browser-based interactive shell and a command-line interface (CLI) to manage your Windows and Linux instances. Here’s what you get:

Secure Access – You don’t have to manually set up user accounts, passwords, or SSH keys on the instances and you don’t have to open up any inbound ports. Session Manager communicates with the instances via the SSM Agent across an encrypted tunnel that originates on the instance, and does not require a bastion host.

Access Control – You use IAM policies and users to control access to your instances, and don’t need to distribute SSH keys. You can limit access to a desired time/maintenance window by using IAM’s Date Condition Operators.

Auditability – Commands and responses can be logged to Amazon CloudWatch and to an S3 bucket. You can arrange to receive an SNS notification when a new session is started.

Interactivity – Commands are executed synchronously in a fully interactive bash (Linux) or PowerShell (Windows) environment.

Programming and Scripting – In addition to the console access that I will show you in a moment, you can also initiate sessions from the command line (aws ssm ...) or via the Session Manager APIs.

The SSM Agent running on the EC2 instances must be able to connect to Session Manager’s public endpoint. You can also set up a PrivateLink connection to allow instances running in private VPCs (without Internet access or a public IP address) to connect to Session Manager.
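Besides the console, sessions can be started from the CLI or the API, as mentioned above. Here's a minimal boto3 sketch; the instance ID is a placeholder, and note that turning the response into an interactive shell requires the Session Manager plugin, which the AWS CLI (aws ssm start-session) wires up for you:

import boto3

ssm = boto3.client('ssm', region_name='us-west-2')

# Start a session against a managed instance (instance ID is a placeholder).
session = ssm.start_session(Target='i-0123456789abcdef0')
print(session['SessionId'])

# The response also includes a stream URL and token that the Session Manager
# plugin uses to open the interactive channel; terminate the session when done.
ssm.terminate_session(SessionId=session['SessionId'])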

Session Manager in Action
In order to use Session Manager to access my EC2 instances, the instances must be running the latest version (2.3.12 or above) of the SSM Agent. The instance role for the instances must reference a policy that allows access to the appropriate services; you can create your own or use AmazonEC2RoleForSSM. Here are my EC2 instances (sk1 and sk2 are running Amazon Linux; sk3-win and sk4-win are running Microsoft Windows):

Before I run my first command, I open AWS Systems Manager and click Preferences. Since I want to log my commands, I enter the name of my S3 bucket and my CloudWatch log group. If I enter either or both values, the instance policy must also grant access to them:

I’m ready to roll! I click Sessions, see that I have no active sessions, and click Start session to move ahead:

I select a Linux instance (sk1), and click Start session again:

The session opens up immediately:

I can do the same for one of my Windows instances:

The log streams are visible in CloudWatch:

Each stream contains the content of a single session:

In the Works
As usual, we have some additional features in the works for Session Manager. Here’s a sneak peek:

SSH Client – You will be able to create SSH sessions atop Session Manager without opening up any inbound ports.

On-Premises Access – We plan to give you the ability to access your on-premises instances (which must be running the SSM Agent) via Session Manager.

Available Now
Session Manager is available in all AWS Regions (including AWS GovCloud (US)) at no extra charge.

Jeff;