Tag Archives: Amazon Elastic Block Store

Prime Day 2017 – Powered by AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/prime-day-2017-powered-by-aws/

The third annual Prime Day set another round of records for global orders, topping Black Friday and Cyber Monday to become the biggest day in Amazon retail history. Over the course of the 30-hour event, tens of millions of Prime members purchased things like Echo Dots, Fire tablets, programmable pressure cookers, espresso machines, rechargeable batteries, and much more! July 11th also set a record for the number of new Prime memberships, as people signed up in order to take advantage of hundreds of thousands of deals. Amazon customers shopped online and made heavy use of the Amazon App, with mobile orders more than doubling from last Prime Day.

Powered by AWS
Last year I told you about How AWS Powered Amazon’s Biggest Day Ever, and shared what the team had learned with regard to preparation, automation, monitoring, and thinking big. All of those lessons still apply and you can read that post to learn more. Preparation for this year’s Prime Day (which began just days after Prime Day 2016 wrapped up) started with collecting and sharing best practices and identifying areas for improvement, then proceeded to implementation and stress testing as the big day approached. Two of the best practices involve auditing and GameDay:

Auditing – This is a formal way for us to track preparations, identify risks, and to track progress against our objectives. Each team must respond to a series of detailed technical and operational questions that are designed to help them determine their readiness. On the technical side, questions could revolve around time to recovery after a database failure, including the all-important check of the TTL (time to live) for the CNAME. Operational questions address schedules for on-call personnel, points of contact, and ownership of services & instances.

GameDay – This practice (which I believe originated with former Amazonian Jesse Robbins) is intended to validate all of the capacity planning & preparation and to verify that all of the necessary operational practices are in place and work as expected. It introduces simulated failures and helps to train the team to identify and quickly resolve issues, building muscle memory in the process. It also tests failover and recovery capabilities, and can expose latent defects that are lurking under the covers. GameDays help teams to understand scaling drivers (page views, orders, and so forth) and give them an opportunity to test their scaling practices. To learn more, read Resilience Engineering: Learning to Embrace Failure or watch the video: GameDay: Creating Resiliency Through Destruction.

Prime Day 2017 Metrics
So, how did we do this year?

The AWS teams checked their dashboards and log files, and were happy to share their metrics with me. Here are a few of the most interesting ones:

Block Storage – Use of Amazon Elastic Block Store (EBS) grew by 40% year-over-year, with aggregate data transfer jumping to 52 petabytes (a 50% increase) for the day and total I/O requests rising to 835 million (a 30% increase). The team told me that they loved the elasticity of EBS, and that they were able to ramp down on capacity after Prime Day concluded instead of being stuck with it.

NoSQL Database – Amazon DynamoDB requests from Alexa, the Amazon.com sites, and the Amazon fulfillment centers totaled 3.34 trillion, peaking at 12.9 million per second. According to the team, the extreme scale, consistent performance, and high availability of DynamoDB let them meet the needs of Prime Day without breaking a sweat.

Stack Creation – Nearly 31,000 AWS CloudFormation stacks were created for Prime Day in order to bring additional AWS resources online.

API Usage – AWS CloudTrail processed over 50 billion events and tracked more than 419 billion calls to various AWS APIs, all in support of Prime Day.

Configuration Tracking – AWS Config generated over 14 million configuration items for AWS resources.

You Can Do It
Running an event that is as large, complex, and mission-critical as Prime Day takes a lot of planning. If you have an event of this type in mind, please take a look at our new Infrastructure Event Readiness white paper. Inside, you will learn how to design and provision your applications to smoothly handle planned scaling events such as product launches or seasonal traffic spikes, with sections on automation, resiliency, cost optimization, event management, and more.

Jeff;

 

AWS Hot Startups – August 2017

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-august-2017/

There’s no doubt about it – Artificial Intelligence is changing the world and how it operates. Across industries, organizations from startups to Fortune 500s are embracing AI to develop new products, services, and opportunities that are more efficient and accessible for their consumers. From driverless cars to better preventative healthcare to smart home devices, AI is driving innovation at a fast rate and will continue to play a more important role in our everyday lives.

This month we’d like to highlight startups using AI solutions to help companies grow. We are pleased to feature:

  • SignalBox – a simple and accessible deep learning platform to help businesses get started with AI.
  • Valossa – an AI video recognition platform for the media and entertainment industry.
  • Kaliber – innovative applications for businesses using facial recognition, deep learning, and big data.

SignalBox (UK)

In 2016, SignalBox founder Alain Richardt was hearing the same comments being made by developers, data scientists, and business leaders. They wanted to get into deep learning but didn’t know where to start. Alain saw an opportunity to commodify and apply deep learning by providing a platform that does the heavy lifting with an easy-to-use web interface, blueprints for common tasks, and a single click to productize the models. With SignalBox, companies can start building deep learning models with no coding at all – they just select a data set, choose a network architecture, and go. SignalBox also offers step-by-step tutorials, tips and tricks from industry experts, and consulting services for customers that want an end-to-end AI solution.

SignalBox offers a variety of solutions that are being used across many industries for energy modeling, fraud detection, customer segmentation, insurance risk modeling, inventory prediction, real estate prediction, and more. Existing data science teams are using SignalBox to accelerate their innovation cycle. One innovative UK startup, Energi Mine, recently worked with SignalBox to develop deep networks that predict anomalous energy consumption patterns and do time series predictions on energy usage for businesses with hundreds of sites.

SignalBox uses a variety of AWS services including Amazon EC2, Amazon VPC, Amazon Elastic Block Store, and Amazon S3. The ability to rapidly provision EC2 GPU instances has been a critical factor in their success, both in keeping their operational expenses low and in getting to market quickly. Amazon API Gateway has allowed for operational automation, giving SignalBox the ability to control its infrastructure.

To learn more about SignalBox, visit here.

Valossa (Finland)

As students at the University of Oulu in Finland, the Valossa founders spent years doing research in the computer science and AI labs. During that time, the team witnessed how the world was moving beyond text, with video playing a greater role in day-to-day communication. This spawned an idea to use technology to automatically understand what an audience is viewing and share that information with a global network of content producers. Since 2015, Valossa has been building next generation AI applications to benefit the media and entertainment industry and is moving beyond the capabilities of traditional visual recognition systems.

Valossa’s AI is capable of analyzing any video stream. The AI studies a vast array of data within videos and converts that information into descriptive tags, categories, and overviews automatically. Basically, it sees, hears, and understands videos like a human does. The Valossa AI can detect people, visual and auditory concepts, and key speech elements, and it labels explicit content to make moderating and filtering content simpler. Valossa’s solutions are designed to provide value for the content production workflow, from media asset management to end-user applications for content discovery. AI-annotated content allows online viewers to jump directly to their favorite scenes or search specific topics and actors within a video.

Valossa leverages AWS to deliver the industry’s first complete AI video recognition platform. Using Amazon EC2 GPU instances, Valossa can easily scale their computation capacity based on customer activity. High-volume video processing with GPU instances provides the necessary speed for time-sensitive workflows. The geo-located Availability Zones in EC2 allow Valossa to bring resources close to their customers to minimize network delays. Valossa also uses Amazon S3 for video ingestion and to provide end-user video analytics, which makes managing and accessing media data easy and highly scalable.

To see how Valossa works, check out www.WhatIsMyMovie.com or enable the Alexa Skill, Valossa Movie Finder. To try the Valossa AI, sign up for free at www.valossa.com.

Kaliber (San Francisco, CA)

Serial entrepreneurs Ray Rahman and Risto Haukioja founded Kaliber in 2016. The pair had previously worked in startups building smart cities and online privacy tools, and teamed up to bring AI to the workplace and change the hospitality industry. Our world is designed to appeal to our senses – stores and warehouses have clearly marked aisles, products are colorfully packaged, and we use these designs to differentiate one thing from another. We tell each other apart by our faces, and previously that was something only humans could measure or act upon. Kaliber is using facial recognition, deep learning, and big data to create solutions for business use. Markets and companies that aren’t typically associated with cutting-edge technology will be able to use their existing camera infrastructure in a whole new way, making them more efficient and better able to serve their customers.

Computer video processing is rapidly expanding, and Kaliber believes that video recognition will extend to far more than security cameras and robots. Using the clients’ network of in-house cameras, Kaliber’s platform extracts key data points and maps them to actionable insights using their machine learning (ML) algorithm. Dashboards connect users to the client’s BI tools via the Kaliber enterprise APIs, and managers can view these analytics to improve their real-world processes, taking immediate corrective action with real-time alerts. Kaliber’s Real Metrics are aimed at combining the power of image recognition with ML to ultimately provide a more meaningful experience for all.

Kaliber uses many AWS services, including Amazon Rekognition, Amazon Kinesis, AWS Lambda, Amazon EC2 GPU instances, and Amazon S3. These services have been instrumental in helping Kaliber meet the needs of enterprise customers in record time.

Learn more about Kaliber here.

Thanks for reading and we’ll see you next month!

-Tina

 

New – Cost Allocation for EBS Snapshots

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-cost-allocation-for-ebs-snapshots/

Amazon Elastic Block Store (EBS) allows you to create persistent block storage volumes for your Amazon EC2 instances. The volumes offer consistent, low-latency performance and a choice of volume types. You can take snapshot backups of your EBS volumes, keep them for as long as you would like, and then restore them to a fresh volume.
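To make that snapshot lifecycle concrete, here is a minimal CLI sketch of the backup-and-restore cycle (the volume ID, snapshot ID, and Availability Zone are placeholders, so substitute your own values):

# Take a point-in-time snapshot of an existing EBS volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Nightly backup"

# Later, restore the snapshot to a fresh volume in the desired Availability Zone
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a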

AWS Billing and Cost Management provides you with tools and reports that you can use to track your spending. You can use Cost Allocation Tags to assign costs to your customers, applications, teams, departments, or billing codes at the level of individual resources.

Cost Allocation for Snapshots
Today we are adding cost allocation for EBS snapshots. While I expect AWS customers of all shapes and sizes to make good use of this feature, I know that enterprises will find it particularly interesting. They’ll be able to assign costs to the proper project, department, or entity. Similarly, Managed Service Providers, some of whom manage AWS footprints that encompass thousands of EBS volumes and many more EBS snapshots, will be able to map snapshot costs back to customer accounts and applications.

Tagging Snapshots and Generating Reports
Let’s walk through the process of tagging snapshots and allocating costs.

The first step is to implement a tagging regimen for your existing snapshots. You can create a script that calls the create-tags command or write code that calls the TagResources function. You can also use the Console’s Tag Editor to find the snapshots of interest across any number of AWS Regions:

I have a handful of snapshots and simply tagged some of them by hand. My tag key is usage and the values are backup, dev, and metrics. Here are my snapshots:
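If you would rather script this step than tag by hand, a minimal sketch using the create-tags command looks like this (the snapshot IDs are placeholders):

# Apply the usage tag to a couple of snapshots in one call
aws ec2 create-tags --resources snap-0123456789abcdef0 snap-0fedcba9876543210 --tags Key=usage,Value=backup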

Next, I need to tell AWS that the new tag key is being used for cost allocation. I open up the Billing Dashboard and click on Cost Allocation Tags:

Then I locate my tag in the list of user-defined tags, select it, and click on Activate:

AWS will deliver the first updated report within 24 hours, and will update Cost Explorer at least once per day after that (read Understanding Your Usage with Billing Reports to learn more).

I have two options. I can use Cost Explorer to explore the data visually, or I can create a usage report, download it into Excel and analyze it on my desktop. I’ll show you both!

Using Cost Explorer
I open up Cost Explorer, select the time range of interest, and filter by Usage Type Group, selecting EC2: EBS – Snapshots. Then I set the Group by option to Tag and choose my tag (usage) from the drop-down:

Then I click on Apply and inspect the report:

I can see my costs and my usage (measured in gigabyte-months) at a glance. I can also click on New report, enter a name, and save the report for reuse:

Creating a Cost & Usage Report
I click on Reports and then on Create report. I name it DailySnapshotUsage and set the Time unit to Daily:

Then I point it at my jbarr-billing bucket, select ZIP compression, and click on Next:

I confirm my settings on the next page and click on Review and Complete to finalize my report. I check back the next day and my report is ready:
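If you would rather define the report programmatically than click through the console, the Cost and Usage Report API offers a put-report-definition operation; here is a hedged sketch (the S3 prefix is a placeholder, and the bucket policy must already allow the billing service to write to the bucket):

# Define a daily, ZIP-compressed cost and usage report delivered to the jbarr-billing bucket
aws cur put-report-definition --region us-east-1 --report-definition '{
    "ReportName": "DailySnapshotUsage",
    "TimeUnit": "DAILY",
    "Format": "textORcsv",
    "Compression": "ZIP",
    "AdditionalSchemaElements": ["RESOURCES"],
    "S3Bucket": "jbarr-billing",
    "S3Prefix": "reports",
    "S3Region": "us-east-1"
}'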

Analyzing the Cost & Usage Report Using Excel
I can also download the cost and usage report and analyze it using Excel.

I switch to the S3 Console, open up the jbarr-billing bucket, and descend into the folder structure to find my report:

Then I download and unzip the file, and open it in Excel:
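You can also fetch the report from the command line; this is just a sketch, and the actual object key (which includes a date range and a report prefix specific to your setup) will differ in your bucket:

# Locate the report object, copy it locally, and unpack it
aws s3 ls s3://jbarr-billing/ --recursive | grep DailySnapshotUsage
aws s3 cp s3://jbarr-billing/reports/DailySnapshotUsage/20170801-20170901/DailySnapshotUsage-1.csv.zip .
unzip DailySnapshotUsage-1.csv.zip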

I want to see only the tagged usage, so I scroll over to column DJ (resourceTags/user:usage) and use Excel’s Filter operation to choose the tags of interest:

Then I hide most of the columns and end up with line item costs:

I’m highly confident that your Excel skills are better than mine, and that you can do a far better job of analyzing the data!

Understanding Snapshot Costs
As you create your reports and analyze your EBS snapshot costs and usage, keep in mind that snapshots are created incrementally and that the first snapshot will generally appear to be the most expensive one. If you delete a snapshot that contains blocks that are being used by a later snapshot, the space referenced by the blocks will now be attributed to the later snapshot. Therefore, with respect to a particular EBS volume, deleting the snapshot with the highest cost may simply move some of the costs to a more recent snapshot. Read Deleting an Amazon EBS Snapshot to learn more.

Available Now
This new feature is available now in all commercial AWS regions and you can start using it today.

Jeff;

 

 

EC2 F1 Instances with FPGAs – Now Generally Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-f1-instances-with-fpgas-now-generally-available/

We launched the Developer Preview of the FPGA-equipped F1 instances at AWS re:Invent. The response to the announcement was quick and overwhelming! We received over 2000 requests for entry, and were able to provide over 200 developers with access to the Hardware Development Kit (HDK) and the actual F1 instances.

In the post that I wrote for re:Invent, I told you that:

This highly parallelized model is ideal for building custom accelerators to process compute-intensive problems. Properly programmed, an FPGA has the potential to provide a 30x speedup to many types of genomics, seismic analysis, financial risk analysis, big data search, and encryption algorithms and applications.

During the preview, partners and developers have been working on all sorts of exciting tools, services, and applications. I’ll tell you more about them in just a moment.

Now Generally Available
Today we are making the F1 instances generally available in the US East (Northern Virginia) Region, with plans to bring them to other regions before too long.

We continued to add features and functions during the preview, while also making the development tools more efficient and easier to use. Here’s a summary:

Developer Community – We launched the AWS FPGA Development Forum to provide a place for FPGA developers to hang out and to communicate with us and with each other.

HDK and SDK – We published the EC2 FPGA Hardware (HDK) and Software Development Kit to GitHub, and made many improvements in response to feedback that we received during the preview.

The improvements include support for VHDL (in addition to Verilog), an improved virtual lab environment (Virtual JTAG, Virtual LED, and Virtual DipSwitch), AWS libraries for FPGA management and the FPGA runtime, and support for OpenCL including the AWS OpenCL runtime library.

FPGA Developer AMI – This Marketplace AMI contains a full set of FPGA development tools including an RTL compiler and simulator, along with Xilinx SDAccel for OpenCL development, all tuned for use on C4, M4, and R4 instances.

FPGAs At Work
Here’s a sampling of the impressive work that our partners have been doing with F1 instances:

Edico Genome is deploying their DRAGEN Bio-IT Platform on F1 instances, with the expectation that it will provide whole-genome sequencing that runs in real time.

Ryft offers the Ryft Cloud, an accelerator for data analytics and machine learning that extends Elastic Stack. It sources data from Amazon Kinesis, Amazon Simple Storage Service (S3), Amazon Elastic Block Store (EBS), and local instance storage and uses massive bitwise parallelism to drive performance. The product supports high-level JDBC, ODBC, and REST interfaces along with low-level C, C++, Java, and Python APIs (see the Ryft API page for more information).

Reconfigure.io launched a cloud-based service that allows you to program FPGAs using the Go programming language. You can build, test, and deploy your code from within their cloud-based environment while taking advantage of concurrency-oriented language features such as goroutines (lightweight threads), channels, and selects.

NGCodec ported their RealityCodec video encoder to the F1 and used it to produce broadcast-quality video at 80 frames per second. Their solution can encode up to 32 independent video streams on a single F1 instance (read their new post, You Deserve Better than Grainy Giraffes, to learn more).

FPGAs In School & Research
Research groups and graduate classes at top-tier universities contacted us via AWS Educate and were eager to gain access to F1 instances.

UCLA’s CS133 class (Parallel and Distributed Computing) is setting up an F1-based FPGA lab that will be operational within 3 or 4 weeks. According to UCLA Chancellor’s Professor Jason Cong, they are expanding multiple research projects to cover F1 including FPGA performance debugging, machine learning acceleration, Spark to FPGA compilation, and systolic array compilation.

Last month we announced that we are collaborating with the National Science Foundation (NSF) to foster innovation in big data research (read AWS Collaborates With the National Science Foundation to Foster Innovation to learn more and to find out how to apply for a grant).

FPGAs in the AWS Marketplace
As I shared in my original post, we have built a complete beginning to end solution that lets developers build FPGA-powered applications and services and list them in the AWS Marketplace. I can’t wait to see what kinds of cool things show up there!

Jeff;

AWS Hot Startups – March 2017

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-hot-startups-march-2017/

As the madness of March winds down, take a break from all the basketball and check out the cool startups Tina Barr brings you this month!

-Ana


The arrival of spring brings five new startups this month:

  • Amino Apps – providing social networks for hundreds of thousands of communities.
  • Appboy – empowering brands to strengthen customer relationships.
  • Arterys – revolutionizing the medical imaging industry.
  • Protenus – protecting patient data for healthcare organizations.
  • Syapse – improving targeted cancer care with shared data from across the country.

In case you missed them, check out February’s hot startups here.

Amino Apps (New York, NY)
Amino Apps was founded on the belief that interest-based communities were underdeveloped and outdated, particularly when it came to mobile. CEO Ben Anderson and CTO Yin Wang created the app to give users access to hundreds of thousands of communities, each of them a complete social network dedicated to a single topic. Some of the largest communities have over 1 million members and are built around topics like popular TV shows, video games, sports, and an endless number of hobbies and other interests. Amino hosts communities from around the world and is currently available in six languages with many more on the way.

Navigating the Amino app is easy. Simply download the app (iOS or Android), sign up with a valid email address, choose a profile picture, and start exploring. Users can search for communities and join any that fit their interests. Each community has chatrooms, multimedia content, quizzes, and a seamless commenting system. If a community doesn’t exist yet, users can create it in minutes using the Amino Creator and Manager app (ACM). The largest user-generated communities are turned into their own apps, which gives communities their own piece of real estate on members’ phones, as well as in app stores.

Amino’s vast global network of hundreds of thousands of communities is run on AWS services. Every day users generate, share, and engage with an enormous amount of content across hundreds of mobile applications. By leveraging AWS services including Amazon EC2, Amazon RDS, Amazon S3, Amazon SQS, and Amazon CloudFront, Amino can continue to provide new features to their users while scaling their service capacity to keep up with user growth.

Interested in joining Amino? Check out their jobs page here.

Appboy (New York, NY)
In 2011, Bill Magnuson, Jon Hyman, and Mark Ghermezian saw a unique opportunity to strengthen and humanize relationships between brands and their customers through technology. The trio created Appboy to empower brands to build long-term relationships with their customers and today they are the leading lifecycle engagement platform for marketing, growth, and engagement teams. The team recognized that as rapid mobile growth became undeniable, many brands were becoming frustrated with the lack of compelling and seamless cross-channel experiences offered by existing marketing clouds. Many of today’s top mobile apps and enterprise companies trust Appboy to take their marketing to the next level. Appboy manages user profiles for nearly 700 million monthly active users, and is used to power more than 10 billion personalized messages monthly across a multitude of channels and devices.

Appboy creates a holistic user profile that offers a single view of each customer. That user profile in turn powers contextual cross-channel messaging, lifecycle engagement automation, and robust campaign insights and optimization opportunities. Appboy offers solutions that allow brands to create push notifications, targeted emails, in-app and in-browser messages, news feed cards, and webhooks to enhance the user experience and increase customer engagement. The company prides itself on its interoperability, connecting to a variety of complementary marketing tools and technologies so brands can build the perfect stack to enable their strategies and experiments in real time.

AWS makes it easy for Appboy to dynamically size all of their service components and automatically scale up and down as needed. They use an array of services including Elastic Load Balancing, AWS Lambda, Amazon CloudWatch, Auto Scaling groups, and Amazon S3 to help scale capacity and better deal with unpredictable customer loads.

To keep up with the latest marketing trends and tactics, visit the Appboy digital magazine, Relate. Appboy was also recently featured in the #StartupsOnAir video series where they gave insight into their AWS usage.

Arterys (San Francisco, CA)
Getting test results back from a physician can often be a time consuming and tedious process. Clinicians typically employ a variety of techniques to manually measure medical images and then make their assessments. Arterys founders Fabien Beckers, John Axerio-Cilies, Albert Hsiao, and Shreyas Vasanawala realized that much more computation and advanced analytics were needed to harness all of the valuable information in medical images, especially those generated by MRI and CT scanners. Clinicians were often skipping measurements and making assessments based mostly on qualitative data. Their solution was to start a cloud/AI software company focused on accelerating data-driven medicine with advanced software products for post-processing of medical images.

Arterys’ products provide timely, accurate, and consistent quantification of images, improve speed to results, and improve the quality of the information offered to the treating physician. This allows for much better tracking of a patient’s condition, and thus better decisions about their care. Advanced analytics, such as deep learning and distributed cloud computing, are used to process images. The first Arterys product can contour cardiac anatomy as accurately as experts, but takes only 15-20 seconds instead of the 45-60 minutes required to do it manually. Their computing cloud platform is also fully HIPAA compliant.

Arterys relies on a variety of AWS services to process their medical images. Using deep learning and other advanced analytic tools, Arterys is able to render images without latency over a web browser using AWS G2 instances. They use Amazon EC2 extensively for all of their compute needs, including inference and rendering, and Amazon S3 is used to archive images that aren’t needed immediately, as well as manage costs. Arterys also employs Amazon Route 53, AWS CloudTrail, and Amazon EC2 Container Service.

Check out this quick video about the technology that Arterys is creating. They were also recently featured in the #StartupsOnAir video series and offered a quick demo of their product.

Protenus (Baltimore, MD)
Protenus founders Nick Culbertson and Robert Lord were medical students at Johns Hopkins Medical School when they saw first-hand how Electronic Health Record (EHR) systems could be used to improve patient care and share clinical data more efficiently. With increased efficiency came a huge issue – an onslaught of serious security and privacy concerns. Over the past two years, 140 million medical records have been breached, meaning that approximately 1 in 3 Americans have had their health data compromised. Health records contain a repository of sensitive information and a breach of that data can cause major havoc in a patient’s life – namely identity theft, prescription fraud, Medicare/Medicaid fraud, and improper performance of medical procedures. Using their experience and knowledge from former careers in the intelligence community and involvement in a leading hedge fund, Nick and Robert developed the prototype and algorithms that launched Protenus.

Today, Protenus offers a number of solutions that detect breaches and misuse of patient data for healthcare organizations nationwide. Using advanced analytics and AI, Protenus’ health data insights platform understands appropriate vs. inappropriate use of patient data in the EHR. It also protects privacy, aids compliance with HIPAA regulations, and ensures trust for patients and providers alike.

Protenus built and operates its SaaS offering atop Amazon EC2, where Dedicated Hosts and encrypted Amazon EBS volumes are used to ensure compliance with HIPAA regulations for the storage of Protected Health Information. They use Elastic Load Balancing and Amazon Route 53 for DNS, enabling unique, secure, client-specific access points to their Protenus instance.

To learn more about threats to patient data, read Hospitals’ Biggest Threat to Patient Data is Hiding in Plain Sight on the Protenus blog. Also be sure to check out their recent video in the #StartupsOnAir series for more insight into their product.

Syapse (Palo Alto, CA)
Syapse provides a comprehensive software solution that enables clinicians to treat patients with precision medicine for targeted cancer therapies — treatments that are designed and chosen using genetic or molecular profiling. Existing hospital IT doesn’t support the robust infrastructure and clinical workflows required to treat patients with precision medicine at scale, but Syapse centralizes and organizes patient data and delivers it to clinicians at the point of care. Syapse offers a variety of solutions for oncologists that allow them to access the full scope of patient data longitudinally, view recommended treatments or clinical trials for similar patients, and track outcomes over time. These solutions are helping health systems across the country to improve patient outcomes by offering the most innovative care to cancer patients.

Leading health systems such as Stanford Health Care, Providence St. Joseph Health, and Intermountain Healthcare are using Syapse to improve patient outcomes, streamline clinical workflows, and scale their precision medicine programs. A group of experts known as the Molecular Tumor Board (MTB) reviews complex cases and evaluates patient data, documents notes, and disseminates treatment recommendations to the treating physician. Syapse also provides reports that give health system staff insight into their institution’s oncology care, which can be used toward quality improvement, business goals, and understanding variables in the oncology service line.

Syapse uses Amazon Virtual Private Cloud, Amazon EC2 Dedicated Instances, and Amazon Elastic Block Store to build a high-performance, scalable, and HIPAA-compliant data platform that enables health systems to make precision medicine part of routine cancer care for patients throughout the country.

Be sure to check out the Syapse blog to learn more and also their recent video on the #StartupsOnAir video series where they discuss their product, HIPAA compliance, and more about how they are using AWS.

Thank you for checking out another month of awesome hot startups!

-Tina Barr

 

Easily Tag Amazon EC2 Instances and Amazon EBS Volumes on Creation

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/easily-tag-amazon-ec2-instances-and-amazon-ebs-volumes-on-creation/

In 2010, AWS launched resource tagging for Amazon EC2 instances and other EC2 resources. Since that launch, we have raised the allowable number of tags per resource from 10 to 50 and made tags more useful with the introduction of resource groups and Tag Editor. AWS customers use tags to track ownership, drive their cost accounting processes, implement compliance protocols, and control access to resources via AWS Identity and Access Management (IAM) policies.

The AWS tagging model provides separate functions for resource creation and resource tagging. Though this is flexible and has worked well for many of our users, it does result in a small time window when the resources exist in an untagged state. Using two separate functions means that it is possible for resource creation to succeed and tagging to fail, which would leave resources in an untagged state.

New this week, we have made tagging more flexible and more useful, with four new features:

  • Tag on creation – You can now specify tags for EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes as part of the API call that creates the resources.
  • Enforced tag usage – You can now write IAM policies that mandate the use of specific tags on EC2 instances and EBS volumes.
  • Resource-level permissions – By popular request, the CreateTags and DeleteTags functions now support IAM’s resource-level permissions.
  • Enforced volume encryption – You can now write IAM policies that mandate the use of encryption for newly created EBS volumes.

To learn more, see the full blog post on the AWS Blog.

– Craig

How to Use Service Control Policies in AWS Organizations to Enforce Healthcare Compliance in Your AWS Account

Post Syndicated from Aaron Lima original https://aws.amazon.com/blogs/security/how-to-use-service-control-policies-in-aws-organizations-to-enforce-healthcare-compliance-in-your-aws-account/

AWS customers with healthcare compliance requirements such as the U.S. Health Insurance Portability and Accountability Act (HIPAA) and Good Laboratory, Clinical, and Manufacturing Practices (GxP) might want to control access to the AWS services their developers use to build and operate their GxP and HIPAA systems. For example, customers with GxP requirements might approve AWS as a supplier on the basis of AWS’s SOC certification and therefore want to ensure that only the services in scope for SOC are available to developers of GxP systems. Likewise, customers with HIPAA requirements might want to ensure that only AWS HIPAA Eligible Services are available to store and process protected health information (PHI). Now with AWS Organizations—policy-based management for multiple AWS accounts—you can programmatically control access to the services within your AWS accounts.

In this blog post, I show how to restrict an AWS account to HIPAA Eligible Services as well as explain why you should include additional supporting AWS services with service control policies (SCPs) in AWS Organizations. Although this example is HIPAA related, you can repurpose it for GxP, a database of Genotypes and Phenotypes (dbGaP) solutions, or other healthcare compliance requirements for which you want to control developers’ access to a specific scope of services.

Managing an account hierarchy with AWS Organizations

Let’s say I manage four AWS accounts: a Payer account, a Development account, a Corporate IT account, and a fourth account that contains PHI. In accordance with AWS’s Business Associate Agreement (BAA), I want to be sure that only AWS HIPAA Eligible Services are allowed in the fourth account along with supporting AWS services that help encrypt and control access to the account. The following diagram shows a logical view of the associated account structure.

Diagram showing the logical view of the account structure

As illustrated in the preceding diagram, Organizations allows me to create this account hierarchy between the four AWS accounts I manage. Before I proceed to show how to create and apply an SCP to the HIPAA account in this hierarchy, I’ll define some Organizations terminology that I use in this post:

  • Organization – A consolidated set of AWS accounts that you manage. For the preceding example, I have already created my organization and invited my accounts. For more information about creating an organization and inviting accounts, see AWS Organizations – Policy-Based Management for Multiple AWS Accounts.
  • Master account – The management hub for Organizations. This is where I invite existing accounts, create new accounts and manage my SCPs. I run all commands demonstrated in this post from this master account. This is also my payer account in the preceding account structure diagram.
  • Service control policy (SCP) – A set of controls that the organization’s master account can apply to the organization, selected OUs, and selected accounts. SCPs allow me to whitelist or blacklist services and actions that I can delegate to the users and roles in the account to which the SCPs are applied. The resultant security permissions for a user or role are the intersection of the permissions allowed by the SCP and the permissions granted by an AWS Identity and Access Management (IAM) policy. I refer to SCPs as a policy type in some of this post’s command-line arguments.
  • Organizational unit (OU) – A container for a set of AWS accounts. OUs can be arranged into a hierarchy that can be as many as five levels deep. The top of the hierarchy of OUs is also known as the administrative root. In the walkthrough, I create a HIPAA OU and apply my policy to that OU. I then move the account into the OU to have the policy applied. To manage the organization depicted above, I might create OUs for my Corporate IT account and my Development account.

To restrict services in the fourth account to HIPAA Eligible Services and required supporting services, I will show how to create and apply an SCP to the account with the following steps:

  1. Create a JSON document that lists HIPAA Eligible Services and supporting AWS services.
  2. Create an SCP with a JSON document.
  3. Create an OU for the HIPAA account, and move the account into the OU.
  4. Attach the SCP to the HIPAA OU.
  5. Verify which SCPs are attached to the HIPAA OU.
  6. Detach the default FullAWSAccess SCP from the OU.
  7. Verify SCP enforcement.

How to create and apply an SCP to an account

Let’s walk through the steps to create an SCP and apply it to an account. I can manage my organization by using the Organizations console, AWS CLI, or AWS API from my master account. For the purposes of this post, I will demonstrate the creation and application of an SCP to my account by using the AWS CLI.

1.  Create a JSON document that lists HIPAA Eligible Services and supporting AWS services

Creating an SCP will be familiar if you have experience writing an IAM policy because the grammar in crafting the policy is similar. I will create a JSON document that lists only the services I want to allow in my account, and I will use this JSON document to create my SCP via the command line. The SCP I create from this document allows all actions for all resources of the listed services, effectively turning on only these services in my account. I name the document HIPAAExample.json and save it to the directory from which I will demonstrate the CLI commands.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "dynamodb:*", "rds:*", "ec2:*", "s3:*", "elasticmapreduce:*",
                "glacier:*", "elasticloadbalancing:*", "cloudwatch:*",
                "importexport:*", "cloudformation:*", "redshift:*",
                "iam:*", "health:*", "config:*", "snowball:*",
                "trustedadvisor:*", "kms:*", "apigateway:*",
                "autoscaling:*", "directconnect:*",
                "execute-api:*", "sts:*"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

Note that the SCP includes more than just the HIPAA Eligible Services.

Why include additional supporting services in a HIPAA SCP?

You can use any service in your account, but you can use only HIPAA Eligible Services to store and process PHI. Some services, such as IAM and AWS Key Management Service (KMS), can be used because these services do not directly store or process PHI, but they might still be needed for administrative and security purposes.

To those ends, I include the following supporting services in the SCP to help me with account administration and security:

  • Access controls – I include IAM to ensure that I can manage access to resources in the account. Though Organizations can limit whether a service is available, I still need the granularity of access control that IAM provides.
  • Encryption – I need a way to encrypt the data. The integration of AWS KMS with Amazon Redshift, Amazon RDS, and Amazon Elastic Block Store (Amazon EBS) helps with this security requirement.
  • Auditing – I also need to be able to demonstrate controls in practice, track changes, and discover any malicious activity in my account. You will note that AWS CloudTrail is not included in the SCP, which prohibits any mutating actions against CloudTrail from users within the account. However, when setting up the account, CloudTrail was set up to send logs to a logging account as recommended in AWS Multiple Account Security Strategy. The logs do not reside in the account, and no one has privileges to change the trail including root or administrators, which helps ensure the protection of the API logging of the account. This highlights how SCPs can be used to secure services in an account.
  • Automation – Automation can help me with my security controls as shown in How to Translate HIPAA Controls to AWS CloudFormation Templates: Part 3 of the Automating HIPAA Compliance Series; therefore, I consider including AWS CloudFormation as a way to ensure that applications deployed in the account adhere to my security and compliance policies. Auto Scaling also is an important service to include to help me scale to meet demand and control cost.
  • Monitoring and support – The remaining services in the SCP such as Amazon CloudWatch are needed to make sure that I can monitor the environment and have visibility into the health of the workloads and applications in my AWS account, helping me maintain operational control. AWS Trusted Advisor is a service that helps to make sure that my cloud environment is well architected.

Now that I have created my JSON document with the services that I will include and explained in detail why I include them, I can create my SCP.

2.  Create an SCP with a JSON document

I will now create the SCP via the CLI with the aws organizations create-policy command. Using the name parameter, I name the SCP and define that I am creating an SCP, both of which are required parameters. I then provide a brief description of the SCP and specify the location of the JSON document I created in Step 1.

aws organizations create-policy --name hipaa-example-policy --type SERVICE_CONTROL_POLICY --description "All HIPAA eligible services plus supporting AWS Services." --content file://./HIPAAExample.json

Output

{
    "policy": {
        "policySummary": {
            "type": "SERVICE_CONTROL_POLICY",
            "arn": "arn:aws:organizations::012345678900:policy/o-kzceys2q4j/SERVICE_CONTROL_POLICY/p-6ldl8bll",
            "name": "hipaa-example-policy",
            "awsManaged": false,
            "id": "p-6ldl8bll",
            "description": "All HIPAA eligible services plus supporting AWS Services."
        }
    }
}
I take note of the policy-id because I need it to attach the SCP to my OU in Step 4. Note: Throughout this post, fictitious placeholder values are shown for the purposes of demonstrating this post’s solution.

3.  Create an OU for the HIPAA account, and move the account into the OU

Grouping accounts by function will make it easier to manage the organization and apply policies across multiple accounts. In this step, I create an OU for the HIPAA account and move the target account into the OU. To create an OU, I need to know the ID for the parent object under which I will be placing the OU. In this case, I will place it under the root and need the ID for the root. To get the root ID, I run the list-roots command.

aws organizations list-roots

Output

{
    "Roots": [
        {
            "PolicyTypes": [
                {
                    "Status": "ENABLED", 
                    "Type": "SERVICE_CONTROL_POLICY"
                }
            ], 
            "Id": "r-rth4", 
            "Arn": "arn:aws:organizations::012345678900:root/o-p9bx61i0h1/r-rth4", 
            "Name": "Root"
        }
    ]
}

With the root ID, I can proceed to create the OU under the root.

aws organizations create-organizational-unit --parent-id r-rth4 --name HIPAA-Accounts

Output

{
    "OrganizationalUnit": {
       "Id": "ou-rth4-ezo5wonz", 
        "Arn": "arn:aws:organizations::012345678900:ou/o-p9bx61i0h1/ou-rth4-ezo5wonz", 
        "Name": "HIPAA-Accounts"
    }
}

I take note of the OU ID in the output because I need it in the next command to move my target account. I will also need the root ID in the command because I am moving the target account from the root into the OU.

aws organizations move-account --account-id 098765432110 --source-parent-id r-rth4 --destination-parent-id ou-rth4-ezo5wonz

No Output

 

4.  Attach the SCP to the HIPAA OU

Even though you may have enabled All Features in your organization, you still need to enable SCPs at the root level of the organization to attach SCPs to objects. To do this in my case, I will run the enable-policy-type command and provide the root ID.

aws organizations enable-policy-type --root-id r-rth4 --policy-type SERVICE_CONTROL_POLICY

Output

{
    "Root": {
        "PolicyTypes": [], 
        "Id": "r-rth4", 
        "Arn": "arn:aws:organizations::012345678900:root/o-p9bx61i0h1/r-rth4", 
        "Name": "Root"
    }
}

Now, I will attach the SCP to the OU by using the aws organizations attach-policy command. I must include the target-id, which is the OU ID noted in the previous step, and the policy-id from the output of the command in Step 2.

aws organizations attach-policy --target-id ou-rth4-ezo5wonz --policy-id p-6ldl8bll

No Output

 

5.  Verify which SCPs are attached to the HIPAA OU

I will now verify which SCPs are attached to the HIPAA OU by using the aws organizations list-policies-for-target command. I must provide the OU ID with the target-id parameter and then filter for the SERVICE_CONTROL_POLICY type.

aws organizations list-policies-for-target --target-id ou-rth4-ezo5wonz --filter SERVICE_CONTROL_POLICY

Output

{
    "policies": [
        {
            "awsManaged": false,
            "arn": "arn:aws:organizations::012345678900:policy/o-kzceys2q4j/SERVICE_CONTROL_POLICY/p-6ldl8bll",
            "id": "p-6ldl8bll",
            "description": "All HIPAA eligible services plus supporting AWS Services.",
            "name": "hipaa-example-policy",
            "type": "SERVICE_CONTROL_POLICY"
        },
        {
            "awsManaged": true,
            "arn": "arn:aws:organizations::aws:policy/SERVICE_CONTROL_POLICY/p-FullAWSAccess",
            "id": "p-FullAWSAccess",
            "description": "Allows access to every operation",
            "name": "FullAWSAccess",
            "type": "SERVICE_CONTROL_POLICY"
        }
    ]
}

As the output shows, two SCPs are attached to this account. I want to detach the FullAWSAccess SCP so that the HIPAA SCP is properly in effect. The FullAWSAccess SCP is an Allow SCP that allows all AWS services. If I were to leave the default FullAWSAccess SCP in place, it would grant access to services I do not want to allow in my account. Detaching the FullAWSAccess SCP means that only the services I allow in the hipaa-example-policy are allowed in my account. Note that if I were to create a Deny SCP, the SCP would take precedence over an Allow SCP.

6.  Detach the default FullAWSAccess SCP from the OU

Before detaching the default FullAWSAccess SCP from my account, I run the aws workspaces describe-workspaces call from the Amazon WorkSpaces API. I am currently not running any WorkSpaces, so the output shows an empty list. However, I will test this again after I detach the FullAWSAccess SCP from my account and am left with only the HIPAA SCP attached to the account.

aws workspaces describe-workspaces

Output

{
    "Workspaces": []
}

In order to detach the FullAWSAccess SCP, I must run the aws organizations detach-policy command, providing it the policy-id and target-id of the OU.

aws organizations detach-policy --policy-id p-FullAWSAccess --target-id ou-rth4-ezo5wonz

No Output

 

If I run the list-policies-for-target command again, I see that only one SCP, the one that allows HIPAA Eligible Services, is attached to the OU, as shown in the following output.

aws organizations list-policies-for-target --target-id ou-rth4-ezo5wonz --filter SERVICE_CONTROL_POLICY

Output

 

{
    "policies": [
        {
            "name": "hipaa-example-policy",
            "arn": "arn:aws:organizations::012345678900:policy/o-kzceys2q4j/SERVICE_CONTROL_POLICY/p-6ldl8bll",
            "description": "All HIPAA eligible services plus supporting AWS Services.",
            "awsManaged": false,
            "id": "p-6ldl8bll",
            "type": "SERVICE_CONTROL_POLICY"
        }
    ]
}

Now I can test and verify the enforcement of this SCP.

7.  Verify SCP enforcement

Previously, the administrator of the account had full access to all AWS services, including Amazon WorkSpaces. His IAM policy for Amazon WorkSpaces allows all actions for Amazon WorkSpaces. However, after I apply the HIPAA SCP to the account, this changes the effect of the IAM policy to deny all actions for Amazon WorkSpaces because it is not an allowed service.

The following screenshot of the IAM policy simulator shows which permissions are set for the administrator after I apply the HIPAA SCP. Also, note that the IAM policy simulator shows that the action is being denied by Organizations. Because the policy simulator is aware of the SCPs attached to an account, it is a good tool to use when troubleshooting or validating an SCP.

If I run the aws workspaces describe-workspaces call again as I did in Step 6, this time I receive an AccessDeniedException error, which validates that the HIPAA SCP is working because Amazon WorkSpaces is not an allowed service in the SCP.

aws workspaces describe-workspaces

Output

An error occurred (AccessDeniedException) when calling the DescribeWorkspaces operation: 
User: arn:aws:iam::098765432110:user/admin is not authorized to perform: workspaces:DescribeWorkspaces 
on resource: arn:aws:workspaces:us-east-1:098765432110:workspace/*

This completes the process of creating and applying an SCP to my account.

Summary

In this blog post, I have shown how to create an SCP and attach it to an OU to restrict an account to HIPAA Eligible Services and additional supporting services. I also showed how to create an OU, move an account into the OU, and then validate the SCP attached to the OU. For more information, see AWS Cloud Computing in Healthcare.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues with implementing this solution, please start a new thread on the IAM forum.

– Aaron

New – Tag EC2 Instances & EBS Volumes on Creation

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-tag-ec2-instances-ebs-volumes-on-creation/

Way back in 2010, we launched Resource Tagging for EC2 instances and other EC2 resources. Since that launch, we have raised the allowable number of tags per resource from 10 to 50, and we have made tags more useful with the introduction of resource groups and a tag editor. Our customers use tags to track ownership, drive their cost accounting processes, implement compliance protocols, and to control access to resources via IAM policies.

The AWS tagging model provides separate functions for resource creation and resource tagging. While this is flexible and has worked well for many of our users, it does result in a small time window where the resources exist in an untagged state. Using two separate functions also means that resource creation can succeed while tagging fails, again leaving resources in an untagged state.
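To make that window concrete, the traditional flow is two separate calls, sketched below with placeholder values; any failure or interruption between the two calls leaves the instance untagged:

# Step 1: create the instance (placeholder AMI ID) and capture its ID
INSTANCE_ID=$(aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro \
    --query 'Instances[0].InstanceId' --output text)

# Step 2: tag it after the fact; if this call fails, the instance stays untagged
aws ec2 create-tags --resources "$INSTANCE_ID" --tags Key=Owner,Value=jbarr Key=CostCenter,Value=115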

Today we are making tagging more flexible and more useful, with four new features:

Tag on Creation – You can now specify tags for EC2 instances and EBS volumes as part of the API call that creates the resources.

Enforced Tag Usage – You can now write IAM policies that mandate the use of specific tags on EC2 instances or EBS volumes.

Resource-Level Permissions – By popular request, the CreateTags and DeleteTags functions now support IAM’s resource-level permissions.

Enforced Volume Encryption – You can now write IAM policies that mandate the use of encryption for newly created EBS volumes.

Tag on Creation
You now have the ability to specify tags for EC2 instances and EBS volumes as part of the API call that creates the resources (if the call creates both instances and volumes, you can specify distinct tags for the instance and for each volume). The resource creation and the tagging are performed atomically; both must succeed in order for the operation (RunInstances, CreateVolume, and other functions that create resources) to succeed. You no longer need to build tagging scripts that run after instances or volumes have been created.

Here’s how you specify tags when you launch an EC2 instance (the CostCenter and SaveSnapshotFlag tags are also set on any EBS volumes created when the instance is launched):
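Here is a rough CLI equivalent of what the console does (the AMI ID is a placeholder), applying the same tags to the instance and to its EBS volumes in a single call:

# Launch an instance and tag the instance and its volumes as part of the same call
aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type m4.large --count 1 \
    --tag-specifications \
        'ResourceType=instance,Tags=[{Key=CostCenter,Value=115},{Key=SaveSnapshotFlag,Value=true}]' \
        'ResourceType=volume,Tags=[{Key=CostCenter,Value=115},{Key=SaveSnapshotFlag,Value=true}]'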

To learn more, read Using Tags.

Resource-Level Permissions
CreateTags and DeleteTags now support IAM’s resource-level permissions, as requested by many customers. This gives you additional control over the tag keys and values on existing resources.

Also, RunInstances and CreateVolume now support additional resource-level permissions. This allows you to exercise control over the users and groups that can tag resources on creation.

To learn more, see Example Policies for Working with the AWS CLI or an AWS SDK.

Enforced Tag Usage
You can now write IAM policies that enforce the use of specific tags. For example, you could write a policy that blocks the deletion of tags named Owner or Account. Or, you could write a “Deny” policy that disallows the creation of new tags for specific existing resources. You could also use an IAM policy to enforce the use of Department and CostCenter tags to help you achieve more accurate cost allocation reporting. In order to implement stronger compliance and security policies, you could also restrict access to DeleteTags if the resource is not tagged with the user’s name. The ability to enforce tag usage gives you precise control over access to resources, ownership, and cost allocation.

Here’s a statement that requires the use of costcenter and stack tags (with values of “115” and “prod,” respectively) for all newly created volumes:

"Statement": [
    {
      "Sid": "AllowCreateTaggedVolumes",
      "Effect": "Allow",
      "Action": "ec2:CreateVolume",
      "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/*",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/costcenter": "115",
          "aws:RequestTag/stack": "prod"
         },
         "ForAllValues:StringEquals": {
             "aws:TagKeys": ["costcenter","stack"]
         }
       }
     },
     {
       "Effect": "Allow",
       "Action": [
         "ec2:CreateTags"
       ],
       "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/*",
       "Condition": {
         "StringEquals": {
             "ec2:CreateAction" : "CreateVolume"
        }
      }
    }
  ]

Enforced Volume Encryption
Using the additional IAM resource-level permissions now supported by RunInstances and CreateVolume, you can now write IAM policies that mandate the use of encryption for any EBS boot or data volumes created. You can use this to comply with regulatory requirements, enforce enterprise security policies, and to protect your data in compliance with applicable auditing requirements.

Here’s a sample statement that you can incorporate into an IAM policy for RunInstances and CreateVolume to enforce EBS volume encryption:

"Statement": [
        {
            "Effect": "Deny",
            "Action": [
                       "ec2:RunInstances",
                       "ec2:CreateVolume"
            ],
            "Resource": [
                "arn:aws:ec2:*:*:volume/*"
            ],
            "Condition": {
                "Bool": {
                    "ec2:Encrypted": "false"
                }
            }
        },
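A volume creation request that satisfies both of the sample policies above would set the Encrypted flag and supply the required tags. Here’s a rough boto3 sketch (the size, Availability Zone, and tag values are placeholders):

import boto3

ec2 = boto3.client('ec2')

# Create an encrypted, tagged volume that satisfies the sample policies
volume = ec2.create_volume(
    AvailabilityZone='us-east-1a',
    Size=100,
    VolumeType='gp2',
    Encrypted=True,
    TagSpecifications=[
        {'ResourceType': 'volume',
         'Tags': [{'Key': 'costcenter', 'Value': '115'},
                  {'Key': 'stack', 'Value': 'prod'}]},
    ])
print(volume['VolumeId'])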

To learn more and to see some sample policies, take a look at Example Policies for Working with the AWS CLI or an AWS SDK and IAM Policies for Amazon EC2.

Available Now
As you can see, the combination of tagging and the new resource-level permissions on the resource creation and tag manipulation functions gives you the ability to track and control access to your EC2 resources.

This new feature is available now in all regions except AWS GovCloud (US) and China (Beijing). You can start using it today from the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, or the AWS APIs.

We are planning to add support for additional EC2 resource types over time; stay tuned for more information!

Jeff;

Amazon EBS Update – New Elastic Volumes Change Everything

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-ebs-update-new-elastic-volumes-change-everything/

It is always interesting to speak with our customers and to learn how the dynamic nature of their business and their applications drives their block storage requirements. These needs change over time, creating the need to modify existing volumes to add capacity or to change performance characteristics. Today’s 24×7 operating model leaves no room for downtime; as a result, customers want to make changes without going offline or otherwise impacting operations.

Over the years, we have introduced new EBS offerings that support an ever-widening set of use cases. For example, we introduced two new volume types in 2016 – Throughput Optimized HDD (st1) and Cold HDD (sc1). Our customers want to use these volume types as storage tiers, modifying the volume type to save money or to change the performance characteristics, without impacting operations.

In other words, our customers want their EBS volumes to be even more elastic!

New Elastic Volumes
Today we are launching a new EBS feature we call Elastic Volumes and making it available for all current-generation EBS volumes attached to current-generation EC2 instances. You can now increase volume size, adjust performance, or change the volume type while the volume is in use. You can continue to use your application while the change takes effect.

This new feature will greatly simplify (or even eliminate) many of your planning, tuning, and space management chores. Instead of a traditional provisioning cycle that can take weeks or months, you can make changes to your storage infrastructure instantaneously, with a simple API call.

You can address the following scenarios (and many more that you can come up with on your own) using Elastic Volumes:

Changing Workloads – You set up your infrastructure in a rush and used the General Purpose SSD volumes for your block storage. After gaining some experience you figure out that the Throughput Optimized volumes are a better fit, and simply change the type of the volume.

Spiking Demand – You are running a relational database on a Provisioned IOPS volume that is provisioned to handle a moderate amount of traffic for most of the month, with a 10x spike during the final three days due to month-end processing. You can use Elastic Volumes to dial up the provisioning in order to handle the spike, and then dial it down afterward.

Increasing Storage – You provisioned a volume for 100 GiB and an alarm goes off indicating that it is now at 90% of capacity. You increase the size of the volume and expand the file system to match, with no downtime, and in a fully automated fashion.

Using Elastic Volumes
You can manage all of this from the AWS Management Console, via API calls, or from the AWS Command Line Interface (CLI).

To make a change from the Console, simply select the volume and choose Modify Volume from the Action menu:

Then make any desired changes to the volume type, size, and Provisioned IOPS (if appropriate). Here I am changing my 75 GiB General Purpose (gp2) volume into a 400 GiB Provisioned IOPS volume, with 20,000 IOPS:

When I click on Modify I confirm my intent, and click on Yes:

The volume’s state reflects the progress of the operation (modifying, optimizing, or complete):

The next step is to expand the file system so that it can take advantage of the additional storage space. To learn how to do that, read Expanding the Storage Space of an EBS Volume on Linux or Expanding the Storage Space of an EBS Volume on Windows. You can expand the file system as soon as the state transitions to optimizing (typically a few seconds after you start the operation). The new configuration is in effect at this point, although optimization may continue for up to 24 hours. Billing for the new configuration begins as soon as the state turns to optimizing (there’s no charge for the modification itself).
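If you prefer to work from code, here’s a rough boto3 sketch of the same change (the volume ID is a placeholder); you can start the modification and then poll its progress:

import boto3

ec2 = boto3.client('ec2')

# Convert a gp2 volume into a 400 GiB io1 volume with 20,000 provisioned IOPS
ec2.modify_volume(
    VolumeId='vol-0123456789abcdef0',
    VolumeType='io1',
    Size=400,
    Iops=20000)

# Track the modification as it moves through modifying -> optimizing -> completed
mods = ec2.describe_volumes_modifications(
    VolumeIds=['vol-0123456789abcdef0'])['VolumesModifications']
print(mods[0]['ModificationState'], mods[0]['Progress'])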

Automatic Elastic Volume Operations
While manual changes are fine, there’s plenty of potential for automation. Here are a couple of ideas:

Right-Sizing – Use a CloudWatch alarm to watch for a volume that is running at or near its IOPS limit. Initiate a workflow and approval process that could provision additional IOPS or change the type of the volume. Or, publish a “free space” metric to CloudWatch and use a similar approval process to resize the volume and the filesystem.
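As a minimal sketch of that second idea (the namespace, metric name, and values shown are illustrative, not an existing convention), an instance could publish its free-space percentage like this:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Publish a custom "free space" metric for one volume (values are illustrative)
cloudwatch.put_metric_data(
    Namespace='Custom/Storage',
    MetricData=[{
        'MetricName': 'FreeSpacePercent',
        'Dimensions': [{'Name': 'VolumeId', 'Value': 'vol-0123456789abcdef0'}],
        'Value': 12.5,
        'Unit': 'Percent'
    }])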

Cost Reduction – Use metrics or schedules to reduce IOPS or to change the type of a volume. Last week I spoke with a security auditor at a university. He collects tens of gigabytes of log files from all over campus each day and retains them for 60 days. Most of the files are never read, and those that are can be scanned at a leisurely pace. They could address this use case by creating a fresh General Purpose volume each day, writing the logs to it at high speed, and then changing the type to Throughput Optimized.

As I mentioned earlier, you need to resize the file system in order to be able to access the newly provisioned space on the volume. In order to show you how to automate this process, my colleagues built a sample that makes use of CloudWatch Events, AWS Lambda, EC2 Systems Manager, and some PowerShell scripting. The rule matches the modifyVolume event emitted by EBS and invokes the logEvents Lambda function:

The function locates the volume, confirms that it is attached to an instance that is managed by EC2 Systems Manager, and then adds a “maintenance tag” to the instance:

import boto3

ec2 = boto3.client('ec2')
ssm = boto3.client('ssm')

# Maintenance tag (the key and value shown are illustrative) added to instances that need a resize
tags = {'Key': 'Maintenance', 'Value': 'Resize'}

def lambda_handler(event, context):
    # The modifyVolume event carries the volume ARN in 'resources'
    volume = event['resources'][0].split('/')[1]
    attach = ec2.describe_volumes(VolumeIds=[volume])['Volumes'][0]['Attachments']
    if len(attach) > 0:
        instance = attach[0]['InstanceId']
        ssm_filter = {'key': 'InstanceIds', 'valueSet': [instance]}
        info = ssm.describe_instance_information(InstanceInformationFilterList=[ssm_filter])['InstanceInformationList']
        if len(info) > 0:
            ec2.create_tags(Resources=[instance], Tags=[tags])
            print(info[0]['PlatformName'] + ' Instance ' + instance + ' has been tagged for maintenance')

Later (either manually or on a schedule), EC2 Systems Manager is used to run a PowerShell script on all of the instances that are tagged for maintenance. The script looks at the instance’s disks and partitions, and resizes all of the drives (filesystems) to the maximum allowable size. Here’s an excerpt:

foreach ($DriveLetter in $DriveLetters) {
    $Error.Clear()
    # Grow each partition (and its file system) to the largest supported size
    $SizeMax = (Get-PartitionSupportedSize -DriveLetter $DriveLetter).SizeMax
    Resize-Partition -DriveLetter $DriveLetter -Size $SizeMax
}

To learn more, take a look at the Elastic Volume Sample.

Available Today
The Elastic Volumes feature is available today and you can start using it right now!

To learn about some important special cases and a few limitations on instance types, read Considerations When Modifying EBS Volumes.

Jeff;

PS – If you would like to design and build cool, game-changing storage services like EBS, take a look at our EBS Jobs page!

 

AWS Announces CISPE Membership and Compliance with First-Ever Code of Conduct for Data Protection in the Cloud

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-announces-cispe-membership-and-compliance-with-first-ever-code-of-conduct-for-data-protection-in-the-cloud/

CISPE logo

I have two exciting announcements today, both showing AWS’s continued commitment to ensuring that customers can comply with EU Data Protection requirements when using our services.

AWS and CISPE

First, I’m pleased to announce AWS’s membership in the Association of Cloud Infrastructure Services Providers in Europe (CISPE).

CISPE is a coalition of about twenty cloud infrastructure (also known as Infrastructure as a Service) providers who offer cloud services to customers in Europe. CISPE was created to promote data security and compliance within the context of cloud infrastructure services. This is a vital undertaking: both customers and providers now understand that cloud infrastructure services are very different from traditional IT services (and even from other cloud services such as Software as a Service). Many entities were treating all cloud services as the same in the context of data protection, which led to confusion for both customers and providers with regard to their individual obligations.

One of CISPE’s key priorities is to ensure customers get what they need from their cloud infrastructure service providers in order to comply with the new EU General Data Protection Regulation (GDPR). With the publication of its Data Protection Code of Conduct for Cloud Infrastructure Services Providers, CISPE has already made significant progress in this space.

AWS and the Code of Conduct

My second announcement is in regard to the CISPE Code of Conduct itself. I’m excited to inform you that today, AWS has declared that Amazon EC2, Amazon Simple Storage Service (Amazon S3), Amazon Relational Database Service (Amazon RDS), AWS Identity and Access Management (IAM), AWS CloudTrail, and Amazon Elastic Block Store (Amazon EBS) are now fully compliant with the aforementioned CISPE Code of Conduct. This provides our customers with additional assurances that they fully control their data in a safe, secure, and compliant environment when they use AWS. Our compliance with the Code of Conduct adds to the long list of internationally recognized certifications and accreditations AWS already has, including ISO 27001, ISO 27018, ISO 9001, SOC 1, SOC 2, SOC 3, PCI DSS Level 1, and many more.

Additionally, the Code of Conduct is a powerful tool to help our customers who must comply with the EU GDPR.

A few key benefits of the Code of Conduct include:

  • Clarifying who is responsible for what when it comes to data protection: The Code of Conduct explains the role of both the provider and the customer under the GDPR, specifically within the context of cloud infrastructure services.
  • The Code of Conduct sets out what principles providers should adhere to: The Code of Conduct develops key principles within the GDPR about clear actions and commitments that providers should undertake to help customers comply. Customers can rely on these concrete benefits in their own compliance and data protection strategies.
  • The Code of Conduct gives customers the security information they need to make decisions about compliance: The Code of Conduct requires providers to be transparent about the steps they are taking to deliver on their security commitments. To name but a few, these steps involve notification around data breaches, data deletion, and third-party sub-processing, as well as law enforcement and governmental requests. Customers can use this information to fully understand the high levels of security provided.

I’m proud that AWS is now a member of CISPE and that we’ve played a part in the development of the Code of Conduct. Due to the very specific considerations that apply to cloud infrastructure services, and given the general lack of understanding of how cloud infrastructure services actually work, there is a clear need for an association such as CISPE. It’s important for AWS to play an active role in CISPE in order to represent the best interests of our customers, particularly when it comes to the EU Data Protection requirements.

AWS has always been committed to enabling our customers to meet their data protection needs. Whether it’s allowing our customers to choose where in the world they wish to store their content, obtaining approval from the EU Data Protection authorities (known as the Article 29 Working Party) of the AWS Data Processing Addendum and Model Clauses to enable transfers of personal data outside Europe, or simply being transparent about the way our services operate, we work hard to be market leaders in the area of security, compliance, and data protection.

Our decision to participate in CISPE and its Code of Conduct sends a clear message to our customers that we continue to take data protection very seriously.

– Steve

AWS Online Tech Talks – February 2017

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/aws-blog-february-2017-online-techtalks-series/

The New Year is underway, so there is no better time to dive into learning more about the latest AWS services. Each month, we have a series of webinars targeting best practices and new service features in the AWS Cloud.

 

February Online Tech Talks (formerly known as Monthly Webinar Series)

I am excited to share the webinar schedule for the month of February. Remember, all webinars are free, but they may fill up quickly, so be sure to register ahead of time. Webinars are typically one hour in length, and all scheduled times are in the Pacific Time (PT) zone.

 

Webinars featured this month are as follows:

Tuesday, February 14

Mobile

10:30 AM – 11:30 AM: Test your Android App with Espresso and AWS Device Farm

 

Wednesday, February 15

Big Data

9:00 AM – 10:00 AM: Amazon Elasticsearch Service with Elasticsearch 5 and Kibana 5

Mobile

12:00 Noon – 1:00 PM: Deep Dive on AWS Mobile Hub for Enterprise Mobile Applications

 

Thursday, February 16

Security

9:00 AM – 10:00 AM: DNS DDoS mitigation using Amazon Route 53 and AWS Shield

 

Tuesday, February 21

Storage

9:00 AM – 10:00 AM: Best Practices for NoSQL Workloads on Amazon EC2 and Amazon EBS

Databases

10:30 AM – 11:30 AM: Consolidate MySQL Shards Into Amazon Aurora Using AWS Database Migration Service

IoT

12:00 Noon – 1:00 PM: Getting Started with AWS IoT

 

Wednesday, February 22

IoT

10:30 AM – 11:30 AM: Best Practices with IoT Security

Databases

12:00 Noon – 1:00 PM: Migrate from SQL Server or Oracle into Amazon Aurora using AWS Database Migration Service

 

Thursday, February 23

Enterprise

8:00 AM – 9:00 AM: How to Prepare for AWS Certification and Advance your Career

Storage

10:30 AM – 11:30 AM: Deep Dive on Elastic File System

12:00 Noon – 1:00 PM: Optimize MySQL Workloads with Amazon Elastic Block Store

 

Friday, February 24

Big Data

9:00 AM – 10:00 AM: Deep Dive of Flink & Spark on Amazon EMR

10:30 AM – 11:30 AM: Deep Dive on Amazon Redshift

 

The AWS Online Tech Talks series covers a broad range of topics at varying technical levels. These technical sessions are led by AWS solutions architects and engineers and feature live demonstrations & customer examples. You can check out the AWS online series here and the AWS on-demand webinar series on the AWS YouTube channel.

Now Open – AWS London Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-london-region/

Last week we launched our 15th AWS Region and today we are launching our 16th. We have expanded the AWS footprint into the United Kingdom with a new Region in London, our third in Europe. AWS customers can use the new London Region to better serve end-users in the United Kingdom and can also use it to store data in the UK.

The Details
The new London Region provides a broad suite of AWS services including Amazon CloudWatch, Amazon DynamoDB, Amazon ECS, Amazon ElastiCache, Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon EMR, Amazon Glacier, Amazon Kinesis Streams, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon Virtual Private Cloud, Auto Scaling, AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Elastic Beanstalk, AWS Snowball, AWS Snowmobile, AWS Key Management Service (KMS), AWS Marketplace, AWS OpsWorks, AWS Personal Health Dashboard, AWS Shield Standard, AWS Storage Gateway, AWS Support API, Elastic Load Balancing, VM Import/Export, Amazon CloudFront, Amazon Route 53, AWS WAF, AWS Trusted Advisor, and AWS Direct Connect (follow the links for pricing and other information).

The London Region supports all sizes of C4, D2, M4, T2, and X1 instances.

Check out the AWS Global Infrastructure page to learn more about current and future AWS Regions.

From Our Customers
Many AWS customers are getting ready to use this new Region. Here’s a very small sample:

Trainline is Europe’s number one independent rail ticket retailer. Every day more than 100,000 people travel using tickets bought from Trainline. Here’s what Mark Holt (CTO of Trainline) shared with us:

We recently completed the migration of 100 percent of our eCommerce infrastructure to AWS and have seen awesome results: improved security, 60 percent less downtime, significant cost savings and incredible improvements in agility. From extensive testing, we know that 0.3s of latency is worth more than 8 million pounds and so, while AWS connectivity is already blazingly fast, we expect that serving our UK customers from UK datacenters should lead to significant top-line benefits.

Kainos Evolve Electronic Medical Records (EMR) automates the creation, capture and handling of medical case notes and operational documents and records, allowing healthcare providers to deliver better patient safety and quality of care for several leading NHS Foundation Trusts and market leading healthcare technology companies.

Travis Perkins, the largest supplier of building materials in the UK, is implementing the biggest systems and business change in its history including the migration of its datacenters to AWS.

Just Eat is the world’s leading marketplace for online food delivery. Using AWS, JustEat has been able to experiment faster and reduce the time to roll out new feature updates.

OakNorth, a new bank focused on lending between £1m-£20m to entrepreneurs and growth businesses, became the UK’s first cloud-based bank in May after several months of working with AWS to drive the development forward with the regulator.

Partners
I’m happy to report that we are already working with a wide variety of consulting, technology, managed service, and Direct Connect partners in the United Kingdom. Here’s a partial list:

  • AWS Premier Consulting Partners – Accenture, Claranet, Cloudreach, CSC, Datapipe, KCOM, Rackspace, and Slalom.
  • AWS Consulting Partners – Attenda, Contino, Deloitte, KPMG, LayerV, Lemongrass, Perfect Image, and Version 1.
  • AWS Technology Partners – Splunk, Sage, Sophos, Trend Micro, and Zerolight.
  • AWS Managed Service Partners – Claranet, Cloudreach, KCOM, and Rackspace.
  • AWS Direct Connect Partners – AT&T, BT, Hutchison Global Communications, Level 3, Redcentric, and Vodafone.

Here are a few examples of what our partners are working on:

KCOM is a professional services provider offering consultancy, architecture, project delivery and managed service capabilities to large UK-based enterprise businesses. The scalability and flexibility of AWS gives them a significant competitive advantage with their enterprise and public sector customers. The new Region will allow KCOM to build innovative solutions for their public sector clients while meeting local regulatory requirements.

Splunk is a member of the AWS Partner Network and a market leader in analyzing machine data to deliver operational intelligence for security, IT, and the business. They use cloud computing and big data analytics to help their customers to embrace digital transformation and continuous innovation. The new Region will provide even more companies with real-time visibility into the operation of their systems and infrastructure.

Redcentric is an NHS Digital-approved N3 Commercial Aggregator. Their work allows health and care providers such as NHS acute, emergency and mental trusts, clinical commissioning groups (CCGs), and the ISV community to connect securely to AWS. The London Region will allow health and care providers to deliver new digital services and to improve outcomes for citizens and patients.

Visit the AWS Partner Network page to read some case studies and to learn how to join.

Compliance & Connectivity
Every AWS Region is designed and built to meet rigorous compliance standards including ISO 27001, ISO 9001, ISO 27017, ISO 27018, SOC 1, SOC 2, SOC 3, PCI DSS Level 1, and many more. Our Cloud Compliance page includes information about these standards, along with those that are specific to the UK, including Cyber Essentials Plus.

The UK Government recognizes that local datacenters from hyperscale public cloud providers can deliver secure solutions for OFFICIAL workloads. In order to meet the special security needs of public sector organizations in the UK with respect to OFFICIAL workloads, we have worked with our Direct Connect Partners to make sure that obligations for connectivity to the Public Services Network (PSN) and N3 can be met.

Use it Today
The London Region is open for business now and you can start using it today! If you need additional information about this Region, please feel free to contact our UK team at [email protected].

Jeff;

Now Open AWS Canada (Central) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-canada-central-region/

We are growing the AWS footprint once again. Our new Canada (Central) Region is now available and you can start using it today. AWS customers in Canada and the northern parts of the United States have fast, low-latency access to the suite of AWS infrastructure services.

The Details
The new Canada (Central) Region supports Amazon Elastic Compute Cloud (EC2) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, Elastic Load Balancing, NAT Gateway, Spot Instances, and Dedicated Hosts.

It also supports Amazon Aurora, AWS Certificate Manager (ACM), AWS CloudFormation, Amazon CloudFront, AWS CloudHSM, AWS CloudTrail, Amazon CloudWatch, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, Amazon ECS, EC2 Container Registry, AWS Elastic Beanstalk, Amazon EMR, Amazon ElastiCache, Amazon Glacier, AWS Identity and Access Management (IAM), AWS Snowball, AWS Key Management Service (KMS), Amazon Kinesis, AWS Marketplace, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, AWS Shield Standard, Amazon Simple Storage Service (S3), Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Workflow Service (SWF), AWS Storage Gateway, AWS Trusted Advisor, VM Import/Export, and AWS WAF.

The Region supports all sizes of C4, D2, M4, T2, and X1 instances.

As part of our on-going focus on making cloud computing available to you in an environmentally friendly fashion, AWS data centers in Canada draw power from a grid that generates 99% of its electricity using hydropower (read about AWS Sustainability to learn more).

Well Connected
After receiving a lot of positive feedback on the network latency metrics that I shared when we launched the AWS Region in Ohio, I am happy to have a new set to share as part of today’s launch (these times represent a lower bound on latency and may change over time).

The first set of metrics are to other Canadian cities:

  • 9 ms to Toronto.
  • 14 ms to Ottawa.
  • 47 ms to Calgary.
  • 49 ms to Edmonton.
  • 60 ms to Vancouver.

The second set are to locations in the US:

  • 9 ms to New York.
  • 19 ms to Chicago.
  • 16 ms to US East (Northern Virginia).
  • 27 ms to US East (Ohio).
  • 75 ms to US West (Oregon).

Canada is also home to CloudFront edge locations in Toronto, Ontario, and Montreal, Quebec.

And Canada Makes 15
Today’s launch brings our global footprint to 15 Regions and 40 Availability Zones, with seven more Availability Zones and three more Regions coming online through the next year. As a reminder, each Region is a physical location where we have two or more Availability Zones or AZs. Each Availability Zone, in turn, consists of one or more data centers, each with redundant power, networking, and connectivity, all housed in separate facilities. Having two or more AZs in each Region gives you the ability to run applications that are more highly available, fault tolerant, and durable than would be the case if you were limited to a single AZ.

For more information about current and future AWS Regions, take a look at the AWS Global Infrastructure page.

Jeff;



New – CloudWatch Events for EBS Snapshots

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-cloudwatch-events-for-ebs-snapshots/

Cloud computing can improve upon traditional IT operations by giving you the power to automate complex high-level operations that were formerly kept in a runbook or passed along as tribal knowledge. Far too many of these involve backup and recovery, especially in smaller and less mature organizations.

Many AWS customers make great use of Amazon Elastic Block Store (EBS) volumes, especially given the ease with which they can generate and manage snapshot backups. They are also copying snapshots between regions on a regular basis for disaster recovery and other operational reasons.

Today we are bringing the benefits of automation to EBS with the addition of new CloudWatch Events for EBS snapshots. You can use these events to add additional automation to your cloud-based backup environment. Here are the new events:

  • createSnapshot – Fired after the status of a newly created EBS snapshot changes to Complete.
  • copySnapshot – Fired after the status of a snapshot copy changes to Complete.
  • shareSnapshot – Fired after a snapshot is shared with your AWS account.

A lot of AWS customers monitor the status of their snapshots by making repeated calls to the DescribeSnapshots function and then stepping through the paginated output in order to locate a specific snapshot. These new events open the door to all sorts of event-driven automation, including the cross-region copy that I mentioned earlier.

Using Snapshot Events
In order to get a better understanding of how this feature helps to automate data backup workflows, I’ll create a workflow that copies a completed snapshot to another region. First, I’ll create an IAM policy that grants appropriate permissions. Then I will incorporate an AWS Lambda function (created by my colleagues) that takes action on the createSnapshot event. Finally, I’ll create a CloudWatch Events rule to capture the event and route it to the Lambda function.

I start out by creating an IAM role (CopySnapshotToRegion) with this policy:

Then I created a new Lambda function (you can find the code at Amazon CloudWatch Events for EBS):
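The full function is available at the link above. The general shape of such a handler is a single copy_snapshot call made from the destination region; here’s a rough sketch (the target region and the event field layout shown are illustrative):

import boto3

TARGET_REGION = 'us-west-2'   # placeholder destination region

def lambda_handler(event, context):
    # The createSnapshot event carries the source region and the snapshot ARN
    source_region = event['region']
    snapshot_id = event['resources'][0].split('/')[1]

    ec2 = boto3.client('ec2', region_name=TARGET_REGION)
    copy = ec2.copy_snapshot(
        SourceRegion=source_region,
        SourceSnapshotId=snapshot_id,
        Description='Automated copy of ' + snapshot_id)
    print('Started copy: ' + copy['SnapshotId'])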

Next, I hopped over to the CloudWatch Events Console, clicked on Create rule, and set it up to handle successful createSnapshot events:
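Here’s a rough sketch of the equivalent rule created programmatically; the event pattern fields are my rendering of the EBS Snapshot Notification format, and the rule name and Lambda ARN are placeholders:

import json
import boto3

events = boto3.client('events')

# Match successful createSnapshot events (pattern fields assumed from the EBS event format)
pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EBS Snapshot Notification"],
    "detail": {"event": ["createSnapshot"], "result": ["succeeded"]}
}

events.put_rule(Name='CopyCompletedSnapshots',
                EventPattern=json.dumps(pattern),
                State='ENABLED')

# Route matching events to the Lambda function (the ARN is a placeholder)
events.put_targets(Rule='CopyCompletedSnapshots',
                   Targets=[{'Id': 'CopySnapshotFunction',
                             'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:CopySnapshotToRegion'}])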

And gave it a name:

To test it out, I create a new EBS snapshot in my source region:

The function was invoked as expected and the snapshot was copied to the target region within seconds (in practice, the copy time will depend on the size of the snapshot):

You can also use these events to make copies of snapshots that are shared with you from other accounts. Many AWS customers partition their usage across multiple accounts for various organizational and security reasons; take a look at our AWS Multiple Account Security Strategy to see our in-depth recommendations in this area. Here are two of the five models included therein:

Available Now
The new events are available in the US East (Northern Virginia), US East (Ohio), US West (Northern California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), EU (Frankfurt), EU (Ireland), and South America (São Paulo) Regions and you can start using them today! Take a look and let me know what you come up with.

Jeff

 

PS – If you are a developer, development manager, or a product manager and would like to build systems like this, check out the EBS Jobs page.

 

New – HIPAA Eligibility for AWS Snowball

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-hipaa-eligibility-for-aws-snowball/

Many of the tools and technologies now in use at your local doctor, dentist, hospital, or other healthcare provider generate massive amounts of sensitive digital data. Other prolific data generators include genomic sequencers and any number of activity and fitness trackers. We all want to benefit from the insights that can be produced by this “data tsunami,” but we also want to be confident that it will be stored in a protected fashion and processed in a responsible manner.

In the United States, protection of healthcare data is governed by HIPAA (the Health Insurance Portability and Accountability Act). Because many AWS customers would like to store and process sensitive health care data on the cloud, we have worked to make multiple AWS services HIPAA-eligible; this means that the services can be used to process Protected Health Information (PHI) and to build applications that are HIPAA-compliant (read HIPAA in the Cloud to learn more about what Cleveland Clinic, Orion Health, Eliza, Philips, and other AWS customers are doing).

Last year I introduced you to AWS Import/Export Snowball. This is an AWS-owned storage appliance that you can use to move large amounts of data (generally 10 terabytes or more) to AWS on a one-time or recurring basis. You simply request a Snowball from the AWS Management Console, connect it to your network when it arrives, copy your data to it, and then send it back to us so that we can copy the data to the AWS storage service of your choice. Snowball encrypts your data using keys that you specify and control.

Today, I am happy to announce that we are adding Snowball to the list of HIPAA-eligible services, joining Amazon DynamoDB, Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), Elastic Load Balancing, Amazon EMR, Amazon Glacier, Amazon Relational Database Service (RDS) (MySQL and Oracle), Amazon Redshift, and Amazon Simple Storage Service (S3). This brings the total number of eligible services to 10 and represents our commitment to make the AWS Cloud a safe, secure, and reliable destination for PHI and many other types of sensitive data. If you already have a Business Associate Agreement (BAA) with AWS, you can begin using Snowball to transfer data into your HIPAA accounts immediately.

With Snowball now on the list of HIPAA-eligible services, AWS customers in the Healthcare and Life Sciences space can quickly move on-premises data to Snowball and then process it using any of the services that I just mentioned. For example, they can use the new HDFS Import feature to migrate an existing on-premises Hadoop cluster to the cloud and analyze it using a scalable EMR cluster. They can also move existing petabyte-scale data (medical images, patient records, and the like) to AWS and store it in S3 or Glacier, both already HIPAA-eligible. These services are proven, easy to use, and offer high data durability at low cost.

Jeff

 

New – Burst Balance Metric for EC2’s General Purpose SSD (gp2) Volumes

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-burst-balance-metric-for-ec2s-general-purpose-ssd-gp2-volumes/

Many AWS customers are getting great results with the General Purpose SSD (gp2) EBS volumes that we launched in mid-2014 (see New SSD-Backed Elastic Block Storage for more information).  If you’re unsure of which volume type to use for your workload, gp2 volumes are the best default choice because they offer balanced price/performance for a wide variety of database, dev and test, and boot volume workloads. One of the more interesting aspects of this volume type is the burst feature.

We designed gp2’s burst feature to suit the I/O patterns of real-world workloads we observed across our customer base. Our data scientists found that volume I/O is extremely bursty, spiking for short periods, with plenty of idle time between bursts. This unpredictable and bursty nature of traffic is why we designed the gp2 burst-bucket to allow even the smallest of volumes to burst up to 3000 IOPS and to replenish their burst bucket during idle times or when performing low levels of I/O. The burst-bucket design allows us to provide consistent and predictable performance for all gp2 users. In practice, very few gp2 volumes ever completely deplete their burst-bucket, and now customers can track their usage patterns and adjust accordingly.

We’ve written extensively about performance optimization across different volume types and the differences between benchmarking and real-world workloads (see I/O Characteristics for more information). As I described in my original post, burst credits accumulate at a rate of 3 per configured GB per second, and each one pays for one read or one write. Each volume can accumulate up to 5.4 million credits, and they can be spent at up to 3,000 per second per volume. To get started, you simply create gp2 volumes of the desired size, launch your application, and your I/O to the volume will proceed as rapidly and efficiently as possible.
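To make those numbers concrete, here’s a quick back-of-the-envelope calculation for a 100 GiB volume (a sketch using the figures above):

# Burst-bucket arithmetic for a gp2 volume, using the figures above
size_gib = 100
baseline_iops = 3 * size_gib              # credits accumulate at 3 per GB per second
bucket_credits = 5400000                  # maximum accumulated credits
burst_iops = 3000                         # maximum spend rate per volume

# Time a full bucket can sustain a 3,000 IOPS burst (net drain rate)
drain_seconds = bucket_credits / (burst_iops - baseline_iops)
print(drain_seconds / 60)                 # about 33 minutes for a 100 GiB volume

# Time to refill an empty bucket at the baseline accumulation rate
refill_seconds = bucket_credits / baseline_iops
print(refill_seconds / 3600)              # 5 hours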

New Metric
Effective today, we are making the Burst Balance metric available for each General Purpose (SSD) volume. You can observe this metric in the CloudWatch Console and you can set up an alarm that will be triggered if the balance becomes too low. The metric is expressed as a percentage; 100% means that the volume has accumulated the maximum number of credits.

I launched a c4.8xlarge instance and attached a 100 GB volume to it:

Then I created an alarm to let me know if the volume’s burst balance went below 40% (in a real-world scenario you might want to set this considerably lower, but I was impatient and it takes a fair amount of time to drain the balance):
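Here’s roughly what that alarm looks like when created programmatically (a sketch; the volume ID and SNS topic ARN are placeholders):

import boto3

cloudwatch = boto3.client('cloudwatch')

# Alarm when the volume's burst balance drops below 40%
cloudwatch.put_metric_alarm(
    AlarmName='gp2-burst-balance-low',
    Namespace='AWS/EBS',
    MetricName='BurstBalance',
    Dimensions=[{'Name': 'VolumeId', 'Value': 'vol-0123456789abcdef0'}],
    Statistic='Average',
    Period=300,
    EvaluationPeriods=1,
    Threshold=40.0,
    ComparisonOperator='LessThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:burst-balance-alerts'])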

I confirmed my SNS subscription, and then ran fio to generate a load:

$ sudo fio --filename=/dev/sdb --rw=randread --bs=16k --runtime=2400 --time_based=1 \
  --iodepth=32 --ioengine=libaio --direct=1  --name=gp2-16kb-burst-bucket-test

Then I watched as the balance declined:

As expected, I received a notification email:

In a production scenario, I could choose to increase the size of the volume, fine-tune my application’s I/O behavior, or simply note for the record that I was making good use of the burst-bucket.

After the end of the test, I had lunch and watched the burst balance increase (I used the updated CloudWatch Console this time):

If you’re one of the few customers whose burst-bucket depletes more quickly than you’d like, you can either increase the size of your gp2 volume for more performance or transition to a Provisioned IOPS SSD (io1) volume, which delivers consistent, provisioned performance 99.9% of the time.

Available Now
This feature is available now and you can start using it today in all AWS Commercial Regions at no charge. The usual charges for CloudWatch Alarms will apply.

Jeff

 

PS – If you are a developer, development manager, or a product manager and would like to build systems like this, check out the EBS Jobs page.

Real World AWS Scalability

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/real-world-aws-scalability/

This is a guest post from Linda Hedges, Principal SA, High Performance Computing.

—–

One question we often hear is: “How well will my application scale on AWS?” For HPC workloads that cross multiple nodes, the cluster network is at the heart of scalability concerns. AWS uses advanced Ethernet networking technology which, like all things AWS, is designed for scale, security, high availability, and low cost. This network is exceptional and continues to benefit from Amazon’s rapid pace of development. For real world applications, all but the most demanding customers find that their applications run very well on AWS! Many have speculated that highly-coupled workloads require a name-brand network fabric to achieve good performance. For most applications, this is simply not the case. As with all clusters, the devil is in the details and some applications benefit from cluster tuning. This blog discusses the scalability of a representative, real-world application and provides a few performance tips for achieving excellent application performance using STARCCM+ as an example. For more HPC specific information, please see our website.

TLG Aerospace, a Seattle-based aerospace engineering services company, runs most of their STARCCM+ Computational Fluid Dynamics (CFD) cases on AWS. A detailed case study describing TLG Aerospace’s experience and the results they achieved can be found here. This blog uses one of their CFD cases as an example to understand AWS scalability. By leveraging Amazon EC2 Spot Instances, which allow customers to purchase unused capacity at significantly reduced rates, TLG Aerospace consistently achieves an 80% cost savings compared to their previous cloud and on-premise HPC cluster options. TLG Aerospace experiences solid value, terrific scale-up, and effectively limitless case throughput – all with no queue wait!

HPC applications such as Computational Fluid Dynamics (CFD) depend heavily on the application’s ability to efficiently scale compute tasks in parallel across multiple compute resources. Parallel performance is often evaluated by determining an application’s scale-up. Scale-up is a function of the number of processors used and is defined as the time it takes to complete a run on one processor, divided by the time it takes to complete the same run on the number of processors used for the parallel run.

As an example, consider an application with a time to completion, or turn-around time, of 32 hours when run on one processor. If the same application runs in one hour when run on 32 processors, then the scale-up is 32 hours of time on one processor / 1 hour of time on 32 processors, or equal to 32, for 32 processors. Scaling is considered to be excellent when the scale-up is close to or equal to the number of processors on which the application is run.

If the same application took eight hours to complete on 32 processors, it would have a scale-up of only four: 32 (time on one processor) / 8 (time to complete on 32 processors). A scale-up of four on 32 processors is considered to be poor.
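These definitions are easy to capture in a few lines of code; the sketch below simply restates the two examples above:

# Scale-up and parallel efficiency, as defined above
def scale_up(serial_hours, parallel_hours):
    return serial_hours / parallel_hours

def efficiency(serial_hours, parallel_hours, processors):
    return scale_up(serial_hours, parallel_hours) / processors

# 32-hour serial run finishing in 1 hour on 32 processors: ideal scaling
print(scale_up(32, 1), efficiency(32, 1, 32))    # 32.0 1.0

# The same run finishing in 8 hours on 32 processors: poor scaling
print(scale_up(32, 8), efficiency(32, 8, 32))    # 4.0 0.125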

In addition to characterizing the scale-up of an application, scalability can be further characterized as “strong” or “weak” scaling. Note that the term “weak”, as used here, does not mean inadequate or bad but is a technical term facilitating the description of the type of scaling which is sought. Strong scaling offers a traditional view of application scaling, where a problem size is fixed and spread over an increasing number of processors. As more processors are added to the calculation, good strong scaling means that the time to complete the calculation decreases proportionally with increasing processor count.

In comparison, weak scaling does not fix the problem size used in the evaluation, but purposely increases the problem size as the number of processors also increases. The ratio of the problem size to the number of processors on which the case is run is held constant. For a CFD calculation, problem size most often refers to the size of the grid for a similar configuration.

An application demonstrates good weak scaling when the time to complete the calculation remains constant as the ratio of compute effort to the number of processors is held constant. Weak scaling offers insight into how an application behaves with varying case size.

Figure 1: Strong Scaling Demonstrated for a 16M Cell STARCCM+ CFD Calculation

Scale-up as a function of increasing processor count is shown in Figure 1 for the STARCCM+ case data provided by TLG Aerospace. This is a demonstration of “strong” scalability. The blue line shows what ideal or perfect scalability looks like. The purple triangles show the actual scale-up for the case as a function of increasing processor count. Excellent scaling is seen to well over 400 processors for this modest-sized 16M cell case, as evidenced by the closeness of these two curves. This example was run on Amazon EC2 c3.8xlarge instances, each based on the Intel E5-2680 and providing either 16 cores or 32 hyper-threaded processors.

AWS customers can choose to run their applications on either threads or cores. AWS provides access to the underlying hardware of our servers. For an application like STARCCM+, excellent linear scaling can be seen when using either threads or cores though testing of a specific case and application is always recommended. For this example, threads were chosen as the processing basis. Running on threads offered a few percent performance improvement when compared to running the same case on cores. Note that the number of available cores is equal to half of the number of available threads.

The scalability of real-world problems is directly related to the ratio of the compute-effort per-core to the time required to exchange data across the network. The grid size of a CFD case provides a strong indication of how much computational effort is required for a solution. Thus, larger cases will scale to even greater processor counts than for the modest size case discussed here.

Figure 2: Scale-up and Efficiency as a Function of Cells per Processor

STARCCM+ has been shown to demonstrate exceptional “weak” scaling on AWS. That’s not shown here, though weak scaling is reflected in Figure 2 by plotting the cells per processor on the horizontal axis. The purple line in Figure 2 shows scale-up as a function of grid cells per processor. The vertical axis for scale-up is on the left-hand side of the graph, as indicated by the purple arrow. The green line in Figure 2 shows efficiency as a function of grid cells per processor. The vertical axis for efficiency is shown on the right-hand side of the graph and is indicated with a green arrow. Efficiency is defined as the scale-up divided by the number of processors used in the calculation.

Weak scaling is evidenced by considering the number of grid cells per processor as a measure of compute effort. Holding the grid cells per processor constant while increasing total case size demonstrates weak scaling. Weak scaling is not shown here because only one CFD case is used. Fewer grid cells per processor means reduced computational effort per processor. Maintaining efficiency while reducing cells per processor demonstrates the excellent strong scalability of STARCCM+ on AWS.

Efficiency remains at about 100% between approximately 250,000 grid cells per thread (or processor) and 100,000 grid cells per thread. Efficiency starts to fall off at about 100,000 grid cells per thread. An efficiency of at least 80% is maintained until 25,000 grid cells per thread. Decreasing grid cells per processor leads to decreased efficiency because the total computational effort per processor is reduced. Note that the perceived ability to achieve more than 100% efficiency (here, at about 150,000 cells per thread) is common in scaling studies, is case specific, and often related to smaller effects such as timing variation and memory caching.

Figure 3: Cost per Run Based on Spot Pricing ($0.35 per hour for c3.8xlarge) as a Function of Turn-around Time

Plots of scale-up and efficiency offer an understanding of how a case or application scales. The bottom line, though, is that what really matters to most HPC users is case turn-around time and cost. A plot of turn-around time versus CPU cost for this case is shown in Figure 3. As the number of threads increases, the total turn-around time decreases. But as the number of threads increases, the inefficiency also increases, and increasing inefficiency leads to increased cost. The cost shown is based on the typical Amazon EC2 Spot price for the c3.8xlarge and only includes the computational costs. Small costs will also be incurred for data storage.

Minimum cost and turn-around time were achieved with approximately 100,000 cells per thread. Many users will choose a cell count per thread to achieve the lowest possible cost. Others may choose a cell count per thread to achieve the fastest turn-around time. If a run is desired in one-third the time of the lowest price point, it can be achieved with approximately 25,000 cells per thread. (Note that many users run STARCCM+ with significantly fewer cells per thread than this.) While this increases the compute cost, other concerns, such as license costs or schedules, can be overriding factors. For this 16M cell case, the added inefficiency results in an increase in run price from $3 to $4 for computing. Many find the reduced turn-around time well worth the price of the additional instances.

As with any cluster, good performance requires attention to the details of the cluster set up. While AWS allows for the quick set up and take down of clusters, performance is affected by many of the specifics in that set up. This blog provides some examples.

On AWS, a placement group is a grouping of instances within a single Availability Zone that allows for low latency between the instances. Placement groups are recommended for all applications where low latency is a requirement. A placement group was used to achieve the best performance from STARCCM+. More on placement groups can be found in our docs.
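As a minimal sketch (the group name, AMI ID, and instance count are placeholders), creating a placement group and launching into it looks like this:

import boto3

ec2 = boto3.client('ec2')

# Create a cluster placement group and launch the compute nodes into it
ec2.create_placement_group(GroupName='starccm-cluster', Strategy='cluster')

ec2.run_instances(
    ImageId='ami-0123456789abcdef0',      # placeholder AMI
    InstanceType='c3.8xlarge',
    MinCount=16,
    MaxCount=16,
    Placement={'GroupName': 'starccm-cluster'})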

Amazon Linux is a version of Linux maintained by Amazon. The distribution evolved from Red Hat Linux (RHEL) and is designed to provide a stable, secure, and highly performant environment. Amazon Linux is optimized to run on AWS and offers excellent performance for running HPC applications. For the case presented here, the operating system used was Amazon Linux. Other Linux distributions are also performant. However, for Linux HPC applications it is strongly recommended that you use at least version 3.10 of the Linux kernel, to be sure of using the latest Xen libraries. See our Amazon Linux page to learn more.

Amazon Elastic Block Store (EBS) is a persistent block-level storage device often used for cluster storage on AWS. EBS provides reliable block-level storage volumes that can be attached to (and removed from) an Amazon EC2 instance. A standard EBS General Purpose SSD (gp2) volume is all that is required to meet the needs of STARCCM+ and was used here. Other HPC applications may require faster I/O to prevent data writes from becoming a bottleneck to turn-around speed. For these applications, other storage options exist. A guide to Amazon storage is found here.

As mentioned previously, STARCCM+, like many other CFD solvers, runs well on both threads and cores. Hyper-Threading can improve the performance of some MPI applications depending on the application, the case, and the size of the workload allocated to each thread; it may also slow performance. The one-size-fits-all nature of static cluster compute environments means that most HPC clusters disable hyper-threading. To first order, it is believed that computationally intensive workloads run best on cores while those that are I/O bound may run best on threads. Again, a few percent increase in performance was observed for this case by running with threads. If there is no time to evaluate the effect of hyper-threading on case performance, then it is recommended that hyper-threading be disabled. When hyper-threading is disabled, it is important to bind each process to a designated CPU core. This is called processor or CPU affinity. It almost universally improves performance over unpinned cores for computationally intensive workloads.

Occasionally, an application will include frequent time measurement in their code; perhaps this is done for performance tuning. Under these circumstances, performance can be improved by setting the clock source to the TSC (Time Stamp Counter). This tuning was not required for this application but is mentioned here for completeness.

When evaluating an application, it is always recommended that a meaningful real-world case be used. A case that is too big or too small won’t reflect the performance and scalability achievable in every day operation. The only way to know positively how an application will perform on AWS is to try it!

AWS offers solid strong scaling and exceptional weak scaling. Good performance can be achieved on AWS, for most applications. In addition to low cost and quick turn-around time, important considerations for HPC also include throughput and availability. AWS offers effectively limitless throughput, security, cost-savings, and high-availability making queues a “thing of the past”. A long queue wait makes for a very long case turn-around time, regardless of the scale.

In Case You Missed These: AWS Security Blog Posts from September and October

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/in-case-you-missed-these-aws-security-blog-posts-from-september-and-october/

In case you missed any AWS Security Blog posts from September and October, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from enabling multi-factor authentication on your AWS API calls to using Amazon CloudWatch Events to monitor application health.

October

October 30: Register for and Attend This November 10 Webinar—Introduction to Three AWS Security Services
As part of the AWS Webinar Series, AWS will present Introduction to Three AWS Security Services on Thursday, November 10. This webinar will start at 10:30 A.M. and end at 11:30 A.M. Pacific Time. AWS Solutions Architect Pierre Liddle shows how AWS Identity and Access Management (IAM), AWS Config Rules, and AWS CloudTrail can help you maintain control of your environment. In a live demo, Pierre shows you how to track changes, monitor compliance, and keep an audit record of API requests.

October 26: How to Enable MFA Protection on Your AWS API Calls
Multi-factor authentication (MFA) provides an additional layer of security for sensitive API calls, such as terminating Amazon EC2 instances or deleting important objects stored in an Amazon S3 bucket. In some cases, you may want to require users to authenticate with an MFA code before performing specific API requests, and by using AWS Identity and Access Management (IAM) policies, you can specify which API actions a user is allowed to access. In this blog post, I show how to enable an MFA device for an IAM user and author IAM policies that require MFA to perform certain API actions such as EC2’s TerminateInstances.

October 19: Reserved Seating Now Open for AWS re:Invent 2016 Sessions
Reserved seating is new to re:Invent this year and is now open! Some important things you should know about reserved seating:

  1. All sessions have a predetermined number of seats available and must be reserved ahead of time.
  2. If a session is full, you can join a waitlist.
  3. Waitlisted attendees will receive a seat in the order in which they were added to the waitlist and will be notified via email if and when a seat is reserved.
  4. Only one session can be reserved for any given time slot (in other words, you cannot double-book a time slot on your re:Invent calendar).
  5. Don’t be late! The minute the session begins, if you have not badged in, attendees waiting in line at the door might receive your seat.
  6. Waitlisting will not be supported onsite and will be turned off 7-14 days before the beginning of the conference.

October 17: How to Help Achieve Mobile App Transport Security (ATS) Compliance by Using Amazon CloudFront and AWS Certificate Manager
Web and application users and organizations have expressed a growing desire to conduct most of their HTTP communication securely by using HTTPS. At its 2016 Worldwide Developers Conference, Apple announced that starting in January 2017, apps submitted to its App Store will be required to support App Transport Security (ATS). ATS requires all connections to web services to use HTTPS and TLS version 1.2. In addition, Google has announced that starting in January 2017, new versions of its Chrome web browser will mark HTTP websites as being “not secure.” In this post, I show how you can generate Secure Sockets Layer (SSL) or Transport Layer Security (TLS) certificates by using AWS Certificate Manager (ACM), apply the certificates to your Amazon CloudFront distributions, and deliver your websites and APIs over HTTPS.

October 5: Meet AWS Security Team Members at Grace Hopper 2016
For those of you joining this year’s Grace Hopper Celebration of Women in Computing in Houston, you may already know the conference will have a number of security-specific sessions. A group of women from AWS Security will be at the conference, and we would love to meet you to talk about your cloud security and compliance questions. Are you a student, an IT security veteran, or an experienced techie looking to move into security? Make sure to find us to talk about career opportunities.

September

September 29: How to Create a Custom AMI with Encrypted Amazon EBS Snapshots and Share It with Other Accounts and Regions
An Amazon Machine Image (AMI) provides the information required to launch an instance (a virtual server) in your AWS environment. You can launch an instance from a public AMI, customize the instance to meet your security and business needs, and save configurations as a custom AMI. With the recent release of the ability to copy encrypted Amazon Elastic Block Store (Amazon EBS) snapshots between accounts, you now can create AMIs with encrypted snapshots by using AWS Key Management Service (KMS) and make your AMIs available to users across accounts and regions. This allows you to create your AMIs with required hardening and configurations, launch consistent instances globally based on the custom AMI, and increase performance and availability by distributing your workload while meeting your security and compliance requirements to protect your data.
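
A rough sketch of the snapshot side of that workflow from the CLI (the snapshot IDs, KMS key ARN, and target account ID are all placeholders, and the target account must also be granted access to the KMS key):

# Make an encrypted copy of a snapshot under a customer-managed KMS key (example IDs throughout)
$ aws ec2 copy-snapshot --source-region us-east-1 \
    --source-snapshot-id snap-1a2b3c4d \
    --encrypted --kms-key-id arn:aws:kms:us-east-1:123456789012:key/example-key-id \
    --description "Encrypted copy for sharing"

# Allow another account to create volumes from the encrypted copy
$ aws ec2 modify-snapshot-attribute --snapshot-id snap-5e6f7a8b \
    --attribute createVolumePermission --operation-type add --user-ids 111122223333

The post covers the remaining steps, including registering the AMI and making it available across accounts and regions.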

September 19: 32 Security and Compliance Sessions Now Live in the re:Invent 2016 Session Catalog
AWS re:Invent 2016 begins November 28, and the live session catalog now includes 32 security and compliance sessions. 19 of these sessions are in the Security & Compliance track and 13 are in the re:Source Mini Con for Security Services. All 32 session titles and abstracts are included below.

September 8: Automated Reasoning and Amazon s2n
In June 2015, AWS Chief Information Security Officer Stephen Schmidt introduced AWS’s new Open Source implementation of the SSL/TLS network encryption protocols, Amazon s2n. s2n is a library that has been designed to be small and fast, with the goal of providing you with network encryption that is more easily understood and fully auditable. In the 14 months since that announcement, development on s2n has continued, and we have merged more than 100 pull requests from 15 contributors on GitHub. Those active contributors include members of the Amazon S3, Amazon CloudFront, Elastic Load Balancing, AWS Cryptography Engineering, Kernel and OS, and Automated Reasoning teams, as well as 8 external, non-Amazon Open Source contributors.

September 6: IAM Service Last Accessed Data Now Available for the Asia Pacific (Mumbai) Region
In December, AWS Identity and Access Management (IAM) released service last accessed data, which helps you identify overly permissive policies attached to an IAM entity (a user, group, or role). Today, we have extended service last accessed data to support the recently launched Asia Pacific (Mumbai) Region. With this release, you can now view the date when an IAM entity last accessed an AWS service in this region. You can use this information to identify unnecessary permissions and update policies to remove access to unused services.
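
As a quick sketch of retrieving this data from the CLI (the user ARN is a placeholder), you generate a report and then fetch it once the job completes:

# Start a service last accessed report for an example IAM user; note the returned JobId
$ aws iam generate-service-last-accessed-details \
    --arn arn:aws:iam::123456789012:user/example-user

# Retrieve the report once the job has finished
$ aws iam get-service-last-accessed-details --job-id <JobId from the previous command>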

If you have questions about or issues with implementing the solutions in any of these posts, please start a new thread on the AWS IAM forum.

– Craig

New Utility – Opt-in to Longer Resource IDs Across All Regions

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-utility-opt-in-to-longer-resource-ids-across-all-regions/

Early this year I announced that Longer EC2 Resource IDs are Now Available, and kicked off a transition period that will last until early December 2016. During the transition period, you can opt in to the new resource format on a region-by-region, user-by-user basis. At the conclusion of the transition period, all newly created resources will be assigned 17-character identifiers. Here are some important dates for your calendar:

  • November – Beginning on November 1st, you can use the describe-id-format command to check on the cutover deadline for the regions that are of interest to you (see the example after this list).
  • December – Between December 5th and December 16th, we will be setting individual AWS Regions to use 17-character identifiers by default.
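
For example (us-east-1 below is just an illustrative Region; substitute the Regions you care about), you can check your current defaults and deadlines, and opt the calling user or role in, directly from the CLI:

# Show the current ID format settings, including any announced deadlines, for this Region
$ aws ec2 describe-id-format --region us-east-1

# Opt the calling user or role in to 17-character instance IDs
$ aws ec2 modify-id-format --region us-east-1 --resource instance --use-long-ids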

In order to help you to ensure that your code and your tools can handle the new format, I’d like to personally encourage you to opt in as soon as possible!

We’ve launched a new longer-ID-converter tool that will allow you to opt in, opt out, or simply check the status. If you have already installed the AWS Command Line Interface (CLI), you can simply download the script, make it executable, and then run it:

$ wget https://raw.githubusercontent.com/awslabs/ec2-migrate-longer-id/master/migratelongerids.py
$ chmod +x migratelongerids.py

Here are some of the things that you can do.

Check the status of your account:

$ ./migratelongerids.py --status

Convert account, IAM Roles, and IAM Users to long IDs:

$ ./migratelongerids.py

Revert to short IDs:

$ ./migratelongerids.py --revert

Convert the current User/Role:

$ ./migratelongerids.py --convertself

For more information on this utility, check out the README file. For more information on the move to longer resource IDs, consult the EC2 FAQ.


Jeff;

Now Open – AWS US East (Ohio) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-us-east-ohio-region/

As part of our ongoing plan to expand the AWS footprint, I am happy to announce that our new US East (Ohio) Region is now available. In conjunction with the existing US East (Northern Virginia) Region, AWS customers in the Eastern part of the United States have fast, low-latency access to the suite of AWS infrastructure services.

The Details
The new Ohio Region supports Amazon Elastic Compute Cloud (EC2) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, Elastic Load Balancing, NAT Gateway, Spot Instances, and Dedicated Hosts.

It also supports (deep breath) Amazon API Gateway, Amazon Aurora, AWS Certificate Manager (ACM), AWS CloudFormation, Amazon CloudFront, AWS CloudHSM, Amazon CloudWatch (including CloudWatch Events and CloudWatch Logs), AWS CloudTrail, AWS CodeCommit, AWS CodeDeploy, AWS CodePipeline, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, Amazon EC2 Container Registry, Amazon ECS, Amazon Elastic File System, Amazon ElastiCache, AWS Elastic Beanstalk, Amazon EMR, Amazon Elasticsearch Service, Amazon Glacier, AWS Identity and Access Management (IAM), AWS Import/Export Snowball, AWS Key Management Service (KMS), Amazon Kinesis, AWS Lambda, AWS Marketplace, Mobile Hub, AWS OpsWorks, Amazon Relational Database Service (RDS), Amazon Redshift, Amazon Route 53, Amazon Simple Storage Service (S3), AWS Service Catalog, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), AWS Storage Gateway, Amazon Simple Workflow Service (SWF), AWS Trusted Advisor, VM Import/Export, and AWS WAF.

The Region supports all sizes of C4, D2, I2, M4, R3, T2, and X1 instances. As is the case with all of our newer Regions, instances must be launched within a Virtual Private Cloud (read Virtual Private Clouds for Everyone to learn more).
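
A minimal sketch of launching into the new Region (the AMI ID, subnet ID, and key pair name below are placeholders; the subnet must belong to a VPC in US East (Ohio)):

# Launch a t2.micro into a VPC subnet in US East (Ohio); all IDs are example values
$ aws ec2 run-instances --region us-east-2 \
    --image-id ami-12345678 --instance-type t2.micro \
    --subnet-id subnet-12345678 --key-name example-key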

Well Connected
Here are some round-trip network metrics that you may find interesting (all names are airport codes, as is apparently customary in the networking world; all times are +/- 2 ms):

  • 12 ms to IAD (home of the US East (Northern Virginia) Region).
  • 20 ms to JFK (home to an Internet exchange point).
  • 29 ms to ORD (home to a pair of Direct Connect locations hosted by QTS and Equinix and another exchange point).
  • 91 ms to SFO (home of the US West (Northern California) Region).

With just 12 ms of round-trip latency between US East (Ohio) and US East (Northern Virginia), you can make good use of unique AWS features such as S3 Cross-Region Replication, Cross-Region Read Replicas for Amazon Aurora, Cross-Region Read Replicas for MySQL, and Cross-Region Read Replicas for PostgreSQL. Data transfer between the two Regions is priced at the Inter-AZ price ($0.01 per GB), making your cross-region use cases even more economical.
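
As a rough sketch of one of those patterns (the instance identifiers and account ID are placeholders), a MySQL read replica in the new Region can be created from a source instance in Northern Virginia:

# Create a cross-region MySQL read replica in US East (Ohio) from a source in US East (N. Virginia)
$ aws rds create-db-instance-read-replica --region us-east-2 \
    --db-instance-identifier example-db-replica-ohio \
    --source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:example-db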

Also on the networking front, we have agreed to work together with Ohio State University to provide AWS Direct Connect access to OARnet. This 100-gigabit network connects colleges, schools, medical research hospitals, and state government across Ohio. This connection provides local teachers, students, and researchers with a dedicated, high-speed network connection to AWS.

14 Regions, 38 Availability Zones, and Counting
Today’s launch of this 3-AZ Region expands our global footprint to a grand total of 14 Regions and 38 Availability Zones. We are also getting ready to open up a second AWS Region in China, along with other new AWS Regions in Canada, France, and the UK.

Since there’s been some industry-wide confusion about the difference between Regions and Availability Zones of late, I think it is important to understand the differences between these two terms. Each Region is a physical location where we have one or more Availability Zones or AZs. Each Availability Zone, in turn, consists of one or more data centers, each with redundant power, networking, and connectivity, all housed in separate facilities. Having two or more AZs in each Region gives you the ability to run applications that are more highly available, fault tolerant, and durable than would be the case if you were limited to a single AZ.

Around the office, we sometimes play with analogies that can serve to explain the difference between the two terms. My favorites are “Hotels vs. hotel rooms” and “Apple trees vs. apples.” So, pick your analogy, but be sure that you know what it means!


Jeff;