All posts by Jeff Barr

New – CloudFormation Drift Detection

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-cloudformation-drift-detection/

AWS CloudFormation supports you in your efforts to implement Infrastructure as Code (IaC). You can use a template to define the desired AWS resource configuration, and then use it to launch a CloudFormation stack. The stack contains the set of resources defined in the template, configured as specified. When you need to make a change to the configuration, you update the template and use a CloudFormation Change Set to apply the change. Your template completely and precisely specifies your infrastructure and you can rest assured that you can use it to create a fresh set of resources at any time.

That’s the ideal case! In reality, many organizations are still working to fully implement IaC. They are educating their staff and adjusting their processes, both of which take some time. During this transition period, they sometimes end up making direct changes to the AWS resources (and their properties) without updating the template. They might make a quick out-of-band fix to change an EC2 instance type, fix an Auto Scaling parameter, or update an IAM permission. These unmanaged configuration changes become problematic when it comes time to start fresh. The configuration of the running stack has drifted away from the template and is no longer properly described by it. In severe cases, the change can even thwart attempts to update or delete the stack.

New Drift Detection
Today we are announcing a powerful new drift detection feature that was designed to address the situation that I described above. After you create a stack from a template, you can detect drift from the Console, CLI, or from your own code. You can detect drift on an entire stack or on a particular resource, and see the results in just a few minutes. You then have the information necessary to update the template or to bring the resource back into compliance, as appropriate.

When you initiate a drift detection check, CloudFormation compares the current stack configuration to the one specified in the template that was used to create or update the stack, reports on any differences, and provides you with detailed information on each one.
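
You can also run the check from your own code. Here is a minimal sketch using boto3; the stack name is a placeholder, and the calls shown (detect_stack_drift, describe_stack_drift_detection_status, and describe_stack_resource_drifts) are the drift-related CloudFormation APIs:

import time

import boto3

cfn = boto3.client("cloudformation")

# Start a drift detection run for the whole stack ("my-stack" is a placeholder).
detection_id = cfn.detect_stack_drift(StackName="my-stack")["StackDriftDetectionId"]

# Poll until the detection run finishes.
while True:
    status = cfn.describe_stack_drift_detection_status(StackDriftDetectionId=detection_id)
    if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
        break
    time.sleep(5)

print("Stack drift status:", status.get("StackDriftStatus"))

# Show the per-resource drift status (IN_SYNC, MODIFIED, DELETED, ...).
for drift in cfn.describe_stack_resource_drifts(StackName="my-stack")["StackResourceDrifts"]:
    print(drift["LogicalResourceId"], drift["StackResourceDriftStatus"])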

We are launching with support for a core set of services, resources, and properties, with plans to add more over time. The initial list of resources spans API Gateway, Auto Scaling, CloudTrail, CloudWatch Events, CloudWatch Logs, DynamoDB, Amazon EC2, Elastic Load Balancing, IAM, AWS IoT, Lambda, Amazon RDS, Route 53, Amazon S3, Amazon SNS, Amazon SQS, and more.

You can perform drift detection on stacks that are in the CREATE_COMPLETE, UPDATE_COMPLETE, UPDATE_ROLLBACK_COMPLETE, and UPDATE_ROLLBACK_FAILED states. Drift detection does not extend to stacks that are nested within the one you check; you can run it on each nested stack separately.

Drift Detection in Action
I tested this feature on the simple stack that I used when I wrote about Provisioned Throughput for Amazon EFS. I simply select the stack and choose Detect drift from the Action menu:

I confirm my intent and click Yes, detect:

Drift detection starts right away; I can Close the window while it runs:

After it completes I can see that the Drift status of my stack is IN_SYNC:

I can also see the drift status of each checked resource by taking a look at the Resources tab:

Now, I will create a fake change by editing the IAM role, adding a new policy:

I detect drift a second time, and this time I find (no surprise) that my stack has drifted:

I click View details, and I inspect the Resource drift status to learn more:

I can expand the status line for the modified resource to learn more about the drift:

Available Now
This feature is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and South America (São Paulo) Regions. As I noted above, we are launching with support for a strong, initial set of resources, and plan to add many more in the months to come.

Jeff;

 

In the Works – AWS Region in Milan, Italy

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-milan-italy/

Late last month I announced that we are working on an AWS Region in South Africa. Today I would like to let you know that we are also building an AWS Region in Italy and plan to open it up in early 2020.

Milan in 2020
The upcoming Europe (Milan) Region will have three Availability Zones and will be our sixth region in Europe, joining the existing regions in France, Germany, Ireland, the UK, and the new region in Sweden that is set to launch later this year. We currently have 57 Availability Zones in 19 geographic regions worldwide, and another 15 Availability Zones across five regions in the works for launch between now and the first half of 2020 (check out the AWS Global Infrastructure page for more info). Like all of our existing regions, this one is designed and built to meet the most rigorous compliance standards and to provide the highest level of security for AWS customers.

AWS in Italy
AWS customers in Italy have been using our existing regions for more than a decade. Hot startups, enterprises, and public sector organizations in Italy are all running their mission-critical applications on the AWS Cloud. Here’s a tasting menu to give you an idea of what’s already happening:

Ferrero is one of the world’s largest chocolate manufacturers (including the Pocket Coffee that powers my blogging). They have been using AWS since 2010, and use a template-driven model that lets them share features and functions across 250 web sites for 80 countries, giving them the ability to handle traffic surges while reducing costs by 30%.

Mediaset runs multiple broadcast networks and digital channels, as well as a pay-TV service, advertising agencies, and Italian film studio Medusa. The Mediaset Premium Online soccer service now attracts over 600,000 unique monthly visitors, doubling in size since it was launched last year. AWS allows them to meet this demand without adding more hardware, while also scaling up and down on an as-needed basis.

Eataly is the largest online marketplace for Italian food and wine products. After moving from physical stores to the web, they decided to use AWS to ensure scalability. Today, they use a wide range of AWS services, deliver 1.5 to 3 million page views daily, and handle holiday peaks ranging from 100 to 1000 orders per day.

Vodafone Italy has more than 30 million customers for their mobile services. They used AWS to power a new pay-as-you-go service to allow mobile customers to add credit to their accounts, building the service from scratch to be PCI DSS Level 1 compliant and to scale rapidly, all in just 3 months, and with a 30% reduction in capital expenses.

The European Space Agency (ESA) Centre for Earth Observation in Frascati, Italy runs the Data User Element (DUE) program. Although much of the work takes place in Earth-orbiting satellites, the program also takes advantage of EC2 and S3, storing up to 30 terabytes of images and observations at peak times and making them available to a 50,000-person user community.

The new region will give these customers (and many others) a new option with even lower latency for their local customers, and will also open the door to applications that must comply with strict data sovereignty requirements.

Investing in Italy’s Future
The upcoming Europe (Milan) Region is just one step along a long path! Back in 2012 we launched the first Point of Presence (PoP) in Milan and now use it to deliver Amazon CloudFront, Amazon Route 53, AWS Shield, and AWS WAF services to Italy, sharing the load with a PoP in Palermo that we launched in 2017. In 2016 we acquired Asti-based NICE Software (read Amazon Web Services to Acquire NICE).

We are also working to help prepare developers in Italy for the digital future, with programs like AWS Educate, AWS Academy, and AWS Activate. Dozens of universities and business schools across Italy are already participating in our educational programs, as are a plethora of startups and accelerators.

Stay Tuned
I’ll be sure to share additional news about this and other upcoming AWS regions as soon as I have it, so stay tuned!

Jeff;

 

AWS GovCloud (US-East) Now Open

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-govcloud-us-east-now-open/

Last year I told you that we were working on AWS GovCloud (US-East), an eastern US companion to the existing AWS GovCloud (US-West) Region that we launched in 2011. The new region is now open and ready to serve the needs of federal, state, and local government agencies, the IT contractors that serve them, and customers with regulated workloads. It offers added redundancy, data durability, and resiliency, and also provides additional options for disaster recovery. This is an isolated AWS region, subject to FedRAMP High and Moderate baselines, operated by US citizens on US soil. It is accessible only to vetted US entities and root account holders, who must confirm that they are US Persons (citizens or permanent residents) in order to gain access. You can read Achieve FedRAMP High Compliance in the AWS GovCloud (US) Region to learn more.

AWS GovCloud (US) gives vetted government customers and regulated industry customers and their partners the flexibility to architect secure cloud solutions that comply with: the FedRAMP High baseline, the DOJ’s Criminal Justice Information Systems (CJIS) Security Policy, U.S. International Traffic in Arms Regulations (ITAR), Export Administration Regulations (EAR), Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG) for Impact Levels 2, 4 and 5, FIPS 140-2, IRS-1075, and other compliance regimes.

Lots of Services
Applications running in this region can make use of Auto Scaling (EC2 and Application), AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, AWS Elastic Beanstalk, Amazon Elastic Block Store (EBS), Amazon ElastiCache, Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon ECS, Elastic Load Balancing (Application, Network, and Classic), Amazon EMR, Amazon Elasticsearch Service, Amazon Glacier, AWS Identity and Access Management (IAM) (including Access Key Last Used), Amazon Inspector, AWS Key Management Service (KMS), Amazon Kinesis Data Streams, AWS Lambda, Amazon Aurora (MySQL and PostgreSQL), Amazon Redshift, Amazon Relational Database Service (RDS), AWS Server Migration Service, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon EC2 Systems Manager (SSM), AWS Trusted Advisor, Amazon Virtual Private Cloud, VM Import, VPN, Amazon API Gateway, AWS Snowball, AWS Snowball Edge, and AWS Step Functions.

Crossing the Regions
Many of the cool cross-region features of AWS can be used to span AWS GovCloud (US-East) and AWS GovCloud (US-West) in order to reduce latency or to increase workload resiliency & availability for mission-critical systems. Here’s what you can do:

We are working to add support for DynamoDB Global Tables and Inter-Region VPC Peering.

AWS GovCloud (US) in Action
Our customers are already hosting many different types of applications in AWS GovCloud (US-West); here’s a small sample:

Enterprise Apps – Oracle, SAP, and Microsoft workloads that were traditionally provisioned for peak demand are now being run on scalable, cloud-based infrastructure.

HPC / Big Data – Organizations with large data sets are spinning up HPC clusters in the cloud in order to extract intelligence and to better serve their constituents.

Storage / DR – The ability to tap in to vast amounts of cost-effective, highly durable cloud storage managed by US Persons supports a variety of DR approaches, from simple backups to hot standby. The addition of a second region allows you to make use of the cross-region features that I mentioned earlier.

Learn More
To learn more, check out the AWS GovCloud (US) page. If you are looking forward to making use of AWS GovCloud (US) and need a partner to help you to make it happen, take a look at the list of AWS GovCloud (US) Partners.

Jeff;

New – Redis 5.0 Compatibility for Amazon ElastiCache

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-redis-5-0-compatibility-for-amazon-elasticache/

Earlier this year we announced Redis 4.0 compatibility for Amazon ElastiCache. In that post, Randall explained how ElastiCache for Redis clusters can scale to terabytes of memory and millions of reads and writes per second! Other recent improvements to Amazon ElastiCache for Redis include:

Read Replica Scaling – Support for adding or removing read replica nodes to a Redis Cluster, along with a reduction of up to 40% in cluster creation time.

PCI DSS Compliance – Certification as Payment Card Industry Data Security Standard (PCI DSS) compliant. This allows you to use ElastiCache for Redis (engine versions 4.0.10 and higher) to build low-latency, high-throughput applications that process sensitive payment card data.

FedRAMP Authorized and Available in AWS GovCloud (US) – United States government customers and their partners can use ElastiCache for Redis to process and store their FedRAMP systems and data for mission-critical, high-impact workloads in the AWS GovCloud (US) Region, and at moderate impact level in the other AWS Regions in the US. To learn more, read the ElastiCache for Redis Compliance documentation.

In-Place Upgrades – Support for upgrading a Redis Cluster to a newer engine version in place while maintaining availability, except for a brief failover period measured in seconds.

New Instance Types – Support for the use of M5 and R5 instances, with significant performance improvements.

5.0 Compatibility
Today I am happy to announce Redis 5.0 compatibility for Amazon ElastiCache for Redis. This version of Redis includes support for a new Streams data type and new commands (ZPOPMIN and ZPOPMAX) for use on Sorted Sets, and also does a better job of defragmenting memory. To learn more, read What’s New in Redis 5?

As usual, you can use the ElastiCache Console, CLI, APIs, or a CloudFormation template to get started. I’ll use the Console, with the following settings:

My cluster is up and running within minutes:

I can also use the in-place upgrade feature that I mentioned earlier on my existing 4.0-compatible cluster. I select the cluster, click Modify, and the 5.0-compatible engine is already selected. I confirm the other settings and click Modify to proceed:
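
The same upgrade can be requested programmatically. Here is a rough sketch using boto3; the replication group ID is a placeholder, and you should confirm the exact 5.0-compatible engine version offered for your cluster before applying it:

import boto3

elasticache = boto3.client("elasticache")

# Ask for an in-place engine upgrade of an existing replication group.
# "r5cluster" and the target version are placeholders.
elasticache.modify_replication_group(
    ReplicationGroupId="r5cluster",
    EngineVersion="5.0.0",
    ApplyImmediately=True,
)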

Streams in Action
The new Stream data type is very powerful! Each Stream has a name, and can be created by simply referencing it as part of an XADD command. Let’s say that I have a long-running process that generates files that need to be scanned and validated. For testing purposes, I can add a bunch of files to a stream named Files from the shell like this:

$  find /usr -name 'a*' -exec redis-cli -h r5cluster.seutl3.ng.0001.use1.cache.amazonaws.com \
    XADD Files \* f {} \;

I can retrieve values starting from the beginning of the stream using the command XREAD BLOCK 1000 STREAMS Files 0:

I can also read the values that are after a given ID:

In most cases, I would be doing the reads and the writes from code rather than from the command line, of course. This is a very simple example of the power of Redis 5 Streams and I am sure that you can do better!
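
As an illustration, here is a minimal sketch of the same stream operations from Python using the redis-py client (version 3.0 or later, which added the stream commands); the endpoint is the one shown above and the file path is just a sample value:

import redis

# Connect to the cluster endpoint shown in the shell example above.
r = redis.Redis(host="r5cluster.seutl3.ng.0001.use1.cache.amazonaws.com", port=6379)

# Append an entry to the Files stream (the stream is created on first use).
r.xadd("Files", {"f": "/usr/share/doc/sample/a-file.txt"})

# Read everything from the beginning of the stream, blocking for up to one
# second if no entries are available yet.
for stream, entries in r.xread({"Files": "0"}, block=1000):
    for entry_id, fields in entries:
        print(entry_id, fields)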

Available Now
You can upgrade existing 4.0-compatible clusters and create new 5.0-compatible clusters today in all commercial AWS regions.

Jeff;

New Lower-Cost, AMD-Powered M5a and R5a EC2 Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-lower-cost-amd-powered-ec2-instances/

From the start, AWS has focused on choice and economy. Driven by a never-ending torrent of customer requests that power our well-known Virtuous Cycle, I think we have delivered on both over the years:

Choice – AWS gives you choices in a wide range of dimensions including locations (18 operational geographic regions, 4 more in the works, and 1 local region), compute models (instances, containers, and serverless), EC2 instance types, relational and NoSQL database choices, development languages, and pricing/purchase models.

Economy – We have reduced prices 67 times so far, and work non-stop to drive down costs and to make AWS an increasingly better value over time. We study usage patterns, identify areas for innovation and improvement, and deploy updates across the entire AWS Cloud on a very regular and frequent basis.

Today I would like to tell you about our latest development, one that provides you with a choice of EC2 instances that are more economical than ever!

Powered by AMD
The newest EC2 instances are powered by custom AMD EPYC processors running at 2.5 GHz and are priced 10% lower than comparable instances. They are designed to be used for workloads that don’t use all of the compute power available to them, and provide you with a new opportunity to optimize your instance mix based on cost and performance.

Here’s what we are launching:

General Purpose – M5a instances are designed for general purpose workloads: web servers, app servers, dev/test environments, and gaming. The M5a instances are available in 6 sizes.

Memory Optimized – R5a instances are designed for memory-intensive workloads: data mining, in-memory analytics, caching, and so forth. The R5a instances are available in 6 sizes, with lower per-GiB memory pricing in comparison to the R5 instances.

The new instances are built on the AWS Nitro System. They can make use of existing HVM AMIs (as is the case with all other recent EC2 instance types, the AMI must include the ENA and NVMe drivers), and can be used in Cluster Placement Groups.
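
As a quick illustration, launching one of the new instances from code is the same as launching any other instance type; in this boto3 sketch the AMI ID is a placeholder for an ENA- and NVMe-enabled HVM image:

import boto3

ec2 = boto3.client("ec2")

# Launch a single m5a.large instance (the AMI ID below is a placeholder).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5a.large",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])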

These new instances should be a great fit for customers who are looking to further cost-optimize their Amazon EC2 compute environment. As always, we recommend that you measure performance and cost on your own workloads when choosing your instance types.

General Purpose Instances
Here are the specs for the M5a instances:

Instance Name    vCPUs    RAM        EBS-Optimized Bandwidth    Network Bandwidth
m5a.large        2        8 GiB      Up to 2.120 Gbps           Up to 10 Gbps
m5a.xlarge       4        16 GiB     Up to 2.120 Gbps           Up to 10 Gbps
m5a.2xlarge      8        32 GiB     Up to 2.120 Gbps           Up to 10 Gbps
m5a.4xlarge      16       64 GiB     2.120 Gbps                 Up to 10 Gbps
m5a.12xlarge     48       192 GiB    5 Gbps                     10 Gbps
m5a.24xlarge     96       384 GiB    10 Gbps                    20 Gbps

Memory Optimized Instances
Here are the specs for the R5a instances:

Instance Name    vCPUs    RAM        EBS-Optimized Bandwidth    Network Bandwidth
r5a.large        2        16 GiB     Up to 2.120 Gbps           Up to 10 Gbps
r5a.xlarge       4        32 GiB     Up to 2.120 Gbps           Up to 10 Gbps
r5a.2xlarge      8        64 GiB     Up to 2.120 Gbps           Up to 10 Gbps
r5a.4xlarge      16       128 GiB    2.120 Gbps                 Up to 10 Gbps
r5a.12xlarge     48       384 GiB    5 Gbps                     10 Gbps
r5a.24xlarge     96       768 GiB    10 Gbps                    20 Gbps

Available Now
These instances are available now and you can start using them today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore) Regions in On-Demand, Spot, and Reserved Instance form. Pricing, as I noted earlier, is 10% lower than the equivalent existing instances. To learn more, visit our new AMD Instances page.

Jeff;

PS – We are also working on T3a instances; stay tuned for more info!

 

Join me for the Camp re:Invent Trivia Challenge

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/join-me-for-the-camp-reinvent-trivia-challenge/

With less than 3 weeks to go until AWS re:Invent 2018, my colleagues and I are working harder than ever to produce the best educational event on the planet! With multiple keynotes, well over two thousand sessions, bootcamps, chalk talks, hands-on workshops, labs, and hackathons to choose from, I am confident that you will leave Las Vegas better informed than when you arrived.

Challenge Me
Today I would like to tell you about an opportunity to put your AWS knowledge to use in a new way. Sign up now and join me for the Camp re:Invent Trivia Challenge (7:00 PM on November 28th in the Venetian Theatre). You will have the opportunity to compete against me by answering questions about AWS, to have a lot of fun, and to pick up some of the limited edition Camp re:Invent and Jeff Barr pins. I have no idea what to study or how to prepare, so things could get very interesting really fast.

Come for the Challenge, Stay for the Goodies
By the way, in addition to over 60 AWS pins that you can earn by participating in various events and attending certain sessions, you will be able to get them from our partners and sponsors. You can also trade pins with other re:Invent attendees. Here are just a few of the pins (via the unofficial @reinventParties list) that you can earn, find, or trade:

I will also bring along some of my cute new stickers:

See you in Vegas
I am looking forward to meeting my fans and friends in Las Vegas. I have plenty on my agenda for the week, but I always have time to stop and say hello, so don’t be shy!

Jeff;

AWS Quest 2 – The Road to re:Invent

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-quest-2-the-road-to-reinvent/

The first AWS Quest started in May of this year. As you may recall, my trusty robot companion went to pieces after burying some clues in this blog, the AWS Podcast, and other parts of the AWS site. Thanks to the tireless efforts of devoted puzzle solvers all over the world, all of the puzzles were found, all but one was solved, and we put Ozz back together in an action-packed broadcast on the AWS Twitch channel.

We had so much fun the first time around that we have decided to do it again! Ozz 2.0 is lighter, stronger, faster, cuter, and more mobile than ever. Just like last time, we’ve worked with our friends at Lone Shark Games to design a set of puzzles that will require multiple leaps of logic, group cooperation, and an indefatigable spirit to solve.

Follow The Orange Brick Road
I told Ozz to meet me in Las Vegas for AWS re:Invent, but I didn’t specify the route. Ozz, being adventurous and somewhat devious, decided to follow an orange brick road that heads west from Seattle. From what I can tell, Ozz plans to stop in 15 cities along the way and is looking for souvenirs to bring along to re:Invent.

Ozz will leave Seattle on November 1st after picking up a souvenir from Amazon’s home city. From there, Ozz is off to Sydney, Australia. Each puzzle will launch at aws.amazon.com/awsquest at noon in Ozz’s timezone.

Your job, should you decide to accept it, is to help find and decode the puzzles, and to help Ozz to decide what to bring to re:Invent.

Jeff;

PS – Ozz is looking for some friendly robotic faces along the way. From November 1 to 16, follow @awscloud on Twitter and share a picture of a robot around your city for a chance to get on the phone with me to chat about AWS and the cloud. We’ll also be looking for robots on Instagram, so follow @amazonwebservices there and share your robot pictures for everyone to enjoy. We will DM the winner by December 5, 2018 to coordinate the call. The post must contain #AWSQuest #Promotion, and your profile must be public to be eligible.

AWS Serverless Application Model (SAM) Command Line Interface – Build, Test, and Debug Serverless Apps Locally

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-serverless-application-model-sam-command-line-interface-build-test-and-debug-serverless-apps-locally/

Decades ago, I wrote page after page of code in 6502 assembly language. After assembling and linking the code, I would load it into memory, set breakpoints at strategic locations, and step through to make sure that everything worked as intended. These days, I no longer have the opportunity to write or debug any non-trivial code, so I was a bit apprehensive when it came time to write this blog post (truth be told, I have been procrastinating for several weeks).

SAM CLI
I want to tell you about the new Serverless Application Model (SAM) Command Line Interface, and to gain some confidence in my ability to build something using AWS Lambda as I do so! Let’s review some terms to get started:

AWS SAM, short for Serverless Application Model, is an open source framework you can use to build serverless applications on AWS. It provides a shorthand syntax you can use to describe your application (Lambda functions, API endpoints, DynamoDB tables, and other resources) using a simple YAML template. During deployment, SAM transforms and expands the shorthand SAM syntax into an AWS CloudFormation template. Then, CloudFormation provisions your resources in a reliable and repeatable fashion.

The AWS SAM CLI, formerly known as SAM Local, is a command-line interface that supports building SAM-based applications. It supports local development and testing, and is also an active open source project. The CLI lets you choose between Python, Node, Java, Go, and .NET, and includes a healthy collection of templates to help get you started.

The sam local command in the SAM CLI delivers support for local invocation and testing of Lambda functions and SAM-based serverless applications, while running your function code locally in a Lambda-like execution environment. You can also use the sam local command to generate sample payloads locally, start a local endpoint to test your APIs, or automate testing of your Lambda functions.

Installation and Setup
Before I can show you how to use the SAM CLI, I need to install a couple of packages. The functions provided by sam local make use of Docker, so I need to work in a non-virtualized environment for a change! Here’s an overview of the setup process:

Docker – I install the Community Edition of Docker for Windows (a 512 MB download), and run docker ps to verify that it is working:

Python – I install Python 3.6 and make sure that it is on my Windows PATH:

Visual Studio Code – I install VS Code and the accompanying Python Extension.

AWS CLI – I install the AWS CLI:

And configure my credentials:

SAM – I install the AWS SAM CLI using pip:

Now that I have all of the moving parts installed, I can start to explore SAM.

Using SAM CLI
I create a directory (sam_apps) for my projects, and then I run sam init to create my first project:

This creates a sub-directory (sam-app) with all of the necessary source and configuration files inside:

I create a build directory inside of hello_world, and then I install the packages defined in requirements.txt. The build directory contains the source code and the Python packages that are loaded by SAM Local:

And one final step! I need to copy the source files to the build directory in order to deploy them:

My app (app.py and an empty __init__.py) is ready to go, so I start up a local endpoint:

At this point, the endpoint is listening on port 3000 for an HTTP connection, and a Docker container will launch when the connection is made. The build directory is made available to the container so that the Python packages can be loaded and the code in app.py run.

When I open http://127.0.0.1:3000/hello in my browser, the container image is downloaded if necessary, the code is run, and the output appears in my browser:

Here’s what happens on the other side. You can see all of the important steps here, including the invocation of the code, download of the image, mounting the build directory in the container, and the request logging:

I can modify the code, refresh the browser tab, and the new version is run:

The edit/deploy/test cycle is incredibly fast, and you will be more productive than ever!

There is one really important thing to remember here. The initial app.py file was created in the hello_world directory, and I copied it to the build directory a few steps ago. I can do this deployment step each time, or I can simply decide that the code in the build directory is the real deal and edit it directly. This will affect my source code control plan once I start to build and version my code.

What’s Going On
Now that the sample code is running, let’s take a look at the SAM template (imaginatively called template.yaml). In the interest of space, I’ll skip ahead to the Resources section:

Resources:

    HelloWorldFunction:
        Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
        Properties:
            CodeUri: hello_world/build/
            Handler: app.lambda_handler
            Runtime: python3.6
            Environment: # More info about Env Vars: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#environment-object
                Variables:
                    PARAM1: VALUE
            Events:
                HelloWorld:
                    Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
                    Properties:
                        Path: /hello
                        Method: get

This section defines the HelloWorldFunction, indicates where it can be found (hello_world/build/), how to run it (python3.6), and allows environment variables to be defined and set. Then it indicates that the function can process the HelloWorld event, which is generated by a GET on the indicated path (/hello).
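
For reference, a handler along the lines of the generated sample (which, as you will see in the debugging section below, fetches the caller’s public IP with requests) might look something like this sketch:

import json

import requests


def lambda_handler(event, context):
    # Look up the caller's public IP and return it along with a greeting.
    ip = requests.get("http://checkip.amazonaws.com/").text.strip()
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello world", "location": ip}),
    }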

This template is not reloaded automatically; if I change it I will need to restart SAM Local. I recommend that you spend some time altering the names and paths here and watching the errors that arise. This will give you a good understanding of what is happening behind the scenes, and will improve your productivity later.

The remainder of the template describes the outputs from the template (the API Gateway endpoint, the function’s ARN, and the function’s IAM Role). These values do not affect local execution, but are crucial to a successful cloud deployment.

Outputs:

    HelloWorldApi:
      Description: "API Gateway endpoint URL for Prod stage for Hello World function"
      Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"

    HelloWorldFunction:
      Description: "Hello World Lambda Function ARN"
      Value: !GetAtt HelloWorldFunction.Arn

    HelloWorldFunctionIamRole:
      Description: "Implicit IAM Role created for Hello World function"
      Value: !GetAtt HelloWorldFunctionRole.Arn

You can leave all of these as-is until you have a good understanding of what’s going on.

Debugging with SAM CLI and VS Code
Ok, now let’s get set up to do some interactive debugging! This took me a while to figure out and I hope that you can benefit from my experience. The first step is to install the ptvsd package:

Then I edit requirements.txt to indicate that my app requires ptvsd (I copied the version number from the package name above):

requests==2.18.4
ptvsd==4.1.4

Next, I rerun pip to install this new requirement in my build directory:

Now I need to modify my code so that it can be debugged. I add this code after the existing imports:

import ptvsd
ptvsd.enable_attach(address=('0.0.0.0', 5858), redirect_output=True)
ptvsd.wait_for_attach()

The enable_attach call tells the app that the debugger will attach to it on port 5858; the wait_for_attach call pauses the code until the debugger is attached (you could make this conditional, as shown in the sketch below).
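
One way to make the attach conditional is to gate it on an environment variable; the variable name in this sketch is arbitrary:

import os

import ptvsd

# Only listen for the debugger when explicitly requested, so the same code
# can run unmodified when deployed.
if os.environ.get("ENABLE_DEBUGGER") == "1":
    ptvsd.enable_attach(address=('0.0.0.0', 5858), redirect_output=True)
    ptvsd.wait_for_attach()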

Next, I launch VS Code and select the root folder of my application:

Now I need to configure VS Code for debugging. I select the debug icon, click the white triangle next to DEBUG, and select Add Configuration:

I select the Python configuration, replace the entire contents of the file (launch.json) with the following text, and save the file (File:Save).

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [

        {
            "name": "Debug with SAM CLI (Remote Debug)",
            "type": "python",
            "request": "attach",
            "port": 5858,
            "host":  "localhost",
            "pathMappings": [
                {
                "localRoot": "${workspaceFolder}/hello_world/build",
                "remoteRoot" : "/var/task"
                }
            ]
        }
    ]
}

Now I choose this debug configuration from the DEBUG menu:

Still with me? We’re almost there!

I start SAM Local again, and tell it to listen on the debug port:

I return to VS Code and set a breakpoint (good old F9) in my code:

One thing to remember — be sure to open app.py in the build directory and set the breakpoint there.

Now I return to my web browser and visit the local address (http://127.0.0.1:3000/hello) again. The container starts up to handle the request and it runs app.py. The code runs until it hits the call to wait_for_attach, and now I hit F5 in VS Code to start debugging.

The breakpoint is hit, I single-step across the requests.get call, and inspect the ip variable:

Then I hit F5 to continue, and the web request completes. As you can see, I can use the full power of the VS Code debugger to build and debug my Lambda functions. I’ve barely scratched the surface here, and encourage you to follow along and pick up where I left off. To learn more, read Test Your Serverless Applications Locally Using SAM CLI.

Cloud Deployment
The SAM CLI also helps me to package my finished code, upload it to S3, and run it. I start with an S3 bucket (jbarr-sam) and run sam package. This creates a deployment package and uploads it to S3:

This takes a few seconds. Then I run sam deploy to create a CloudFormation stack:

If the stack already exists, SAM CLI will create a Change Set and use it to update the stack. My stack is ready in a minute or two, and includes the Lambda function, an API Gateway, and all of the supporting resources:

I can locate the API Gateway endpoint in the stack outputs:

And access it with my browser, just like I did when the code was running locally:

I can also access the CloudWatch logs for my stack and function using sam logs:

My SAM apps are now visible in the Lambda Console (this is a relatively new feature):

I can see the template and the app’s resources at a glance:

And I can see the relationship between resources:

There’s also a monitoring dashboard:

I can customize the dashboard by adding an Amazon CloudWatch dashboard to my template (read Managing Applications in the AWS Lambda Console to learn more).

That’s Not All
Believe it or not, I have given you just a taste of what you can do with SAM, SAM CLI, and the sam local command. Here are a couple of other cool things that you should know about:

Local Function Invocation – I can directly invoke Lambda functions:

Sample Event Source Generation – If I am writing Lambda functions that respond to triggers from other AWS services (S3 PUTs and so forth), I can generate sample events and use them to invoke my functions:

In a real-world situation I would redirect the output to a file, make some additional customization if necessary, and then use it to invoke my function.

Cookiecutter Templates – The SAM CLI can use Cookiecutter templates to create projects and we have created several examples to get you started. Take a look at Cookiecutter AWS Sam S3 Rekognition Dynamodb Python and Cookiecutter for AWS SAM and .NET to learn more.

CloudFormation Extensions – AWS SAM extends CloudFormation and lets you benefit from the power of infrastructure as code. You get reliable and repeatable deployments and the power to use the full suite of CloudFormation resource types, intrinsic functions, and other template features.

Built-In Best Practices – In addition to the benefits that come with an infrastructure as code model, you can easily take advantage of other best practices including code reviews, safe deployments through AWS CodePipeline, and tracing using AWS X-Ray.

Deep Integration with Development Tools – You can use AWS SAM with a suite of AWS tools for building serverless applications. You can discover new applications in the AWS Serverless Application Repository. For authoring, testing, and debugging SAM-based serverless applications, you can use the AWS Cloud9 IDE. To build a deployment pipeline for your serverless applications, you can use AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline. You can also use AWS CodeStar to get started with a project structure, code repository, and a CI/CD pipeline that’s automatically configured for you. To deploy your serverless application you can use the AWS SAM Jenkins plugin, and you can use Stackery.io’s toolkit to build production-ready applications.

Check it Out
I hope that you have enjoyed this tour, and that you can make good use of SAM in your next serverless project!

Jeff;

 

In the Works – AWS Region in South Africa

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-south-africa/

Last year we launched new AWS Regions in France and China (Ningxia), and announced that we are working on regions in Bahrain, Hong Kong SAR, Sweden, and a second GovCloud Region in the United States.

South Africa in Early 2020
Today, I am happy to announce that we will be opening an AWS Region in South Africa in the first half of 2020. The new Region will be based in Cape Town, will be comprised of three Availability Zones, and will give AWS customers and partners the ability to run their workloads and store their data in South Africa. The addition of the AWS Africa (Cape Town) Region will also enable organizations to provide lower latency to end users across Sub-Saharan Africa and will enable more African organizations to leverage advanced technologies such as Artificial Intelligence, Machine Learning, Internet of Things (IoT), mobile services, and more to drive innovation.

AWS customers are already making use of 55 Availability Zones across 19 infrastructure regions worldwide. Today’s announcement brings the total number of global regions (operational and in the works) up to 23.

A Growing Presence
The new Region is the latest of a series of investments in South Africa, and is part of our commitment to support South Africa’s transformation. In 2004, Amazon opened a Development Center in Cape Town that focuses on building pioneering networking technologies, next generation software for customer support, and the technology behind Amazon EC2. AWS has also added a number of teams including account managers, customer services reps, partner managers, solutions architects, and more, helping customers of all sizes as they move to the cloud.

In 2015, we continued our expansion, opening an office in Johannesburg, and in 2017 we brought the Amazon Global Network to Africa through AWS Direct Connect. Earlier this year we launched infrastructure on the African continent introducing Amazon CloudFront to South Africa, with two new edge locations in Cape Town and Johannesburg. We also support the growth of technology education with AWS Academy and AWS Educate and have supported the growth of new businesses through AWS Activate in the country for many years.

The addition of the AWS Region in South Africa will help builders and entrepreneurs in enterprises, the public sector, and startups across Sub-Saharan Africa to innovate and grow their organizations.

Talk to Us
As always, we are looking forward to serving new and existing customers in South Africa and working with partners across the region. Of course, the new Region will also be open to existing AWS customers who would like to serve users in South Africa and across the African continent.

To learn more about the AWS South Africa Region feel free to contact our team at [email protected]. If you are interested in joining the team and would like to learn more about AWS positions in South Africa, take a look at the Amazon Jobs site.

Jeff;

Check it Out – New AWS Pricing Calculator for EC2 and EBS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/check-it-out-new-aws-pricing-calculator-for-ec2-and-ebs/

The blog post that we published over a decade ago to launch the Simple Monthly Calculator still shows up on our internal top-10 lists from time to time! Since that post was published, we have extended, redesigned, and even rebuilt the calculator a time or two.

New Calculator
Starting with a blank screen, an empty code repo, and plenty of customer feedback, we have built a brand-new AWS Pricing Calculator. The new calculator is designed to help you estimate and understand your eventual AWS costs. We did our best to avoid excessive jargon and to make the calculations obvious, transparent, and accessible. You can see the options that are available to you, explore the associated costs, and make high-quality data-driven decisions.

We’re starting out with support for EC2 instances, EBS volumes, and a very wide variety of purchasing models, with plans to add support for more services as quickly as possible.

A Quick Tour
The new calculator lives at https://calculator.aws. Each estimate consists of one or more groups, and the first one is created automatically:

Each group has a name, and has pricing for services in a particular AWS region. I click Edit group to change the name and pick a region, and click Apply:

Back at the main page of the calculator, I click Add service and choose to configure some EC2 instances. The group can contain multiple types and configurations of instances; I click Configure to move ahead:

At this point I can make a Quick estimate (the default), or supply more details as part of an Advanced estimate. I’ll start with a Quick estimate:

Here are a couple of things to keep in mind when you make a quick estimate:

Instance Type – I have two options for choosing EC2 instance types; I can enter my resource requirements (vCPU count, memory size, and GPU count) and have the calculator choose the option with the lowest price, or I can pick an EC2 instance type by name.

Pricing Strategy – I can choose to use On-Demand Instances, Convertible Reserved Instances, or Standard Reserved Instances, and can choose payment terms and options for RIs.

EBS Volumes – I can choose the type and size of an EBS volume for the instance. Right now, the calculator allows you to associate one volume with each EC2 instance. If you need more than one, specify the total amount of storage you need across all volumes.

Details – I can expand the Show calculation section to see the math:

After I have made my choices, I click Add to my estimate to move ahead. My selections, along with their costs (annual, upfront, and monthly), are displayed:

I can go back and add another service, or create another group. I’ll add another EC2 instance, using an Advanced estimate this time around. Here’s where I start:

I have very fine-grained control over each aspect of my estimate. For example, I can characterize my workload in great detail. I click on Workload, and have the ability to select the graph that best represents my monthly workload:

I can even model workloads that have two or more independent daily (in this case) spike patterns. As I refine my model, the calculator figures out the most economical combination of On-Demand and Reserved Instances, and shows me the results:

The calculations are driven by the selection in the Pricing strategy. The default value, and the one that I used for the previous screen shot, is Cost optimized. I have other choices as well:

I can also model my data transfer in, out, and to other AWS regions:

Once I am happy with the results I click Add to my estimate, and take a look at my selections and their prices:

I can click Export to capture my estimate in spreadsheet form:

Here’s the data (I hid a few columns for clarity):

As you can see, the new calculator will quickly become a useful part of your planning and decision-making process.

One important thing to keep in mind: your estimates are stored in state that is local to the browser tab, and will be lost if you close the tab. The team is already hard at work on features that will allow you to save and even share your estimates, for launch in early 2019.

Stay Tuned
We will be adding more services and more features to the calculator in the months to come, and I’ll share some updates with you from time to time, either in this blog or via Twitter. If you have ideas, complaints, or other feedback, don’t hesitate to click on the Feedback link at the top of the page.

Jeff;

 

Amazon RDS Update – Console Update, RDS Recommendations, Performance Insights, M5 Instances, MySQL 8, MariaDB 10.3, and More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-rds-update-console-update-rds-recommendations-performance-insights-m5-instances-mysql-8-mariadb-10-3-and-more/

It is time for a quick Amazon RDS update. I’ve got lots of news to share:

Console Update – The RDS Console has a fresh, new look.

RDS Recommendations – You now get recommendations that will help you to configure your database instances per our best practices.

Performance Insights for MySQL – You can peer deep inside of MySQL and understand more about how your queries are processed.

M5 Instances – You can now use MySQL and MariaDB on M5 instances.

MySQL 8.0 – You can now use MySQL 8.0 in production form.

MariaDB 10.3 – You can now use MariaDB 10.3 in production form.

Let’s take a closer look…

Console Update
The RDS Console took on a fresh, new look earlier this year. We made it available to you in preview form during development, and it is now the standard experience for all AWS users. You can see an overview of your RDS resources at a glance, create a new database, access documentation, and more, all from the home page:

You also get direct access to Performance Insights and to the new RDS Recommendations.

RDS Recommendations
We want to make it easy for you to take our best practices into account when you configure your RDS database instances, even as those practices improve. The new RDS Recommendations feature will periodically check your configuration, usage, and performance data and display recommended changes and improvements, focusing on performance, stability, and security. It works with all of the database engines, and is very easy to use. Open the RDS Console and click Recommendations to get started:

I can see all of the recommendations at a glance:

I can open a recommendation to learn more:

I have four options that I can take with respect to this recommendation:

Fix Immediately – I select some database instances and click Apply now.

Fix Later – I select some database instances and click Schedule for the next maintenance window.

Dismiss – I select some database instances and click Dismiss to indicate that I do not want to make any changes, and to acknowledge that I have seen the recommendation.

Defer – If I do nothing, the recommendations remain active and I can revisit them at another time.

Other recommendations may include other options, or might require me to take some other actions. For example, the procedure for enabling encryption depends on the database engine:

RDS Recommendations are available today at no charge in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), and Asia Pacific (Singapore) Regions. We plan to add additional recommendations over time, and also expect to make the recommendations available via an API.

Performance Insights for MySQL
I can now peek inside of MySQL to see which queries, hosts, and users are consuming the most time, and why:

You can identify expensive SQL queries and other bottlenecks with a couple of clicks, looking back across the timeframe of your choice: an hour, a day, a week, or even longer.

This feature was first made available for PostgreSQL (both RDS and Aurora) and is now available for MySQL (again, both RDS and Aurora). To learn more, read Using Amazon RDS Performance Insights.
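
The underlying data is also available through the Performance Insights API. Here is a rough sketch using boto3; the DbiResourceId is a placeholder, and the query asks for average database load grouped by SQL statement over the past hour:

from datetime import datetime, timedelta

import boto3

pi = boto3.client("pi")

now = datetime.utcnow()
result = pi.get_resource_metrics(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKLMNOP",  # placeholder DbiResourceId
    MetricQueries=[{"Metric": "db.load.avg", "GroupBy": {"Group": "db.sql"}}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    PeriodInSeconds=60,
)
for item in result["MetricList"]:
    print(item["Key"], len(item["DataPoints"]))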

M5 Instances
The M5 instances deliver improved price/performance compared to M4 instances, and offer up to 10 Gbps of dedicated network bandwidth for database storage.

You can now launch M5 instances (including the new high-end m5.24xlarge) when using RDS for MySQL and RDS for MariaDB. You can scale up to these new instance types by modifying your existing DB instances:
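
Here is a rough sketch of the equivalent boto3 call; the instance identifier is a placeholder, and you can omit ApplyImmediately to defer the change to the next maintenance window:

import boto3

rds = boto3.client("rds")

# Scale an existing MySQL or MariaDB instance up to an M5 instance class.
rds.modify_db_instance(
    DBInstanceIdentifier="my-mysql-db",   # placeholder
    DBInstanceClass="db.m5.large",
    ApplyImmediately=True,
)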

MySQL 8
Version 8 of MySQL is now available on Amazon RDS. This version of MySQL offers better InnoDB performance, JSON improvements, better GIS support (new spatial datatypes, indexes, and functions), common table expressions to reduce query complexity, window functions, atomic DDLs for faster online schema modification, and much more (read the documentation to learn more).

MariaDB 10.3
Version 10.3 of MariaDB is now available on Amazon RDS. This version of MariaDB includes a new temporal data processing feature, improved Oracle compatibility, invisible columns, performance enhancements including instant ADD COLUMN operations & fast-fail DDL operations, and much more (read the documentation for a detailed list).

Available Now
All of the new features, engines, and instance types are available now and you can start using them today!

Jeff;

 

 

New – Managed Databases for Amazon Lightsail

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-managed-databases-for-amazon-lightsail/

Amazon Lightsail makes it easy for you to get started with AWS. You choose the operating system (and optional application) that you want to run, pick an instance plan, and create an instance, all in a matter of minutes. Lightsail offers low, predictable pricing, with instance plans that include compute power, storage, and data transfer:

Managed Databases
Today we are making Lightsail even more useful by giving you the ability to create a managed database with a couple of clicks. This has been one of our top customer requests and I am happy to be able to share this news.

This feature is going to be of interest to a very wide range of current and future Lightsail users, including students, independent developers, entrepreneurs, and IT managers. We’ve addressed the most common and complex issues that arise when setting up and running a database. As you will soon see, we have simplified and fine-tuned the process of choosing, launching, securing, accessing, monitoring, and maintaining a database!

Each Lightsail database bundle has a fixed, monthly price that includes the database instance, a generous amount of SSD-backed storage, a terabyte or more of data transfer to the Internet and other AWS regions, and automatic backups that give you point-in-time recovery for a 7-day period. You can also create manual database snapshots that are billed separately.

Creating a Managed Database
Let’s walk through the process of creating a managed database and loading an existing MySQL backup into it. I log in to the Lightsail Console and click Databases to get started. Then I click Create database to move forward:

I can see and edit all of the options at a glance. I choose a location, a database engine and version, and a plan, enter a name, and click Create database (all of these options have good defaults; a single click often suffices):

We are launching with support for MySQL 5.6 and 5.7, and will add support for PostgreSQL 9.6 and 10 very soon. The Standard database plan creates a database in one Availability Zone with no redundancy; the High Availability plan also creates a presence in a second AZ, and is recommended for production use.
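
If you prefer code to the console, here is a rough boto3 sketch; the database name matches the one I use below, while the blueprint and bundle IDs are assumptions, so list the valid values first with get_relational_database_blueprints() and get_relational_database_bundles():

import boto3

lightsail = boto3.client("lightsail")

# Create a managed MySQL 5.7 database on a Standard plan. The blueprint and
# bundle IDs are assumptions; query the valid values before using them.
lightsail.create_relational_database(
    RelationalDatabaseName="Database-Oregon-1",
    RelationalDatabaseBlueprintId="mysql_5_7",
    RelationalDatabaseBundleId="micro_1_0",
    MasterDatabaseName="dbmaster",
    MasterUsername="dbmasteruser",
    MasterUserPassword="choose-a-strong-password",
)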

Database creation takes just a few minutes, the status turns to Available, and my database is ready to use:

I click on Database-Oregon-1, and I can see the connection details, and have access to other management information & tools:

I’m ready to connect! I create an SSH connection to my Lightsail instance, ensure that the mysql package is installed, and connect using the information above (read Connecting to Your MySQL Database to learn more):
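
If you would rather connect from code than from the mysql client, here is a minimal sketch using the PyMySQL package (one of several MySQL client libraries); the endpoint, user name, password, and database name are placeholders for the values shown on the database’s Connect tab:

import pymysql

# Connection details come from the Lightsail console; these are placeholders.
connection = pymysql.connect(
    host="your-database-endpoint.us-west-2.rds.amazonaws.com",
    user="dbmasteruser",
    password="your-password",
    database="dbmaster",
)

with connection.cursor() as cursor:
    cursor.execute("SELECT VERSION()")
    print(cursor.fetchone())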

Now I want to import some existing data into my database. Lightsail lets me enable Data import mode in order to defer any backup or maintenance operations:

Enabling data import mode deletes any existing automatic snapshots; you may want to take a manual snapshot before starting your import if you are importing fresh data into an existing database.

I have a large (13 GB), ancient (2013-era) MySQL backup from a long-dead personal project; I download it from S3, uncompress it, and import it:

I can watch the metrics while the import is underway:

After the import is complete I disable data import mode, and I can run queries against my tables:

To learn more, read Importing Data into Your Database.

Lightsail manages all routine database operations. If I make a mistake and mess up my data, I can use the Emergency Restore to create a fresh database instance from an earlier point in time:

I can rewind by up to 7 days, limited to when I last disabled data import mode.

I can also take snapshots, and use them later to create a fresh database instance:

Things to Know
Here are a couple of things to keep in mind when you use this new feature:

Engine Versions – We plan to support the two latest versions of MySQL, and will do the same for other database engines as we make them available.

High Availability – As is always the case for production AWS systems, you should use the High Availability option in order to maintain a database footprint that spans two Availability Zones. You can switch between Standard and High Availability using snapshots.

Scaling Storage – You can scale to a larger database instance by creating and then restoring a snapshot.

Data Transfer – Data transfer to and from Lightsail instances in the same AWS Region does not count against the usage that is included in your plan.

Amazon RDS – This feature shares core technology with Amazon RDS, and benefits from our operational experience with that family of services.

Available Now
Managed databases are available today in all AWS Regions where Lightsail is available:

Jeff;

re:Invent 2018 – 55 Days to Go….

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/reinvent-2018-55-days-to-go/

As I write this, there are just 55 calendar days until AWS re:Invent 2018. My colleagues and I are working flat-out to bring you the best possible learning experience and I want to give you a quick update on a couple of things…

Transportation – Customer Obsession is the first Amazon Leadership Principle and we take your feedback seriously! The re:Invent 2018 campus is even bigger this year, and our transportation system has been tuned and scaled to match. This includes direct shuttle routes from venue to venue so that you don’t spend time waiting at other venues, access to real-time transportation info from within the re:Invent app, and on-site signage. The mobile app will even help you to navigate to your sessions while letting you know if you are on time. If you are feeling more independent and don’t want to ride the shuttles, we’ll have partnerships with ridesharing companies including Lyft and Uber. Visit the re:Invent Transportation page to learn more about our transportation plans, routes, and options.

Reserved Seating – In order to give you as many opportunities to see the technical content that matters the most to you, we are bringing back reserved seating. You will be able to make reservations starting at 10 AM PT on Thursday, October 11, so mark your calendars. Reserving a seat is the best way to ensure that you will get a seat in your favorite session without waiting in a long line, so be sure to arrive at least 10 minutes before the scheduled start. As I have mentioned before, we have already scheduled repeats of the most popular sessions, and made them available for reservation in the Session Catalog. Repeats will take place all week in all re:Invent venues, along with overflow sessions in our Content Hubs (centralized overflow rooms in every venue). We will also stream live content to the Content Hubs as the sessions fill up.

Trivia Night – Please join me at 7:30 PM on Wednesday in the Venetian Theatre for the first-ever Camp re:Invent Trivia Night. Come and test your re:Invent and AWS knowledge to see if you and your team can beat me at trivia (that should not be too difficult). The last person standing gets bragging rights and an awesome prize.

How to re:Invent – Whether you are a first-time attendee or a veteran re:Invent attendee, please take the time to watch our How to re:Invent videos. We want to make sure that you arrive fully prepared, ready to learn about the latest and greatest AWS services, meet your peers and members of the AWS teams, and to walk away with the knowledge and the skills that will help you to succeed in your career.

See you in Vegas!

Jeff;

Saving Koalas Using Genomics Research and Cloud Computing

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/saving-koalas-using-genomics-research-and-cloud-computing/

Today is Save the Koala Day and a perfect time to tell you about some noteworthy and ground-breaking research that was made possible by AWS Research Credits and the AWS Cloud.

Five years ago, a research team led by Dr. Rebecca Johnson (Director of the Australian Museum Research Institute) set out to learn more about koala populations, genetics, and diseases. Because the koala is a biologically unique animal with a highly restricted diet, maintaining a healthy and genetically diverse population is a key element of any conservation plan. In addition to characterizing the genetic diversity of koala populations, the team wanted to strengthen Australia's ability to lead large-scale genome sequencing projects.

Inside the Koala Genome
Last month the team published their results in Nature Genetics. Their paper (Adaptation and Conservation Insights from the Koala Genome) identifies the genomic basis for the koala’s unique biology. Even though I had to look up dozens of concepts as I read the paper, I was able to come away with a decent understanding of what they found. Here’s my lay summary:

Toxic Diet – The eucalyptus leaves favored by koalas contain a myriad of substances that are toxic to other species if ingested. Gene expansions and selection events in genes encoding enzymes with detoxification functions enable koalas to rapidly detoxify these substances, making them able to subsist on a diet favored by no other animal. The genetic repertoire underlying this accelerated metabolism also renders common anti-inflammatory medications and antibiotics ineffective for treating ailing koalas.

Food Choice – Koalas are, as I noted earlier, very picky eaters. Genetically speaking, this comes about because their senses of smell and taste are enhanced, with 6 genes giving them the ability to discriminate between plant metabolites on the basis of smell. The researchers also found that koalas have a gene that helps them to select eucalyptus leaves with a high water content, and another that enhances their ability to perceive bitter and umami flavors.

Reproduction – The researchers identified specific genes that control ovulation and birth. In the interest of frugality, female koalas produce eggs only when needed.

Koala Milk – Newborn koalas are the size of a kidney bean and weigh less than half of a gram! They nurse for about a year, taking milk that changes in composition over time, with a potential genetic correlation. The researchers also identified genes known to have anti-microbial functions.

Immune Systems – The researchers identified genes that formed the basis for resistance, immunity, or susceptibility to certain diseases that affect koalas. They also found evidence of a “genomic invasion” (their words) where the koala retrovirus actually inserts itself into the genome.

Genetic Diversity – The researchers also examined how geological events like habitat barriers and surface temperatures have shaped genetic diversity and population evolution. They found that koalas from some areas had markedly less genetic diversity than those from others, with evidence that allowed them to correlate diversity (or the lack of it) with natural barriers such as the Hunter Valley.

Powered by AWS
Creating a complete genome sequence requires (among many other things) an incredible amount of compute power and a vast amount of storage.

While I don't fully understand the process, I do know that it works on a bottom-up basis. The DNA samples are broken up into manageable pieces, each one containing several tens of thousands of base pairs. A variety of chemicals are applied to cause the different base constituents (A, T, C, or G) to fluoresce, and the resulting emission is captured, measured, and stored. Since this study generated a koala reference genome, the sequencing reads were assembled using an overlap-layout-consensus assembly algorithm known as Falcon, which was run on AWS. The koala genome comes in at 3.42 billion base pairs, slightly larger than the human genome.

I’m happy to report that this groundbreaking work was performed on AWS. The research team used cfnCluster to create multiple clusters, each with 500 to 1000 vCPUs, and running Falcon from Pacific Biosciences. All in all, the team used 3 million EC2 core hours, most of which were EC2 Spot Instances. Having access to flexible, low-cost compute power allowed the bioinformatics team to experiment with the configuration of the Falcon pipeline as they tuned and adapted it to their workload.
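For readers who want a feel for what driving a cluster like that involves, here is a rough sketch of a cfnCluster configuration for a Spot-based compute fleet. Everything below (region, instance types, queue sizes, prices, and IDs) is an illustrative assumption on my part, not the team's actual setup:

# ~/.cfncluster/config (illustrative values only)
[aws]
aws_region_name = ap-southeast-2

[global]
cluster_template = genomics

[cluster genomics]
key_name = my-keypair
vpc_settings = public
master_instance_type = c4.2xlarge
# each c4.8xlarge contributes 36 vCPUs, so about 28 nodes is roughly 1000 vCPUs
compute_instance_type = c4.8xlarge
initial_queue_size = 0
max_queue_size = 28
# run the compute fleet on Spot Instances to keep costs down
cluster_type = spot
spot_price = 0.60

[vpc public]
vpc_id = vpc-0123abcd
master_subnet_id = subnet-0123abcd

Once the cluster is up, a pipeline like Falcon would typically be submitted as batch jobs to the scheduler that cfnCluster installs on the master node.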

We are happy to have done our small part to help with this interesting and valuable research!

Jeff;

Now Available – Amazon EC2 High Memory Instances with 6, 9, and 12 TB of Memory, Perfect for SAP HANA

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-amazon-ec2-high-memory-instances-with-6-9-and-12-tb-of-memory-perfect-for-sap-hana/

The Altair 8800 computer that I built in 1977 had just 4 kilobytes of memory. Today I was able to use an EC2 instance with 12 terabytes (12 tebibytes to be exact) of memory, more than 3 billion times as much!

The new Amazon EC2 High Memory Instances let you take advantage of other AWS services including Amazon Elastic Block Store (EBS), Amazon Simple Storage Service (S3), AWS Identity and Access Management (IAM), Amazon CloudWatch, and AWS Config. They are designed to allow AWS customers to run large-scale SAP HANA installations, and can be used to build production systems that provide enterprise-grade data protection and business continuity.

Here are the specs:

Instance Name    Memory    Logical Processors    Dedicated EBS Bandwidth    Network Bandwidth
u-6tb1.metal     6 TiB     448                   14 Gbps                    25 Gbps
u-9tb1.metal     9 TiB     448                   14 Gbps                    25 Gbps
u-12tb1.metal    12 TiB    448                   14 Gbps                    25 Gbps

Each Logical Processor is a hyperthread on one of the 224 physical CPU cores. All three sizes are powered by the latest generation Intel® Xeon® Platinum 8176M (Skylake) processors running at 2.1 GHz (with Turbo Boost to 3.80 GHz), and are available as EC2 Dedicated Hosts for launch within a new or existing Amazon Virtual Private Cloud (VPC). You can launch them using the AWS Command Line Interface (CLI) or the EC2 API, and manage them there or in the EC2 Console.
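As a rough sketch of that CLI flow (the AMI ID, Availability Zone, key pair, and host ID below are placeholders):

# Allocate a Dedicated Host sized for the 12 TiB instance
aws ec2 allocate-hosts \
    --instance-type u-12tb1.metal \
    --availability-zone us-east-1a \
    --quantity 1

# Launch an instance onto that host, using the Host ID returned above
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type u-12tb1.metal \
    --key-name my-keypair \
    --placement "Tenancy=host,HostId=h-0123456789abcdef0"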

The instances are EBS-Optimized by default, and give you low-latency access to encrypted and unencrypted EBS volumes. You can choose between Provisioned IOPS, General Purpose (SSD), and Streaming Magnetic volumes, and can attach multiple volumes, each with a distinct type and size, to each instance.

SAP HANA in Minutes
The EC2 High Memory instances are certified by SAP for OLTP and OLAP workloads such as S/4HANA, Suite on HANA, BW/4HANA, BW on HANA, and Datamart (see the SAP HANA Hardware Directory for more information).

We ran the SAP Standard Application Benchmark and measured the instances at 480,600 SAPS, making them suitable for very large workloads. Here’s an excerpt from the benchmark:

In anticipation of today’s launch, the EC2 team provisioned a u-12tb1.metal instance for my AWS account and I located it in the Dedicated Hosts section of the EC2 Console:

Following the directions in the SAP HANA on AWS Quick Start, I copy the Host Reservation ID, hop over to the CloudFormation Console and click Create Stack to get started. I choose my template, give my stack a name, and enter all of the necessary parameters, including the ID that I copied, and click Next to proceed:

On the next page I indicate that I want to tag my resources, leave everything else as-is, and click Next:

I review my settings, acknowledge that the stack might create IAM resources, and click Next to create the stack:

The AWS resources are created and SAP HANA is installed, all in less than 40 minutes:

Using an EC2 instance on the public subnet of my VPC, I can access the new instance. Here’s the memory:

And here’s the CPU info:
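Those details can be confirmed with standard Linux tools on the instance, for example:

# total memory, in gibibytes
free -g

# processor model, core count, and logical processor count
lscpu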

I can also run an hdbsql query:

SELECT 
  DISTINCT HOST, CAST(VALUE/1024/1024/1024 AS INTEGER) AS TOTAL_MEMORY_GB 
  FROM SYS.M_MEMORY
  WHERE NAME='SYSTEM_MEMORY_SIZE';
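For reference, a query like this can also be run non-interactively from the shell with hdbsql; the host name, port (30015 is the conventional SQL port for instance 00), and credentials below are assumptions on my part:

hdbsql -n imdbmaster:30015 -u SYSTEM -p '<password>' \
  "SELECT DISTINCT HOST, CAST(VALUE/1024/1024/1024 AS INTEGER) AS TOTAL_MEMORY_GB FROM SYS.M_MEMORY WHERE NAME='SYSTEM_MEMORY_SIZE'"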

Here’s the output, showing that SAP HANA has access to 12 TiB of memory:

Another option is to have the template create a second EC2 instance, this one running Windows on a public subnet, and accessible via RDP:

I could install HANA Studio on this instance and use its visual interface to run my SAP HANA queries.

The Quick Start implementation uses high performance SSD-based EBS storage volumes for all of your data. This gives you the power to switch to a larger instance in minutes without having to migrate any data.

Available Now
Just like the existing SAP-certified X1 and X1e instances, the EC2 High Memory instances are very cost-effective. For example, the effective hourly rate for the All Upfront 3-Year Reservation for a u-12tb1.metal Dedicated Host in the US East (N. Virginia) Region is $30.539 per hour.

These instances are now available in the US East (N. Virginia) and Asia Pacific (Tokyo) Regions as Dedicated Hosts with a 3-year term, and will be available soon in the US West (Oregon), Europe (Ireland), and AWS GovCloud (US) Regions. If you are ready to get started, contact your AWS account team or use the Contact Us page to make a request.

In the Works
We’re not stopping at 12 TiB, and are planning to launch instances with 18 TiB and 24 TiB of memory in 2019.

Jeff;

PS – If you have applications that might need multiple terabytes in the future but can run comfortably in less memory today, be sure to consider the R5, X1, and X1e instances.

 

New – Parallel Query for Amazon Aurora

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-parallel-query-for-amazon-aurora/

Amazon Aurora is a relational database that was designed to take full advantage of the abundance of networking, processing, and storage resources available in the cloud. While maintaining compatibility with MySQL and PostgreSQL on the user-visible side, Aurora makes use of a modern, purpose-built distributed storage system under the covers. Your data is striped across hundreds of storage nodes distributed over three distinct AWS Availability Zones, with two copies per zone, on fast SSD storage. Here’s what this looks like (extracted from Getting Started with Amazon Aurora):

New Parallel Query
When we launched Aurora we also hinted at our plans to apply the same scale-out design principle to other layers of the database stack. Today I would like to tell you about our next step along that path.

Each node in the storage layer pictured above also includes plenty of processing power. Aurora is now able to make great use of that processing power by taking your analytical queries (generally those that process all or a large part of a good-sized table) and running them in parallel across hundreds or thousands of storage nodes, with speed benefits approaching two orders of magnitude. Because this new model reduces network, CPU, and buffer pool contention, you can run a mix of analytical and transactional queries simultaneously on the same table while maintaining high throughput for both types of queries.

The instance class determines the number of parallel queries that can be active at a given time:

  • db.r*.large – 1 concurrent parallel query session
  • db.r*.xlarge – 2 concurrent parallel query sessions
  • db.r*.2xlarge – 4 concurrent parallel query sessions
  • db.r*.4xlarge – 8 concurrent parallel query sessions
  • db.r*.8xlarge – 16 concurrent parallel query sessions
  • db.r4.16xlarge – 16 concurrent parallel query sessions

You can use the aurora_pq parameter to enable and disable the use of parallel queries at the global and the session level.

Parallel queries enhance the performance of over 200 types of single-table predicates and hash joins. The Aurora query optimizer will automatically decide whether to use Parallel Query based on the size of the table and the amount of table data that is already in memory; you can also use the aurora_pq_force session variable to override the optimizer for testing purposes.
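For a longer-lasting, cluster-wide setting, the parameter can live in a custom cluster parameter group. Here's a sketch using the RDS CLI; the group name is a placeholder:

aws rds create-db-cluster-parameter-group \
    --db-cluster-parameter-group-name aurora-pq-enabled \
    --db-parameter-group-family aurora5.6 \
    --description "Aurora MySQL 5.6 with Parallel Query on"

aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name aurora-pq-enabled \
    --parameters "ParameterName=aurora_pq,ParameterValue=ON,ApplyMethod=immediate"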

Parallel Query in Action
You will need to create a fresh cluster in order to make use of the Parallel Query feature. You can create one from scratch, or you can restore a snapshot.
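If you take the snapshot route from the CLI, the restore looks roughly like this; the cluster and snapshot identifiers are placeholders, and the parallelquery engine mode is what marks the new cluster as Parallel Query capable:

aws rds restore-db-cluster-from-snapshot \
    --db-cluster-identifier aurora-pq-test \
    --snapshot-identifier my-100gb-snapshot \
    --engine aurora \
    --engine-mode parallelquery

# add a database instance to the restored cluster
aws rds create-db-instance \
    --db-instance-identifier aurora-pq-test-1 \
    --db-cluster-identifier aurora-pq-test \
    --db-instance-class db.r4.2xlarge \
    --engine aurora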

To create a cluster that supports Parallel Query, I simply choose Provisioned with Aurora parallel query enabled as the Capacity type:

I used the CLI to restore a 100 GB snapshot for testing, and then explored one of the queries from the TPC-H benchmark. Here’s the basic query:

SELECT
  l_orderkey,
  SUM(l_extendedprice * (1-l_discount)) AS revenue,
  o_orderdate,
  o_shippriority

FROM customer, orders, lineitem

WHERE
  c_mktsegment='AUTOMOBILE'
  AND c_custkey = o_custkey
  AND l_orderkey = o_orderkey
  AND o_orderdate < date '1995-03-13'
  AND l_shipdate > date '1995-03-13'

GROUP BY
  l_orderkey,
  o_orderdate,
  o_shippriority

ORDER BY
  revenue DESC,
  o_orderdate LIMIT 15;

The EXPLAIN command shows the query plan, including the use of Parallel Query:

+----+-------------+----------+------+-------------------------------+------+---------+------+-----------+--------------------------------------------------------------------------------------------------------------------------------+
| id | select_type | table    | type | possible_keys                 | key  | key_len | ref  | rows      | Extra                                                                                                                          |
+----+-------------+----------+------+-------------------------------+------+---------+------+-----------+--------------------------------------------------------------------------------------------------------------------------------+
|  1 | SIMPLE      | customer | ALL  | PRIMARY                       | NULL | NULL    | NULL |  14354602 | Using where; Using temporary; Using filesort                                                                                   |
|  1 | SIMPLE      | orders   | ALL  | PRIMARY,o_custkey,o_orderdate | NULL | NULL    | NULL | 154545408 | Using where; Using join buffer (Hash Join Outer table orders); Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra)   |
|  1 | SIMPLE      | lineitem | ALL  | PRIMARY,l_shipdate            | NULL | NULL    | NULL | 606119300 | Using where; Using join buffer (Hash Join Outer table lineitem); Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra) |
+----+-------------+----------+------+-------------------------------+------+---------+------+-----------+--------------------------------------------------------------------------------------------------------------------------------+
3 rows in set (0.01 sec)

Here is the relevant part of the Extra column:

Using parallel query (4 columns, 1 filters, 1 exprs; 0 extra)

The query runs in less than 2 minutes when Parallel Query is used:

+------------+-------------+-------------+----------------+
| l_orderkey | revenue     | o_orderdate | o_shippriority |
+------------+-------------+-------------+----------------+
|   92511430 | 514726.4896 | 1995-03-06  |              0 |
|  593851010 | 475390.6058 | 1994-12-21  |              0 |
|  188390981 | 458617.4703 | 1995-03-11  |              0 |
|  241099140 | 457910.6038 | 1995-03-12  |              0 |
|  520521156 | 457157.6905 | 1995-03-07  |              0 |
|  160196293 | 456996.1155 | 1995-02-13  |              0 |
|  324814597 | 456802.9011 | 1995-03-12  |              0 |
|   81011334 | 455300.0146 | 1995-03-07  |              0 |
|   88281862 | 454961.1142 | 1995-03-03  |              0 |
|   28840519 | 454748.2485 | 1995-03-08  |              0 |
|  113920609 | 453897.2223 | 1995-02-06  |              0 |
|  377389669 | 453438.2989 | 1995-03-07  |              0 |
|  367200517 | 453067.7130 | 1995-02-26  |              0 |
|  232404000 | 452010.6506 | 1995-03-08  |              0 |
|   16384100 | 450935.1906 | 1995-03-02  |              0 |
+------------+-------------+-------------+----------------+
15 rows in set (1 min 53.36 sec)

I can disable Parallel Query for the session (I can use an RDS custom cluster parameter group for a longer-lasting effect):

set SESSION aurora_pq=OFF;

The query runs considerably slower without it:

+------------+-------------+-------------+----------------+
| l_orderkey | o_orderdate | revenue     | o_shippriority |
+------------+-------------+-------------+----------------+
|   92511430 | 1995-03-06  | 514726.4896 |              0 |
...
|   16384100 | 1995-03-02  | 450935.1906 |              0 |
+------------+-------------+-------------+----------------+
15 rows in set (1 hour 25 min 51.89 sec)

This was on a db.r4.2xlarge instance; other instance sizes, data sets, access patterns, and queries will perform differently. I can also override the query optimizer and insist on the use of Parallel Query for testing purposes:

set SESSION aurora_pq_force=ON;

Things to Know
Here are a couple of things to keep in mind when you start to explore Amazon Aurora Parallel Query:

Engine Support – We are launching with support for MySQL 5.6, and are working on support for MySQL 5.7 and PostgreSQL.

Table Formats – The table row format must be COMPACT; partitioned tables are not supported.

Data Types – The TEXT, BLOB, and GEOMETRY data types are not supported.

DDL – The table cannot have any pending fast online DDL operations.

Cost – You can make use of Parallel Query at no extra charge. However, because it accesses storage directly, there is a possibility that your IO costs will increase.

Give it a Shot
This feature is available now and you can start using it today!

Jeff;

 

AWS Data Transfer Price Reductions – Up to 34% (Japan) and 28% (Australia)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-data-transfer-price-reductions-up-to-34-japan-and-28-australia/

I've got good news for AWS customers who make use of our Asia Pacific (Tokyo) and Asia Pacific (Sydney) Regions. Effective September 1, 2018 we are reducing prices for data transfer from Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), and Amazon CloudFront by up to 34% in Japan and 28% in Australia.

EC2 and S3 Data Transfer
Here are the new prices for data transfer from EC2 and S3 to the Internet:

EC2 & S3 Data Transfer Out to Internet (per GB)    Japan (Old / New / Change)    Australia (Old / New / Change)
Up to 1 GB / Month                                 $0.000 / $0.000 /   0%        $0.000 / $0.000 /   0%
Next 9.999 TB / Month                              $0.140 / $0.114 / -19%        $0.140 / $0.114 / -19%
Next 40 TB / Month                                 $0.135 / $0.089 / -34%        $0.135 / $0.098 / -27%
Next 100 TB / Month                                $0.130 / $0.086 / -34%        $0.130 / $0.094 / -28%
Greater than 150 TB / Month                        $0.120 / $0.084 / -30%        $0.120 / $0.092 / -23%

You can consult the EC2 Pricing and S3 Pricing pages for more information.

CloudFront Data Transfer
Here are the new prices for data transfer from CloudFront edge nodes to the Internet:

CloudFront Data Transfer Out to Internet (per GB)    Japan (Old / New / Change)    Australia (Old / New / Change)
Up to 10 TB / Month                                  $0.140 / $0.114 / -19%        $0.140 / $0.114 / -19%
Next 40 TB / Month                                   $0.135 / $0.089 / -34%        $0.135 / $0.098 / -27%
Next 100 TB / Month                                  $0.120 / $0.086 / -28%        $0.120 / $0.094 / -22%
Next 350 TB / Month                                  $0.100 / $0.084 / -16%        $0.100 / $0.092 /  -8%
Next 524 TB / Month                                  $0.080 / $0.080 /   0%        $0.095 / $0.090 /  -5%
Next 4 PB / Month                                    $0.070 / $0.070 /   0%        $0.090 / $0.085 /  -6%
Over 5 PB / Month                                    $0.060 / $0.060 /   0%        $0.085 / $0.080 /  -6%

Visit the CloudFront Pricing page for more information.

We have also reduced the price of data transfer from CloudFront to your Origin. The price for CloudFront Data Transfer to Origin from edge locations in Australia has been reduced 20% to $0.080 per GB. This represents content uploads via POST and PUT.

Things to Know
Here are a couple of interesting things that you should know about AWS and data transfer:

AWS Free Tier – You can use the AWS Free Tier to get started with, and to learn more about, EC2, S3, CloudFront, and many other AWS services. The AWS Getting Started page contains lots of resources to help you with your first project.

Data Transfer from AWS Origins to CloudFront – There is no charge for data transfers from an AWS origin (S3, EC2, Elastic Load Balancing, and so forth) to any CloudFront edge location.

CloudFront Reserved Capacity Pricing – If you routinely use CloudFront to deliver 10 TB or more of content per month, you should investigate our Reserved Capacity pricing. You can receive a significant discount by committing to transfer 10 TB or more of content from a single region, with additional discounts at higher levels of usage. To learn more or to sign up, simply Contact Us.

Jeff;

 

New – AWS Storage Gateway Hardware Appliance

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-storage-gateway-hardware-appliance/

AWS Storage Gateway connects your on-premises applications to AWS storage services such as Amazon Simple Storage Service (S3), Amazon Elastic Block Store (EBS), and Amazon Glacier. It runs in your existing virtualized environment and is visible to your applications and your client operating systems as a file share, a local block volume, or a virtual tape library. The resulting hybrid storage model gives our customers the ability to use their AWS Storage Gateways for backup, archiving, disaster recovery, cloud data processing, storage tiering, and migration.

New Hardware Appliance
Today we are making Storage Gateway available as a hardware appliance, adding to the existing support for VMware ESXi, Microsoft Hyper-V, and Amazon EC2. This means that you can now make use of Storage Gateway in situations where you do not have a virtualized environment, server-class hardware or IT staff with the specialized skills that are needed to manage them. You can order appliances from Amazon.com for delivery to branch offices, warehouses, and “outpost” offices that lack dedicated IT resources. Setup (as you will see in a minute) is quick and easy, and gives you access to three storage solutions:

File Gateway – A file interface to Amazon S3, accessible via NFS or SMB. The files are stored as S3 objects, allowing you to make use of specialized S3 features such as lifecycle management and cross-region replication. You can trigger AWS Lambda functions, run Amazon Athena queries, and use Amazon Macie to discover and classify sensitive data.

Volume Gateway – Cloud-backed storage volumes, accessible as local iSCSI volumes. Gateways can be configured to cache frequently accessed data locally, or to store a full copy of all data locally. You can create EBS snapshots of the volumes and use them for disaster recovery or data migration.

Tape Gateway – A cloud-based virtual tape library (VTL), accessible via iSCSI, so you can replace your on-premises tape infrastructure, without changing your backup workflow.

To learn more about each of these solutions, read What is AWS Storage Gateway.

The AWS Storage Gateway Hardware Appliance is based on a specially configured Dell EMC PowerEdge R640 Rack Server that is pre-loaded with AWS Storage Gateway software. It has 2 Intel® Xeon® processors, 128 GB of memory, 6 TB of usable SSD storage for your locally cached data, redundant power supplies, and you can order one from Amazon.com:

If you have an Amazon Business account (they’re free) you can use a purchase order for the transaction. In addition to simplifying deployment, using this standardized configuration helps to assure consistent performance for your local applications.

Hardware Setup
As you know, I like to go hands-on with new AWS products. My colleagues shipped a pre-release appliance to me; I left it under the watchful eye of my CSO (Canine Security Officer) until I was ready to write this post:

I don’t have a server room or a rack, so I set it up on my hobby table for testing:

In addition to the appliance, I also scrounged up a VGA cable, a USB keyboard, a small monitor, and a power adapter (C13 to NEMA 5-15). The adapter is necessary because the cord included with the appliance is intended to plug into the kind of power distribution unit commonly found in a data center. I connected it all up, turned it on, watched it boot up, and then entered a new administrative password.

Following the directions in the documentation, I configured an IPv4 address, using DHCP for convenience:

I captured the IP address for use in the next step, selected Back (the UI is keyboard-driven) and then logged out. This is the only step that takes place on the local console.

Gateway Configuration
At this point I will switch from past to present, and walk you through the configuration process. As directed by the Getting Started Guide, I open the Storage Gateway Console on the same network as the appliance, select the region where I want to create my gateway, and click Get started:

I select File gateway and click Next to proceed:

I select Hardware Appliance as my host platform (I can click Buy on Amazon to purchase one if necessary), and click Next:

Then I enter the IP address of my appliance and click Connect:

I enter a name for my gateway (jbgw1), set the time zone, pick ZFS as my RAID Volume Manager, and click Activate to proceed:

My gateway is activated within a second or two and I can see it in the Hardware section of the console:

At this point I am free to use a console that is not on the same network, so I’ll switch back to my trusty WorkSpace!

Now that my hardware has been activated, I can launch the actual gateway service on it. I select the appliance, and choose Launch Gateway from the Actions menu:

I choose the desired gateway type, enter a name (fgw1) for it, and click Launch gateway:

The gateway will start off in the Offline status, and transition to Online within 3 to 5 minutes. The next step is to allocate local storage by clicking Edit local disks:

Since I am creating a file gateway, all of the local storage is used for caching:

Now I can create a file share on my appliance! I click Create file share, enter the name of an existing S3 bucket, and choose NFS or SMB, then click Next:

I configure a couple of S3 options, request creation of a new IAM role, and click Next:

I review all of my choices and click Create file share:

After I create the share I can see the commands that are used to mount it in each client environment:

I mount the share on my Ubuntu desktop (I had to install the nfs-client package first) and copy a bunch of files to it:
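The mount itself follows the usual file gateway pattern; here's a sketch with placeholder values for the gateway IP address, bucket name, and mount point:

# install the NFS client packages (nfs-common on Ubuntu)
sudo apt-get install -y nfs-common

# mount the share exposed by the gateway; the export is named after the S3 bucket
sudo mkdir -p /mnt/fgw1
sudo mount -t nfs -o nolock,hard 192.168.1.25:/my-bucket /mnt/fgw1

# copy files into the share; the gateway uploads them to S3 in the background
cp -r ~/photos /mnt/fgw1/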

Then I visit the S3 bucket and see that the gateway has already uploaded the files:

Finally, I have the option to change the configuration of my appliance. After making sure that all network clients have unmounted the file share, I remove the existing gateway:

And launch a new one:

And there you have it. I installed and configured the appliance, created a file share that was accessible from my on-premises systems, and then copied files to it for replication to the cloud.

Now Available
The Storage Gateway Hardware Appliance is available now and you can purchase one today. Start in the AWS Storage Gateway Console and follow the steps above!

Jeff;

 

 

New – AWS Systems Manager Session Manager for Shell Access to EC2 Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-session-manager/

It is a very interesting time to be a corporate IT administrator. On the one hand, developers are talking about (and implementing) an idyllic future built on infrastructure as code, where servers and other resources are treated as cattle. On the other hand, legacy systems still must be treated as pets, set up and maintained by hand or with the aid of limited automation. Many of the customers that I speak with are making the transition to the future at a rapid pace, but need to work in the world that exists today. For example, they still need shell-level access to their servers on occasion. They might need to kill runaway processes, consult server logs, fine-tune configurations, or install temporary patches, all while maintaining a strong security profile. They want to avoid the hassle that comes with running bastion hosts and the risks that arise when opening up inbound SSH ports on the instances.

We’ve already addressed some of the need for shell-level access with the AWS Systems Manager Run Command. This AWS facility gives administrators secure access to EC2 instances. It allows them to create command documents and run them on any desired set of EC2 instances, with support for both Linux and Microsoft Windows. The commands are run asynchronously, with output captured for review.

New Session Manager
Today we are adding a new option for shell-level access. The new Session Manager makes the AWS Systems Manager even more powerful. You can now use a new browser-based interactive shell and a command-line interface (CLI) to manage your Windows and Linux instances. Here’s what you get:

Secure Access – You don’t have to manually set up user accounts, passwords, or SSH keys on the instances and you don’t have to open up any inbound ports. Session Manager communicates with the instances via the SSM Agent across an encrypted tunnel that originates on the instance, and does not require a bastion host.

Access Control – You use IAM policies and users to control access to your instances, and don’t need to distribute SSH keys. You can limit access to a desired time/maintenance window by using IAM’s Date Condition Operators.

Auditability – Commands and responses can be logged to Amazon CloudWatch and to an S3 bucket. You can arrange to receive an SNS notification when a new session is started.

Interactivity – Commands are executed synchronously in a full interactive bash (Linux) or PowerShell (Windows) environment.

Programming and Scripting – In addition to the console access that I will show you in a moment, you can also initiate sessions from the command line (aws ssm ...) or via the Session Manager APIs.

The SSM Agent running on the EC2 instances must be able to connect to Session Manager’s public endpoint. You can also set up a PrivateLink connection to allow instances running in private VPCs (without Internet access or a public IP address) to connect to Session Manager.
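From the command line, starting a session looks roughly like this; the instance ID is a placeholder, and the Session Manager plugin for the AWS CLI must be installed first:

# open an interactive shell on a managed instance
aws ssm start-session --target i-0123456789abcdef0

# list sessions that are currently active
aws ssm describe-sessions --state Active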

Session Manager in Action
In order to use Session Manager to access my EC2 instances, the instances must be running the latest version (2.3.12 or above) of the SSM Agent. The instance role for the instances must reference a policy that allows access to the appropriate services; you can create your own or use AmazonEC2RoleForSSM. Here are my EC2 instances (sk1 and sk2 are running Amazon Linux; sk3-win and sk4-win are running Microsoft Windows):

Before I run my first command, I open AWS Systems Manager and click Preferences. Since I want to log my commands, I enter the name of my S3 bucket and my CloudWatch log group. If I enter either or both values, the instance policy must also grant access to them:
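As a sketch of what that extra access might look like, here is a minimal policy fragment for the instance role; the bucket and log group names are placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SessionLogsToS3",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-session-logs/*"
    },
    {
      "Sid": "SessionLogsToCloudWatch",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:my-session-log-group:*"
    }
  ]
}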

I’m ready to roll! I click Sessions, see that I have no active sessions, and click Start session to move ahead:

I select a Linux instance (sk1), and click Start session again:

The session opens up immediately:

I can do the same for one of my Windows instances:

The log streams are visible in CloudWatch:

Each stream contains the content of a single session:

In the Works
As usual, we have some additional features in the works for Session Manager. Here’s a sneak peek:

SSH Client – You will be able to create SSH sessions atop Session Manager without opening up any inbound ports.

On-Premises Access – We plan to give you the ability to access your on-premises instances (which must be running the SSM Agent) via Session Manager.

Available Now
Session Manager is available in all AWS regions (including AWS GovCloud) at no extra charge.

Jeff;

AWS – Ready for the Next Storm

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-ready-for-the-next-storm/

As I have shared in the past (AWS – Ready to Weather the Storm) we take extensive precautions to help ensure that AWS will remain operational in the face of hurricanes, storms, and other natural disasters. With Hurricane Florence heading for the east coast of the United States, I thought it would be a good time to review and update some of the most important points from that post. Here’s what I want you to know:

Availability Zones – We replicate critical components of AWS across multiple Availability Zones to ensure high availability. Common points of failure, such as generators, UPS units, and air conditioning, are not shared across Availability Zones. Electrical power systems are designed to be fully redundant and can be maintained without impacting operations. The AWS Well-Architected Framework provides guidance on the proper use of multiple Availability Zones to build applications that are reliable and resilient, as does the Building Fault-Tolerant Applications on AWS whitepaper.

Contingency Planning – We maintain contingency plans and regularly rehearse our responses. We maintain a series of incident response plans and update them regularly to incorporate lessons learned and to prepare for emerging threats. In the days leading up to a known event such as a hurricane, we increase fuel supplies, update staffing plans, and add provisions to ensure the health and safety of our support teams.

Data Transfer – With a storage capacity of 100 TB per device, AWS Snowball Edge appliances can be used to quickly move large amounts of data to the cloud.

Disaster Response – When call volumes spike before, during, or after a disaster, Amazon Connect can supplement your existing call center resources and allow you to provide a better response.

Support – You can contact AWS Support if you are in need of assistance with any of these issues.

Jeff;