Tag Archives: launch

New – Gigabit Connectivity Options for AWS Direct Connect

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-gigabit-connectivity-options-for-amazon-direct-connect/

AWS Direct Connect gives you the ability to create private network connections between your datacenter, office, or colocation environment and AWS. The connections start at your network and end at one of 91 AWS Direct Connect locations and can reduce your network costs, increase throughput, and deliver a more consistent experience than an Internet-based connection. In most cases you will need to work with an AWS Direct Connect Partner to get your connection set up.

As I prepared to write this post, I learned that my understanding of AWS Direct Connect was incomplete, and that the name actually encompasses three distinct models. Here’s a summary:

Dedicated Connections are available with 1 Gbps and 10 Gbps capacity. You use the AWS Management Console to request a connection, after which AWS will review your request and either follow up via email to request additional information or provision a port for your connection. Once AWS has provisioned a port for you, the time needed by the AWS Direct Connect Partner to complete the connection will vary from days to weeks. A Dedicated Connection is a physical Ethernet port dedicated to you. Each Dedicated Connection supports up to 50 Virtual Interfaces (VIFs). To get started, read Creating a Connection.

Hosted Connections are available with 50 to 500 Mbps capacity, and connection requests are made via an AWS Direct Connect Partner. After the AWS Direct Connect Partner establishes a network circuit to your premises, capacity to AWS Direct Connect can be added or removed on demand by adding or removing Hosted Connections. Each Hosted Connection supports a single VIF; you can obtain multiple VIFs by acquiring multiple Hosted Connections. The AWS Direct Connect Partner provisions the Hosted Connection and sends you an invite, which you must accept (with a click) in order to proceed.

Hosted Virtual Interfaces are also set up via AWS Direct Connect Partners. A Hosted Virtual Interface has access to all of the available capacity on the network link between the AWS Direct Connect Partner and an AWS Direct Connect location. The network link between the AWS Direct Connect Partner and the AWS Direct Connect location is shared by multiple customers and could possibly be oversubscribed. Due to the possibility of oversubscription in the Hosted Virtual Interface model, we no longer allow new AWS Direct Connect Partner service integrations using this model and recommend that customers with workloads sensitive to network congestion use Dedicated or Hosted Connections.

Higher Capacity Hosted Connections
Today we are announcing Hosted Connections with 1, 2, 5, or 10 Gbps of capacity. These capacities will be available through a select set of AWS Direct Connect Partners who have been specifically approved by AWS. We are also working with AWS Direct Connect Partners to implement additional monitoring of the network link between the AWS Direct Connect Partners and AWS.

Most AWS Direct Connect Partners support adding or removing Hosted Connections on demand. Suppose that you archive a massive amount of data to Amazon Glacier at the end of every quarter, and that you already have a pair of resilient 10 Gbps circuits from your AWS Direct Connect Partner for use by other parts of your business. You can then create a pair of resilient 1, 2, 5, or 10 Gbps Hosted Connections at the end of the quarter, upload your data to Glacier, and delete the Hosted Connections when you are done.

You pay AWS for the port-hour charges while the Hosted Connections are in place, along with any associated data transfer charges (see the Direct Connect Pricing page for more info). Check with your AWS Direct Connect Partner for the charges associated with their services. You get a cost-effective, elastic way to move data to the cloud while creating Hosted Connections only when needed.
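
If you script this elastic workflow, the partner's invite can also be accepted programmatically instead of with a console click. Here's a minimal boto3 sketch, assuming a hosted connection that is waiting for confirmation; the connection ID is a placeholder:

import boto3

dx = boto3.client("directconnect")

# List connections and note the state of the hosted connection provisioned by the partner.
for conn in dx.describe_connections()["connections"]:
    print(conn["connectionId"], conn["connectionName"], conn["connectionState"])

# Accept (confirm) the hosted connection; "dxcon-EXAMPLE" is a placeholder
# for the connection ID shown above.
dx.confirm_connection(connectionId="dxcon-EXAMPLE")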

Available Now
The new higher capacity Hosted Connections are available through select AWS Direct Connect Partners after they are approved by AWS.

Jeff;

PS – As part of this launch, we are reducing the prices for the existing 200, 300, 400, and 500 Mbps Hosted Connection capacities by 33.3%, effective March 1, 2019.

 

In the Works – EC2 Instances (G4) with NVIDIA T4 GPUs

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-ec2-instances-g4-with-nvidia-t4-gpus/

I’ve written about the power and value of GPUs in the past, and I have written posts to launch many generations of GPU-equipped EC2 instances including the CG1, G2, G3, P2, P3, and P3dn instance types.

Today I would like to give you a sneak peek at our newest GPU-equipped instance, the G4. Designed for machine learning training & inferencing, video transcoding, and other demanding applications, G4 instances will be available in multiple sizes and also in bare metal form. We are still fine-tuning the specs, but you can look forward to:

  • AWS-custom Intel CPUs (4 to 96 vCPUs)
  • 1 to 8 NVIDIA T4 Tensor Core GPUs
  • Up to 384 GiB of memory
  • Up to 1.8 TB of fast, local NVMe storage
  • Up to 100 Gbps networking

The brand-new NVIDIA T4 GPUs feature 320 Turing Tensor cores, 2,560 CUDA cores, and 16 GB of memory. In addition to support for machine learning inferencing and video processing, the T4 includes RT Cores for real-time ray tracing and can provide up to 2x the graphics performance of the NVIDIA M60 (watch Ray Tracing in Games with NVIDIA RTX to learn more).

I’ll have a lot more to say about these powerful, high-end instances very soon, so stay tuned!

Jeff;

PS – If you are interested in joining a private preview, sign up now.

New – Open Distro for Elasticsearch

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-open-distro-for-elasticsearch/

Elasticsearch is a distributed, document-oriented search and analytics engine. It supports structured and unstructured queries, and does not require a schema to be defined ahead of time. Elasticsearch can be used as a search engine, and is often used for web-scale log analytics, real-time application monitoring, and clickstream analytics.

Elasticsearch was originally launched as a true open source project, but some of the more recent additions to it are proprietary. My colleague Adrian explains our motivation to start Open Distro for Elasticsearch in his post, Keeping Open Source Open. As strong believers in, and supporters of, open source software, we believe this project will help continue to accelerate open source Elasticsearch innovation.

Open Distro for Elasticsearch
Today we are launching Open Distro for Elasticsearch. This is a value-added distribution of Elasticsearch that is 100% open source (Apache 2.0 license) and supported by AWS. Open Distro for Elasticsearch leverages the open source code for Elasticsearch and Kibana. This is not a fork; we will continue to send our contributions and patches upstream to advance these projects.

In addition to Elasticsearch and Kibana, the first release includes a set of advanced security, event monitoring & alerting, performance analysis, and SQL query features (more on those in a bit). Beyond the source code repo, Open Distro for Elasticsearch and Kibana are available as RPMs and Docker containers, with separate downloads for the SQL JDBC driver and the PerfTop CLI. You can run this code on your laptop, in your data center, or in the cloud.

Contributions are welcome, as are bug reports and feature requests.

Inside Open Distro for Elasticsearch
Let’s take a quick look at the features that we are including in Open Distro for Elasticsearch. Some of these are currently available in Amazon Elasticsearch Service; others will become available in future updates.

Security – This plugin supports node-to-node encryption, five types of authentication (basic, Active Directory, LDAP, Kerberos, and SAML), role-based access controls at multiple levels (clusters, indices, documents, and fields), audit logging, and cross-cluster search so that any node in a cluster can run search requests across other nodes in the cluster. Learn More

Event Monitoring & Alerting – This feature notifies you when data from one or more Elasticsearch indices meets certain conditions. You could, for example, notify a Slack channel if an application logs more than five HTTP 503 errors in an hour. Monitoring is based on jobs that run on a defined schedule, checking indices against trigger conditions, and raising alerts when a condition has been triggered. Learn More

Deep Performance Analysis – This is a REST API that allows you to query a long list of performance metrics for your cluster. You can access the metrics programmatically or visualize them using the PerfTop CLI. Learn More

SQL Support – This feature allows you to query your cluster using SQL statements. It is an improved version of the elasticsearch-sql plugin, and supports a rich set of statements.
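
As a quick illustration of the SQL feature, here's a hedged Python sketch that posts a query to a local Open Distro cluster. It assumes the SQL plugin's default _opendistro/_sql endpoint, the demo admin/admin credentials from the Docker image, and a hypothetical logs index; adjust all three for your own setup:

import requests

# The endpoint path, credentials, and index name below are assumptions for a local demo cluster.
response = requests.post(
    "https://localhost:9200/_opendistro/_sql",
    json={"query": "SELECT clientip, status FROM logs WHERE status >= 500 LIMIT 10"},
    auth=("admin", "admin"),
    verify=False,  # the demo install ships with a self-signed certificate
)
print(response.json())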

This is just the beginning; we have more in the works, and also look forward to your contributions and suggestions!

Jeff;

 

New – RISC-V Support in the FreeRTOS Kernel

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-risc-v-support-for-freertos-kernel/

FreeRTOS is a popular operating system designed for small, simple processors often known as microcontrollers. It is available under the MIT open source license and runs on many different Instruction Set Architectures (ISAs). Amazon FreeRTOS extends FreeRTOS with a collection of IoT-oriented libraries that provide additional networking and security features including support for Bluetooth Low Energy, Over-the-Air Updates, and Wi-Fi.

RISC-V is a free and open ISA that was designed to be simple, extensible, and easy to implement. The simplicity of the RISC-V model, coupled with its permissive BSD license, makes it ideal for a wide variety of processors, including low-cost microcontrollers that can be manufactured without incurring license costs. The RISC-V model can be implemented in many different ways, as you can see from the RISC-V cores page. Development tools, including simulators, compilers, and debuggers, are also available.

Today I am happy to announce that we are now providing RISC-V support in the FreeRTOS kernel. The kernel supports the RISC-V I profile (RV32I and RV64I) and can be extended to support any RISC-V microcontroller. It includes preconfigured examples for the OpenISA VEGAboard, QEMU emulator for SiFive’s HiFive board, and Antmicro’s Renode emulator for the Microchip M2GL025 Creative Board.

You now have a powerful new option for building smart devices that are more cost-effective than ever before!

Jeff;

 

Now Available – Five New Amazon EC2 Bare Metal Instances: M5, M5d, R5, R5d, and z1d

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-five-new-amazon-ec2-bare-metal-instances-m5-m5d-r5-r5d-and-z1d/

Today we are launching the five new EC2 bare metal instances that I promised you a few months ago. Your operating system runs on the underlying hardware and has direct access to the processor and other hardware. The instances are powered by AWS-custom Intel® Xeon® Scalable (Skylake) processors that deliver sustained all-core Turbo performance.

Here are the specs:

Instance Name | Sustained All-Core Turbo | Logical Processors | Memory | Local Storage | EBS-Optimized Bandwidth | Network Bandwidth
m5.metal | Up to 3.1 GHz | 96 | 384 GiB | – | 14 Gbps | 25 Gbps
m5d.metal | Up to 3.1 GHz | 96 | 384 GiB | 4 x 900 GB NVMe SSD | 14 Gbps | 25 Gbps
r5.metal | Up to 3.1 GHz | 96 | 768 GiB | – | 14 Gbps | 25 Gbps
r5d.metal | Up to 3.1 GHz | 96 | 768 GiB | 4 x 900 GB NVMe SSD | 14 Gbps | 25 Gbps
z1d.metal | Up to 4.0 GHz | 48 | 384 GiB | 2 x 900 GB NVMe SSD | 14 Gbps | 25 Gbps

The M5 instances are designed for general-purpose workloads, such as web and application servers, gaming servers, caching fleets, and app development environments. The R5 instances are designed for high performance databases, web scale in-memory caches, mid-sized in-memory databases, real-time big data analytics, and other memory-intensive enterprise applications. The M5d and R5d variants also include 3.6 TB of local NVMe SSD storage.

z1d instances provide high compute performance and lots of memory, making them ideal for electronic design automation (EDA) and relational databases with high per-core licensing costs. The high CPU performance allows you to license fewer cores and significantly reduce your TCO for Oracle or SQL Server workloads.

All of the instances are powered by the AWS Nitro System, with dedicated hardware accelerators for EBS processing (including crypto operations), the software-defined network inside of each Virtual Private Cloud (VPC), ENA networking, and access to the local NVMe storage on the M5d, R5d, and z1d instances. Bare metal instances can also take advantage of Elastic Load Balancing, Auto Scaling, Amazon CloudWatch, and other AWS services.

In addition to being a great home for old-school applications and system software that are licensed specifically and exclusively for use on physical, non-virtualized hardware, bare metal instances can be used to run tools and applications that require access to low-level processor features such as performance counters. For example, Mozilla’s Record and Replay Framework (rr) records and replays program execution with low overhead, using the performance counters to measure application performance and to deliver signals and context-switch events with high fidelity. You can read their paper, Engineering Record And Replay For Deployability, to learn more.

Launch One Today
m5.metal instances are available in the US East (N. Virginia and Ohio), US West (N. California and Oregon), Europe (Frankfurt, Ireland, London, Paris, and Stockholm), and Asia Pacific (Mumbai, Seoul, Singapore, Sydney, and Tokyo) AWS regions.

m5d.metal instances are available in the US East (N. Virginia and Ohio), US West (Oregon), Europe (Frankfurt, Ireland, Paris, and Stockholm), and Asia Pacific (Mumbai, Seoul, Singapore, and Sydney) AWS regions.

r5.metal instances are available in the US East (N. Virginia and Ohio), US West (N. California and Oregon), Europe (Frankfurt, Ireland, Paris, and Stockholm), Asia Pacific (Mumbai, Seoul, and Singapore), and AWS GovCloud (US-West) AWS regions.

r5d.metal instances are available in the US East (N. Virginia and Ohio), US West (N. California), Europe (Frankfurt, Paris, and Stockholm), Asia Pacific (Mumbai, Seoul, and Singapore), and AWS GovCloud (US-West) AWS regions.

z1d.metal instances are available in the US East (N. Virginia), US West (N. California and Oregon), Europe (Ireland), and Asia Pacific (Singapore and Tokyo) AWS regions.

The bare metal instances will become available in even more AWS regions as soon as possible.
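
If you would like to launch one from the API instead of the console, here's a minimal boto3 sketch; the AMI ID, key pair, and subnet are placeholders for your own values:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single m5.metal instance; ami-EXAMPLE, my-key, and subnet-EXAMPLE are placeholders.
response = ec2.run_instances(
    ImageId="ami-EXAMPLE",
    InstanceType="m5.metal",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key",
    SubnetId="subnet-EXAMPLE",
)
print(response["Instances"][0]["InstanceId"])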

Jeff;

 

New – Infrequent Access Storage Class for Amazon Elastic File System (EFS)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-infrequent-access-storage-class-for-amazon-elastic-file-system-efs/

Amazon Elastic File System lets you create petabyte-scale file systems that can be accessed in massively parallel fashion from hundreds or thousands of EC2 instances and on-premises servers, while scaling on demand without disrupting applications. Since the mid-2016 launch of EFS, we have added many new features including encryption of data at rest and in transit, a provisioned throughput option when you need high throughput access to a set of files that do not occupy a lot of space, on-premises access via AWS Direct Connect, EFS File Sync, support for AWS VPN and Inter-Region VPC Peering, and more.

Infrequent Access Storage Class
Today I would like to tell you about the new Amazon EFS Infrequent Access storage class, as pre-announced at AWS re:Invent. As part of a new Lifecycle Management option for EFS file systems, you can now indicate that you want to move files that have not been accessed in the last 30 days to a storage class that is 85% less expensive. You can enable the use of Lifecycle Management when you create a new EFS file system, and you can enable it later for file systems that were created on or after today’s launch.

The new storage class is totally transparent. You can still access your files as needed and in the usual way, with no code or operational changes necessary.

You can use the Infrequent Access storage class to meet auditing and retention requirements, to create nearline backups that can be recovered using normal file operations, and to keep data that you need only occasionally close at hand.

Here are a couple of things to keep in mind:

Eligible Files – Files that are 128 KiB or larger and that have not been accessed or modified for at least 30 days can be transitioned to the new storage class. Modifications to a file’s metadata that do not change the file will not delay a transition.

Priority – Operations that transition files to Infrequent Access run at a lower priority than other operations on the file system.

Throughput – If your file system is configured for Bursting mode, the amount of Standard storage determines the throughput. Otherwise, the provisioned throughput applies.

Enabling Lifecycle Management
You can enable Lifecycle Management and benefit from the Infrequent Access storage class with one click:

As I noted earlier, you can check this when you create the file system, or you can enable it later for file systems that you create from now on.
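
If you manage your file systems from the API or CLI instead, a minimal boto3 sketch for turning on Lifecycle Management might look like this; the file system ID is a placeholder:

import boto3

efs = boto3.client("efs")

# Transition files that have not been accessed for 30 days to the Infrequent Access class.
# fs-EXAMPLE is a placeholder for your file system ID.
efs.put_lifecycle_configuration(
    FileSystemId="fs-EXAMPLE",
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)

# Confirm the setting.
print(efs.describe_lifecycle_configuration(FileSystemId="fs-EXAMPLE"))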

Files that have not been read or written for 30 days will be transitioned to the Infrequent Access storage class with no further action on your part. Files in the Standard storage class can be accessed with latency measured in single-digit milliseconds; files in the Infrequent Access class have double-digit millisecond latency. Your next AWS bill will include information on your use of both storage classes, so that you can see your cost savings.

Available Now
This feature is available now and you can start using it today in all AWS Regions where EFS is available. Infrequent Access storage is billed at $0.045 per GB/Month in US East (N. Virginia), with correspondingly low pricing in other regions. There’s also a data transfer charge of $0.01 per GB for reads and writes to Infrequent Access storage.

Like every AWS service and feature, we are launching with an initial set of features and a really strong roadmap! For example, we are working on additional lifecycle management flexibility, and would be very interested in learning more about what kinds of times and rules you would like.

Jeff;

PS – AWS DataSync will help you to quickly and easily automate data transfer between your existing on-premises storage and EFS.

New – TLS Termination for Network Load Balancers

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-tls-termination-for-network-load-balancers/

When you access a web site using the HTTPS protocol, a whole lot of interesting work (formally known as an SSL/TLS handshake) happens to create and maintain a secure communication channel. Your client (browser) and the web server work together to negotiate a mutually agreeable cipher, exchange keys, and set up a session key. Once established, both ends of the conversation use the session key to encrypt and decrypt all further traffic. Because the session key is unique to the conversation between the client and the server, a third party cannot decrypt the traffic or interfere with the conversation.

New TLS Termination
Today we are simplifying the process of building secure web applications by giving you the ability to make use of TLS (Transport Layer Security) connections that terminate at a Network Load Balancer (you can think of TLS as providing the “S” in HTTPS). This will free your backend servers from the compute-intensive work of encrypting and decrypting all of your traffic, while also giving you a host of other features and benefits:

Source IP Preservation – The source IP address and port are presented to your backend servers, even when TLS is terminated at the NLB. This is, as my colleague Colm says, “insane magic!”

Simplified Management – Using TLS at scale means that you need to take responsibility for distributing your server certificate to each backend server. This creates extra management work (sometimes involving a fleet of proxy servers), and also increases your attack surface due to the presence of multiple copies of the certificate. Today’s launch removes all of that complexity and gives you a central management point for your certificates. If you are using AWS Certificate Manager (ACM), your certificates will be stored securely, expired & rotated regularly, and updated automatically, all with no action on your part.

Zero-day Patching – The TLS protocol is complex and the implementations are updated from time to time in response to emerging threats. Terminating your connections at the NLB protects your backend servers and allows us to update your NLB in response to these threats. We make use of s2n, our security-focused, formally verified implementation of the TLS/SSL protocols.

Improved Compliance – You can use built-in security policies to specify the cipher suites and protocol versions that are acceptable to your application. This will help you in your PCI and FedRAMP compliance effort, and will also allow you to achieve a perfect TLS score.

Classic Upgrade – If you are currently using a Classic Load Balancer for TLS termination, switching to a Network Load Balancer will allow you to scale more quickly in response to an increased load. You will also be able to make use of a static IP address for your NLB and to log the source IP address for requests.

Access Logs – You now have the ability to enable access logs for your Network Load Balancers and to direct them to the S3 bucket of your choice. The log entries include detailed information about the TLS protocol version, cipher suite, connection time, handshake time, and more.

Using TLS Termination
You can create a Network Load Balancer and make use of TLS termination in minutes! You can use the API (CreateLoadBalancer), CLI (create-load-balancer), the EC2 Console, or an AWS CloudFormation template. I’ll use the Console, and click Load Balancers to get started. Then I click Create in the Network Load Balancer area:

I enter a name (MyLB2) and choose TLS (Secure TCP) as the Load Balancer Protocol:

Then I choose one or more Availability Zones, and optionally choose an Elastic IP address for each one. I can also choose to tag my NLB. When I am all set, I click Next: Configure Security Settings to proceed:

On the next page, I can choose an existing certificate or upload a new one. I already have one for www.jeff-barr.com, so I’ll choose it. I also choose a security policy (more on that in a minute):

There are currently seven security policies to choose from. Each policy allows for the use of certain TLS versions and ciphers:

The describe-ssl-policies CLI command can be used to learn more about the policies:

After choosing the certificate and the policy, I click Next: Configure Routing. I can choose the communication protocol (TCP or TLS) that will be used between my NLB and my targets. If I choose TLS, communication is encrypted; this allows you to make use of complete end-to-end encryption in transit:

The remainder of the setup process proceeds as usual, and I can start using my Network Load Balancer right away.
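
If you would rather script the same setup, here's a hedged boto3 sketch that creates a Network Load Balancer with a TLS listener; the subnet, VPC, and ACM certificate ARN are placeholders for your own values:

import boto3

elbv2 = boto3.client("elbv2")

# Create the Network Load Balancer (the subnet ID is a placeholder).
nlb = elbv2.create_load_balancer(
    Name="MyLB2",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-EXAMPLE"],
)["LoadBalancers"][0]

# Targets can be reached over TCP or TLS; TLS gives you end-to-end encryption in transit.
tg = elbv2.create_target_group(
    Name="my-targets",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-EXAMPLE",
    TargetType="instance",
)["TargetGroups"][0]

# Terminate TLS at the NLB using an ACM certificate and a built-in security policy.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="TLS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE"}],
    SslPolicy="ELBSecurityPolicy-TLS-1-2-2017-01",
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)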

Available Now
TLS Termination is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and South America (São Paulo) Regions.

Jeff;

 

Amazon WorkLink – Secure, One-Click Mobile Access to Internal Websites and Applications

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-worklink-secure-one-click-mobile-access-to-internal-websites-and-applications/

We want to make it easier for you and your colleagues to use your mobile devices to access internal corporate websites and applications. Our goal is to give your workforce controlled access to valuable intranet content while maintaining a strong security profile.

Introducing Amazon WorkLink
Today I would like to tell you about Amazon WorkLink. You get seamless access to internal websites and applications from your mobile device, with no need to modify or migrate any content. Amazon WorkLink is a fully managed, pay-as-you-go service that is easy to set up and run, and that scales to meet the needs of any organization. You get full control over the domains that are accessible from mobile devices, and you can use your existing SAML-based Identity Provider (IdP) to manage your user base.

Amazon WorkLink gains access to your internal resources through a Virtual Private Cloud (VPC). The resources can exist within that VPC (for example, applications hosted on EC2 instances), in another VPC that is peered with it, or on-premises. In the on-premises case, the resources must be accessible via an IPsec tunnel, AWS Direct Connect, or the new AWS Transit Gateway. Applications running in a VPC can use AWS PrivateLink to access AWS services while keeping all traffic on the AWS network.

Your users get a secure, non-invasive browsing experience. Corporate content is rendered within the AWS Cloud and delivered to each device over a secure connection. We’re launching with support for devices that run iOS 12, with support for Android 6+ coming within weeks.

Inside Amazon WorkLink
Amazon WorkLink lets you associate domains with each WorkLink fleet that you create. For example, you could associate phones.example.com, payroll.example.com, and tickets.example.com to provide your users with access to your phone directory, payroll system, and trouble ticketing system. When you associate a domain with a fleet, you need to prove to WorkLink that you control the domain. WorkLink will issue an SSL/TLS certificate for the domain and then establish and manage an endpoint to handle requests for the domain.

With the fleet created, you can use the email template provided by WorkLink to extend invitations to users. The users accept the invitations, install the WorkLink app, and sign in using their existing corporate identity.

The app installs itself as the first-tier DNS resolver and configures the device’s VPN connection so that it can access the WorkLink fleet. When a mobile user accesses a domain that is associated with their fleet, the requested content is fetched, rendered in the cloud, delivered to the device in vector form across a TLS connection, and displayed in the user’s existing mobile browser. Your users can interact with the content as usual: zooming, scrolling, and typing all work as expected. All HTML, CSS, and JavaScript content is rendered in the cloud on a fleet of EC2 instances isolated from other AWS customers; no content is stored or cached by browsers on the local devices. Encrypted versions of cookies are stored by the WorkLink app on the user’s device. They are never decrypted on the device but are sent back to resume sessions when a user gets a new cloud-rendering container. Traffic to and from domains that are not associated with WorkLink continues to flow as before, and does not go through WorkLink.

Setting Up Amazon WorkLink
Let’s walk through the process of setting up a WorkLink fleet. I don’t have a genuine corporate network or intranet, so I’ll have to wave my hands a bit. I open the Amazon WorkLink Console and click Create fleet to get started:

I give my fleet a programmatic name (my-fleet), a display name (MyFleet), and click Create fleet to proceed:
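
If you prefer the API to the console, the same step is a single call. Here's a minimal boto3 sketch, assuming the worklink client available in recent SDK versions:

import boto3

worklink = boto3.client("worklink")

# Create the fleet with a programmatic name and a display name.
fleet = worklink.create_fleet(FleetName="my-fleet", DisplayName="MyFleet")
print(fleet["FleetArn"])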

My fleet is created in seconds, and is ready for further setup:

I click my-fleet to proceed; I can see the mandatory and optional setup steps at a glance:

I click Link IdP to use my existing SAML-style identity provider, click Choose file to upload the XML metadata document that describes my identity provider, and again click Link IdP to proceed:

WorkLink validates and processes the document, and generates a service provider metadata document. I download that document, and pass it along to the operator of the identity provider. The provider, in turn, uses the document to finalize the SAML federation for the identity provider:

Next, I click Link network to link my users to my company content. I can create a new VPC, or I can use an existing one. Either way, I should choose subnets in two or more Availability Zones in order to maximize availability. The chosen subnets must have enough free IP addresses to support the number of users that will be accessing the fleet; WorkLink will create and manage an Elastic Network Interface (ENI) for each connected user. I’ll use my existing VPC:

With my identity provider configured and my network linked, I can click Associate domain to indicate that I want my users to be able to access some content on my network. I enter the domain name, and click Next to proceed (let’s pretend that www.jeff-barr.com is an intranet site):

Now I need to prove that I have control over the domain. I can either modify the DNS configuration or I can respond to an email request. I’ll take the first option:

The console displays the necessary changes (an additional CNAME record) that I need to make to my domain:

I use Amazon Route 53 to maintain my DNS entries so it is easy to add the CNAME:
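
For reference, adding the CNAME through the Route 53 API looks roughly like this; the hosted zone ID and the record name and value are placeholders for the values that WorkLink displays:

import boto3

route53 = boto3.client("route53")

# Z0000000EXAMPLE and the record name/value are placeholders; use the CNAME
# shown in the WorkLink console for your domain.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "_example-validation.jeff-barr.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": "_example-target.validations.example.com"}],
            },
        }]
    },
)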

Amazon WorkLink will validate the DNS entry (this can take four or five hours; email is a bit quicker). I can repeat this step for all desired domains, and I can add even more later.

After my domain has been validated I click User invites to get an email invitation that I can send to my users:

Your users simply follow the directions and can start to enjoy remote access to the permitted sites and applications within minutes. For example:

Other powerful administrative features include the ability to set up and use device policies, and to configure delivery of audit logs to a new or existing Amazon Kinesis Data Stream:

Things to Know
Here are a couple of things to keep in mind when evaluating Amazon WorkLink:

Device Support – We are launching with support for devices that run iOS 12. Support for Android 6 devices will be ready within weeks.

Compatibility – Amazon WorkLink is designed to process and render most modern forms of web content, with support for video and audio on the drawing board. It does not support content that makes use of Flash, Silverlight, WebGL, or applets.

Identity Providers – Amazon WorkLink can be used with SAML-based identity providers today, with plans to support other types of providers based on customer requests and feedback.

Regions – You can create Amazon WorkLink fleets in AWS regions in North America and Europe today. Support for other regions is in the works for rollout later this year.

Pricing – Pricing is based on the number of users with an active browser session in a given month. You pay $5 per active user per month.

Available Now
Amazon WorkLink is available now and you can start using it today!

Jeff;

 

AWS Backup – Automate and Centrally Manage Your Backups

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-backup-automate-and-centrally-manage-your-backups/

AWS gives you the power to easily and dynamically create file systems, block storage volumes, relational databases, NoSQL databases, and other resources that store precious data. You can create them on a moment’s notice as the need arises, giving you access to as much storage as you need and opening the door to large-scale cloud migration. When you bring your sensitive data to the cloud, you need to make sure that you continue to meet business and regulatory compliance requirements, and you definitely want to make sure that you are protected against application errors.

While you can build your own backup tools using the snapshot operations built in to many of the services that I listed above, creating an enterprise-wide backup strategy and the tools to implement it still takes a lot of work. We are changing that.

New AWS Backup
AWS Backup is designed to help you automate and centrally manage your backups. You can create policy-driven backup plans, monitor the status of on-going backups, verify compliance, and find / restore backups, all using a central console. Using a combination of the existing AWS snapshot operations and new, purpose-built backup operations, Backup backs up EBS volumes, EFS file systems, RDS & Aurora databases, DynamoDB tables, and Storage Gateway volumes to Amazon Simple Storage Service (S3), with the ability to tier older backups to Amazon Glacier. Because Backup includes support for Storage Gateway volumes, you can include your existing, on-premises data in the backups that you create.

Each backup plan includes one or more backup rules. The rules express the backup schedule, frequency, and backup window. Resources to be backed up can be identified explicitly or in a policy-driven fashion using tags. Lifecycle rules control storage tiering and expiration of older backups. Backup gathers the set of snapshots and the metadata that goes along with the snapshots into collections that define a recovery point. You get lots of control, so you can define your daily / weekly / monthly backup strategy, rest assured that your critical data is being backed up in accordance with your requirements, and restore that data on an as-needed basis. Backups are grouped into vaults, each encrypted by a KMS key.

Using AWS Backup
You can get started with AWS Backup in minutes. Open the AWS Backup Console and click Create backup plan:

I can build a plan from scratch, start from an existing plan or define one using JSON. I’ll Build a new plan, and start by giving my plan a name:

Now I create the first rule for my backup plan. I call it MainBackup, indicate that I want it to run daily, define the lifecycle (transition to cold storage after 1 month, expire after 6 months), and select the Default vault:

I can tag the recovery points that are created as a result of this rule, and I can also tag the backup plan itself:

I’m all set, so I click Create plan to move forward:

At this point my plan exists and is ready to run, but it has just one rule and does not have any resource assignments (so there’s nothing to back up):

Now I need to indicate which of my resources are subject to this backup plan. I click Assign resources, and then create one or more resource assignments. Each assignment is named and references an IAM role that is used to create the recovery point. Resources can be denoted by tag or by resource ID, and I can use both in the same assignment. I enter all of the values and click Assign resources to wrap up:
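
The same plan and resource assignment can be created through the AWS Backup API. Here's a hedged boto3 sketch of the steps above; the IAM role ARN and the tag key/value are placeholders:

import boto3

backup = boto3.client("backup")

# Create a plan with a single daily rule that moves recovery points to cold
# storage after 30 days and expires them after 180 days.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "MyBackupPlan",
        "Rules": [{
            "RuleName": "MainBackup",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 ? * * *)",  # daily at 05:00 UTC
            "Lifecycle": {"MoveToColdStorageAfterDays": 30, "DeleteAfterDays": 180},
        }],
    }
)

# Assign resources by tag; the role ARN and the tag are placeholders.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "TaggedResources",
        "IamRoleArn": "arn:aws:iam::123456789012:role/BackupRole",
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "backup",
            "ConditionValue": "daily",
        }],
    },
)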

The next step is to wait for the first backup job to run (I cheated by editing my backup window in order to get this post done as quickly as possible). I can peek at the Backup Dashboard to see the overall status:

Backups On Demand
I also have the ability to create a recovery point on demand for any of my resources. I choose the desired resource and designate a vault, then click Create an on-demand backup:

I indicated that I wanted to create the backup right away, so a job is created:

The job runs to completion within minutes:
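
For reference, the on-demand backup corresponds to a single StartBackupJob call; here's a sketch with placeholder ARNs and the Default vault:

import boto3

backup = boto3.client("backup")

# Create an on-demand recovery point for a single resource; the ARNs are placeholders.
job = backup.start_backup_job(
    BackupVaultName="Default",
    ResourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/MyTable",
    IamRoleArn="arn:aws:iam::123456789012:role/BackupRole",
)
print(job["BackupJobId"])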

Inside a Vault
I can also view my collection of vaults, each of which contains multiple recovery points:

I can see the list of recovery points in a vault:

I can inspect a recovery point, and then click Restore to restore my table (in this case):

I’ve shown you the highlights, and you can discover the rest for yourself!

Things to Know
Here are a couple of things to keep in mind when you are evaluating AWS Backup:

Services – We are launching with support for EBS volumes, RDS databases, DynamoDB tables, EFS file systems, and Storage Gateway volumes. We’ll add support for additional services over time, and welcome your suggestions. Backup uses the existing snapshot operations for all services except EFS file systems.

Programmatic Access – You can access all of the functions that I showed you above using the AWS Command Line Interface (CLI) and the AWS Backup APIs. The APIs are powerful integration points for your existing backup tools and scripts.

Regions – Backups work within the scope of a particular AWS Region, with plans in the works to enable several different types of cross-region functionality in 2019.

Pricing – You pay the normal AWS charges for backups that are created using the built-in AWS snapshot facilities. For Amazon EFS, there’s a low, per-GB charge for warm storage and an even lower charge for cold storage.

Available Now
AWS Backup is available now and you can start using it today!

Jeff;

 

 

New – Amazon DocumentDB (with MongoDB Compatibility): Fast, Scalable, and Highly Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-amazon-documentdb-with-mongodb-compatibility-fast-scalable-and-highly-available/

A glance at the AWS Databases page will show you that we offer an incredibly wide variety of databases, each one purpose-built to address a particular need! In order to help you build the coolest and most powerful applications, you can mix and match relational, key-value, in-memory, graph, time series, and ledger databases.

Introducing Amazon DocumentDB (with MongoDB compatibility)
Today we are launching Amazon DocumentDB (with MongoDB compatibility), a fast, scalable, and highly available document database that is designed to be compatible with your existing MongoDB applications and tools. Amazon DocumentDB uses a purpose-built SSD-based storage layer, with 6x replication across 3 separate Availability Zones. The storage layer is distributed, fault-tolerant, and self-healing, giving you the performance, scalability, and availability needed to run production-scale MongoDB workloads.

Each MongoDB database contains a set of collections. Each collection (similar to a relational database table) contains a set of documents, each in the JSON-like BSON format. For example:

{
  name: "jeff",
  full_name: {first: "jeff", last: "barr"},
  title: "VP, AWS Evangelism",
  email: "[email protected]",
  city: "Seattle",
  foods: ["chocolate", "peanut butter"]
}

Each document can have a unique set of field-value pairs and data; there are no fixed or predefined schemas. The MongoDB API includes the usual CRUD (create, read, update, and delete) operations along with a very rich query model. This is just the tip of the iceberg (the MongoDB API is very powerful and flexible), so check out the list of supported MongoDB operations, data types, and functions to learn more.

All About Amazon DocumentDB
Here’s what you need to know about Amazon DocumentDB:

Compatibility – Amazon DocumentDB is compatible with version 3.6 of MongoDB.

Scalability – Storage can be scaled from 10 GB up to 64 TB in increments of 10 GB. You don’t need to preallocate storage or monitor free space; Amazon DocumentDB will take care of that for you. You can choose between six instance sizes (15.25 GiB to 488 GiB of memory), and you can create up to 15 read replicas. Storage and compute are decoupled and you can scale each one independently and as-needed.

Performance – Amazon DocumentDB stores database changes as a log stream, allowing you to process millions of reads per second with millisecond latency. The storage model provides a nice performance increase without compromising data durability, and greatly enhances overall scalability.

Reliability – The 6-way storage replication ensures high availability. Amazon DocumentDB can failover from a primary to a replica within 30 seconds, and supports MongoDB replica set emulation so applications can handle failover quickly.

Fully Managed – Like the other AWS database services, Amazon DocumentDB is fully managed, with built-in monitoring, fault detection, and failover. You can set up daily snapshot backups, take manual snapshots, and use either one to create a fresh cluster if necessary. You can also do point-in-time restores (with second-level resolution) to any point within the 1-35 day backup retention period.

Secure – You can choose to encrypt your active data, snapshots, and replicas with the KMS key of your choice when you create each of your Amazon DocumentDB clusters. Authentication is enabled by default, as is encryption of data in transit.

Compatible – As I said earlier, Amazon DocumentDB is designed to work with your existing MongoDB applications and tools. Just be sure to use drivers intended for MongoDB 3.4 or newer. Internally, Amazon DocumentDB implements the MongoDB 3.6 API by emulating the responses that a MongoDB client expects from a MongoDB server.

Creating An Amazon DocumentDB (with MongoDB compatibility) Cluster
You can create a cluster from the Console, Command Line, CloudFormation, or by making a call to the CreateDBCluster function. I’ll use the Amazon DocumentDB Console today. I open the console and click Launch Amazon DocumentDB to get started:

I name my cluster, choose the instance class, and specify the number of instances (one is the primary and the rest are replicas). Then I enter a master username and password:

I can use any of the following instance classes for my cluster:

At this point I can click Create cluster to use default settings, or I can click Show advanced settings for additional control. I can choose any desired VPC, subnets, and security group. I can also set the port and parameter group for the cluster:

I can control encryption (enabled by default), set the backup retention period, and establish the backup window for point-in-time restores:

I can also control the maintenance window for my new cluster. Once I am ready I click Create cluster to proceed:

My cluster starts out in creating status, and switches to available very quickly:

As do the instances in the cluster:
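
If you prefer the API to the console, here's a minimal boto3 sketch that creates a comparable cluster and a single instance; the identifiers and password are placeholders, and you may also want to specify a subnet group and security groups for your VPC:

import boto3

docdb = boto3.client("docdb")

# Create the cluster; the identifier and master password are placeholders.
docdb.create_db_cluster(
    DBClusterIdentifier="docdb-cluster-demo",
    Engine="docdb",
    MasterUsername="masteruser",
    MasterUserPassword="REPLACE_WITH_A_REAL_PASSWORD",
)

# Add the primary instance to the cluster.
docdb.create_db_instance(
    DBInstanceIdentifier="docdb-instance-1",
    DBInstanceClass="db.r4.large",
    Engine="docdb",
    DBClusterIdentifier="docdb-cluster-demo",
)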

Connecting to a Cluster
With the cluster up and running, I install the mongo shell on an EC2 instance (details depend on your distribution) and fetch a certificate so that I can make a secure connection:

$ wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem

The console shows me the command that I need to use to make the connection:

I simply customize the command with the password that I specified when I created the cluster:

From there I can use any of the mongo shell commands to insert, query, and examine data. I inserted some very simple documents and then ran an equally simple query (I’m sure you can do a lot better):
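
If you prefer a driver to the shell, here's a hedged pymongo (3.x) sketch that inserts and queries a document; the cluster endpoint and password are placeholders for the values shown on your cluster's console page:

import pymongo

# The endpoint and password are placeholders; DocumentDB requires TLS, so point
# ssl_ca_certs at the bundle downloaded above.
client = pymongo.MongoClient(
    "mongodb://masteruser:REPLACE_WITH_PASSWORD@docdb-cluster-demo.cluster-EXAMPLE.us-east-1.docdb.amazonaws.com:27017",
    ssl=True,
    ssl_ca_certs="rds-combined-ca-bundle.pem",
    replicaSet="rs0",
    retryWrites=False,  # DocumentDB does not support retryable writes
)

db = client["profiles"]
db.people.insert_one({"name": "jeff", "city": "Seattle", "foods": ["chocolate", "peanut butter"]})

for doc in db.people.find({"city": "Seattle"}):
    print(doc)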

Now Available
Amazon DocumentDB (with MongoDB compatibility) is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions. Pricing is based on the instance class, storage consumption for current documents and snapshots, I/O operations, and data transfer.

Jeff;

Western Digital HDD Simulation at Cloud Scale – 2.5 Million HPC Tasks, 40K EC2 Spot Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/western-digital-hdd-simulation-at-cloud-scale-2-5-million-hpc-tasks-40k-ec2-spot-instances/

Earlier this month my colleague Bala Thekkedath published a story about Extreme Scale HPC and talked about how AWS customer Western Digital built a cloud-scale HPC cluster on AWS and used it to simulate crucial elements of upcoming head designs for their next-generation hard disk drives (HDD).

The simulation described in the story encompassed a little over 2.5 million tasks, and ran to completion in just 8 hours on a million-vCPU Amazon EC2 cluster. As Bala shared in his story, much of the simulation work at Western Digital revolves around the need to evaluate different combinations of technologies and solutions that comprise an HDD. The engineers focus on cramming ever-more data into the same space, improving storage capacity and increasing transfer speed in the process. Simulating millions of combinations of materials, energy levels, and rotational speeds allows them to pursue the highest density and the fastest read-write times. Getting the results more quickly allows them to make better decisions and lets them get new products to market more rapidly than before.

Here’s a visualization of Western Digital’s energy-assisted recording process in action. The top stripe represents the magnetism; the middle one represents the added energy (heat); and the bottom one represents the actual data written to the medium via the combination of magnetism and heat:

I recently spoke to my colleagues and to the teams at Western Digital and Univa who worked together to make this record-breaking run a reality. My goal was to find out more about how they prepared for this run, see what they learned, and to share it with you in case you are ready to run a large-scale job of your own.

Ramping Up
About two years ago, the Western Digital team was running clusters as big as 80K vCPUs, powered by EC2 Spot Instances in order to be as cost-effective as possible. They had grown to the 80K vCPU level after repeated, successful runs with 8K, 16K, and 32K vCPUs. After these early successes, they decided to shoot for the moon, push the boundaries, and work toward a one million vCPU run. They knew that this would stress and tax their existing tools, and settled on a find/fix/scale-some-more methodology.

Univa’s Grid Engine is a batch scheduler. It is responsible for keeping track of the available compute resources (EC2 instances) and dispatching work to the instances as quickly and efficiently as possible. The goal is to get the job done in the smallest amount of time and at the lowest cost. Univa’s Navops Launch supports container-based computing and also played a critical role in this run by allowing the same containers to be used for Grid Engine and AWS Batch.

One interesting scaling challenge arose when 50K hosts created concurrent connections to the Grid Engine scheduler. Once running, the scheduler can dispatch up to 3000 tasks per second, with an extra burst in the (relatively rare) case that an instance terminates unexpectedly and signals the need to reschedule 64 or more tasks as quickly as possible. The team also found that referencing worker instances by IP addresses allowed them to sidestep some internal (AWS) rate limits on the number of DNS lookups per Elastic Network Interface.

The entire simulation is packed in a Docker container for ease of use. When newly launched instances come online they register their specs (instance type, IP address, vCPU count, memory, and so forth) in an ElastiCache for Redis cluster. Grid Engine uses this data to find and manage instances; this is more efficient and scalable than calling DescribeInstances continually.
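
Here's an illustrative Python sketch of that registration pattern (not Western Digital's actual code): each worker reads its own details from the EC2 instance metadata service and records them in Redis. The Redis endpoint is a placeholder, and the hash layout is an assumption:

import os
import redis
import requests

METADATA = "http://169.254.169.254/latest/meta-data"

def metadata(path):
    # The instance metadata service is only reachable from the instance itself.
    return requests.get(f"{METADATA}/{path}", timeout=2).text

# Placeholder for the ElastiCache for Redis cluster endpoint.
r = redis.Redis(host="workers.example.cache.amazonaws.com", port=6379)

instance_id = metadata("instance-id")
r.hset(f"worker:{instance_id}", mapping={  # mapping= requires redis-py 3.5+
    "instance_type": metadata("instance-type"),
    "ip": metadata("local-ipv4"),
    "vcpus": str(os.cpu_count()),
})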

The simulation tasks read and write data from Amazon Simple Storage Service (S3), taking advantage of S3’s ability to store vast amounts of data and to handle any conceivable request rate.

Inside a Simulation Task
Each potential head design is described by a collection of parameters; the overall simulation run consists of an exploration of this parameter space. The results of the run help the designers to find designs that are buildable, reliable, and manufacturable. This particular run focused on modeling write operations.

Each simulation task ran for 2 to 3 hours, depending on the EC2 instance type. In order to avoid losing work if a Spot Instance is about to be terminated, the tasks checkpoint themselves to S3 every 15 minutes, with a bit of extra logic to cover the important case where the job finishes after the termination signal but before the actual shutdown.
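
One common way to implement that pattern (illustrative only, not their exact code) is to poll the instance metadata service for a Spot interruption notice and checkpoint to S3 when one appears or when the regular 15-minute timer expires; the bucket, key, and simulation stubs below are placeholders:

import time
import boto3
import requests

s3 = boto3.client("s3")
NOTICE_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"
CHECKPOINT_INTERVAL = 15 * 60  # seconds

def interruption_pending():
    # The endpoint returns 404 until a Spot interruption has been scheduled.
    try:
        return requests.get(NOTICE_URL, timeout=2).status_code == 200
    except requests.RequestException:
        return False

def checkpoint(local_path, bucket, key):
    # Placeholder bucket and key for the task's checkpoint location.
    s3.upload_file(local_path, bucket, key)

def simulation_finished():
    return False  # placeholder for the task's own completion test

def step_simulation():
    # Placeholder for one unit of simulation work; here we just touch the state file.
    with open("state.bin", "ab") as f:
        f.write(b"\0")

last = time.time()
while not simulation_finished():
    step_simulation()
    if interruption_pending() or time.time() - last >= CHECKPOINT_INTERVAL:
        checkpoint("state.bin", "my-simulation-bucket", "checkpoints/state.bin")
        last = time.time()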

Making the Run
After just 6 weeks of planning and prep (including multiple large-scale AWS Batch runs to generate the input files), the combined Western Digital / Univa / AWS team was ready to make the full-scale run. They used an AWS CloudFormation template to start Grid Engine and launch the cluster. Due to the Redis-based tracking that I described earlier, they were able to start dispatching tasks to instances as soon as they became available. The cluster grew to one million vCPUs in 1 hour and 32 minutes and ran full-bore for 6 hours:

When there were no more undispatched tasks available, Grid Engine began to shut the instances down, reaching the zero-instance point in about an hour. During the run, Grid Engine was able to keep the instances fully supplied with work over 99% of the time. The run used a combination of C3, C4, M4, R3, R4, and M5 instances. Here’s the overall breakdown over the course of the run:

The job spanned all six Availability Zones in the US East (N. Virginia) Region. Spot bids were placed at the On-Demand price. Over the course of the run, about 1.5% of the instances in the fleet were terminated and automatically replaced; the vast majority of the instances stayed running for the entire time.

And That’s That
This job ran 8 hours and cost $137,307 ($17,164 per hour). The folks I talked to estimated that this was about half the cost of making the run on an in-house cluster, if they had one of that size!

Evaluating the success of the run, Steve Phillpott (CIO of Western Digital) told us:

“Storage technology is amazingly complex and we’re constantly pushing the limits of physics and engineering to deliver next-generation capacities and technical innovation. This successful collaboration with AWS shows the extreme scale, power and agility of cloud-based HPC to help us run complex simulations for future storage architecture analysis and materials science explorations. Using AWS to easily shrink simulation time from 20 days to 8 hours allows Western Digital R&D teams to explore new designs and innovations at a pace un-imaginable just a short time ago.”

The Western Digital team behind this one is hiring an R&D Engineering Technologist; they also have many other open positions!

A Run for You
If you want to do a run on the order of 100K to 1M cores (or more), our HPC team is ready to help, as are our friends at Univa. To get started, Contact HPC Sales!

Jeff;

Now Open – AWS Europe (Stockholm) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-europe-stockholm-region/

The AWS Region in Sweden that I promised you last year is now open and you can start using it today! The official name is Europe (Stockholm) and the API name is eu-north-1. This is our fifth region in Europe, joining the existing regions in Europe (Ireland), Europe (London), Europe (Frankfurt), and Europe (Paris). Together, these regions provide you with a total of 15 Availability Zones and allow you to architect applications that are resilient and fault tolerant. You now have yet another option to help you to serve your customers in the Nordics while keeping their data close to home.

Instances and Services
Applications running in this 3-AZ region can use C5, C5d, D2, I3, M5, M5d, R5, R5d, and T3 instances, and can make use of a long list of AWS services including Amazon API Gateway, Application Auto Scaling, AWS Artifact, AWS Certificate Manager (ACM), Amazon CloudFront, AWS CloudFormation, AWS CloudTrail, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, AWS CodeDeploy, AWS Config, AWS Config Rules, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, EC2 Auto Scaling, EC2 Dedicated Hosts, Amazon Elastic Container Service for Kubernetes, AWS Elastic Beanstalk, Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2), Elastic Container Registry, Amazon ECS, Elastic Load Balancing (Classic, Network, and Application Load Balancers), Amazon EMR, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon Glacier, AWS Identity and Access Management (IAM), Amazon Kinesis Data Streams, AWS Key Management Service (KMS), AWS Lambda, AWS Marketplace, AWS Organizations, AWS Personal Health Dashboard, AWS Resource Groups, Amazon RDS for Aurora, Amazon RDS for PostgreSQL, Amazon Route 53 (including Private DNS for VPCs), AWS Server Migration Service, AWS Shield Standard, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), AWS Step Functions, AWS Storage Gateway, AWS Support API, Amazon EC2 Systems Manager (SSM), AWS Trusted Advisor, Amazon Virtual Private Cloud, VM Import, and AWS X-Ray.

Edge Locations and Latency
CloudFront edge locations are already operational in four cities adjacent to the new region:

  • Stockholm, Sweden (3 locations)
  • Copenhagen, Denmark
  • Helsinki, Finland
  • Oslo, Norway

AWS Direct Connect is also available in all of these locations.

The region also offers low-latency connections to other cities and AWS regions in the area. Here are the latest numbers:

AWS Customers in the Nordics
Tens of thousands of our customers in Denmark, Finland, Iceland, Norway, and Sweden already use AWS! Here’s a sampling:

Volvo Connected Solutions Group – AWS is their preferred cloud solution provider, allowing them to connect over 800,000 Volvo trucks, buses, construction equipment, and Penta engines. They make heavy use of microservices and will use the new region to deliver services with lower latency than ever before.

Fortum – Their one-megawatt Virtual Battery runs on top of AWS. The battery aggregates and controls usage of energy assets and allows Fortum to better balance energy usage across their grid. This results in lower energy costs and power bills, along with a reduced environmental impact.

Den Norske Bank – This financial services customer is using AWS to provide a modern banking experience for their customers. They can innovate and scale more rapidly, and have devoted an entire floor of their headquarters to AWS projects.

Finnish Rail – They are moving their website and travel applications to AWS in order to allow their developers to quickly experiment, build, test, and deliver personalized services for each of their customers.

And That Makes 20
With today’s launch, the AWS Cloud spans 60 Availability Zones within 20 geographic regions around the world. We are currently working on 12 more Availability Zones and four more AWS Regions in Bahrain, Cape Town, Hong Kong SAR, and Milan.

AWS services are GDPR ready and also include capabilities that are designed to support your own GDPR readiness efforts. To learn more, read the AWS Service Capabilities for GDPR and check out the AWS General Data Protection Regulation (GDPR) Center.

The Europe (Stockholm) Region is now open and you can start creating your AWS resources in it today!

Jeff;

And Now a Word from Our AWS Heroes…

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/and-now-a-word-from-our-aws-heroes/

Whew! Now that AWS re:Invent 2018 has wrapped up, the AWS Blog Team is taking some time to relax, recharge, and to prepare for 2019.

In order to wrap up the year in style, we have asked several of the AWS Heroes to write guest blog posts on an AWS-related topic of their choice. You will get to hear from Machine Learning Hero Cyrus Wong, Community Hero Markus Ostertag, Container Hero Philipp Garbe, and several others.

Each of these Heroes brings a fresh and unique perspective to the AWS Blog and I know that you will enjoy hearing from them. We’ll have the first post up in a day or two, so stay tuned!

Jeff;

AWS re:Invent Security Recap: Launches, Enhancements, and Takeaways

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-reinvent-security-recap-launches-enhancements-and-takeaways/

For more from Steve, follow him on Twitter

Customers continue to tell me that our AWS re:Invent conference is a winner. It’s a place where they can learn, meet their peers, and rediscover the art of the possible. Of course, there is always an air of anticipation around what new AWS service releases will be announced. This time around, we went even bigger than we ever have before. There were over 50,000 people in attendance, spread across the Las Vegas strip, with over 2,000 breakout sessions and jam-packed hands-on learning opportunities, including multi-day hackathons, workshops, and bootcamps.

A big part of all this activity included sharing knowledge about the latest AWS Security, Identity and Compliance services and features, as well as announcing new technology that we’re excited to see adopted so quickly across so many use cases.

Here are the top Security, Identity and Compliance releases from re:Invent 2018:

Keynotes: All that’s new

New AWS offerings provide more prescriptive guidance

The AWS re:Invent keynotes from Andy Jassy, Werner Vogels, and Peter DeSantis, as well as my own leadership session, featured the following new releases and service enhancements. We continue to strive to make architecting easier for developers, as well as our partners and our customers, so they stay secure as they build and innovate in the cloud.

  • We launched several prescriptive security services to assist developers and customers in understanding and managing their security and compliance postures in real time. My favorite new service is AWS Security Hub, which helps you centrally manage your security and compliance controls. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from AWS Partner solutions. Findings are visually summarized on integrated dashboards with actionable graphs and tables. You can also continuously monitor your environment using automated compliance checks based on the AWS best practices and industry standards your organization follows. You can enable Security Hub on a single account with one click in the AWS Security Hub console or with a single API call; once enabled, Security Hub begins aggregating and prioritizing findings (see the sketch after this list).
  • Another prescriptive service we launched is called AWS Control Tower. One of the first things customers think about when moving to the cloud is how to set up a landing zone for their data. AWS Control Tower removes the guesswork, automating the set-up of an AWS landing zone that is secure, well-architected and supports multiple accounts. AWS Control Tower does this by using a set of blueprints that embody AWS best practices. Guardrails, both mandatory and recommended, are available for high-level, rule-based governance, allowing you to have the right operational control over your accounts. An integrated dashboard enables you to keep a watchful eye over the accounts provisioned, the guardrails that are enabled, and your overall compliance status. Sign up for the Control Tower preview, here.
  • The third prescriptive service, called AWS Lake Formation, will reduce your data lake build time from months to days. Prior to AWS Lake Formation, setting up a data lake involved numerous granular tasks. Creating a data lake with Lake Formation is as simple as defining where your data resides and what data access and security policies you want to apply. Lake Formation then collects and catalogs data from databases and object storage, moves the data into your new Amazon S3 data lake, cleans and classifies data using machine learning algorithms, and secures access to your sensitive data. Get started with a preview of AWS Lake Formation, here.
  • Next up, AWS IoT Greengrass enables enhanced security through hardware-rooted private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing your private key on a hardware secure element adds hardware-root-of-trust security to existing AWS IoT Greengrass security features that include X.509 certificates for TLS mutual authentication and encryption of data both in transit and at rest. You can also use the hardware secure element to protect secrets that you deploy to your AWS IoT Greengrass device using AWS IoT Greengrass Secrets Manager. To try these security enhancements for yourself, check out https://aws.amazon.com/greengrass/.
  • You can now use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. Previously, KMS offered the ability to store keys in shared HSMs managed by KMS. However, we heard from customers that their needs were more nuanced. In particular, they needed to manage keys in single-tenant HSMs under their exclusive control. With KMS custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster. Get started with KMS custom key store by following the steps in this blog post.
  • We’re excited to announce the release of ATO on AWS to help customers and partners speed up the FedRAMP approval process (which has traditionally taken SaaS providers up to 2 years to complete). We’ve already had customers, such as Smartsheet, complete the process in less than 90 days with ATO on AWS. Customers will have access to training, tools, pre-built CloudFormation templates, control implementation details, and pre-built artifacts. Additionally, customers are able to access direct engagement and guidance from AWS compliance specialists and support from expert AWS consulting and technology partners who are a part of our Security Automation and Orchestration (SAO) initiative, including GitHub, Yubico, RedHat, Splunk, Allgress, Puppet, Trend Micro, Telos, CloudCheckr, Saint, Center for Internet Security (CIS), OKTA, Barracuda, Anitian, Kratos, and Coalfire. To get started with ATO on AWS, contact the AWS partner team at [email protected].
  • Finally, I announced our first conference dedicated to cloud security, identity and compliance: AWS re:Inforce. The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Convention and Exhibition Center. The cost for a full conference pass will be $1,099. I’m hoping to see you all there. Sign up here to be notified of when registration opens.
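If you want to try Security Hub from code rather than the console, here is a minimal boto3 sketch; the region and the findings filter are examples I chose for illustration, not part of the announcement:

import boto3

# A minimal sketch: enable Security Hub on this account and print a few
# active findings. Region and filter values are examples.
securityhub = boto3.client('securityhub', region_name='us-east-1')

# A single API call enables Security Hub on the account.
securityhub.enable_security_hub()

# Pull back a handful of active findings and print a short summary.
findings = securityhub.get_findings(
    Filters={'RecordState': [{'Value': 'ACTIVE', 'Comparison': 'EQUALS'}]},
    MaxResults=10
)['Findings']

for finding in findings:
    print(finding['Severity']['Normalized'], '-', finding['Title'])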

Key re:Invent Takeaways

AWS is here to help you build

  1. Customers want to innovate, and cloud needs to securely enable this. Companies need to be able to innovate to meet rapidly evolving consumer demands. This means they need cloud security capabilities they can rely on to meet their specific security requirements, while allowing them to continue to meet and exceed customer expectations. AWS Lake Formation, AWS Control Tower, and AWS Security Hub aggregate and automate otherwise manual processes involved with setting up a secure and compliant cloud environment, giving customers greater flexibility to innovate, create, and manage their businesses.
  2. Cloud Security is as much art as it is science. Getting to what you really need to know about your security posture can be a challenge. At AWS, we’ve found that the sweet spot lies in services and features that enable you to continuously gain greater depth of knowledge into your security posture, while automating mission critical tasks that relieve you from having to constantly monitor your infrastructure. This manifests itself in having an end-to-end automated remediation workflow. I spent some time covering this in my re:Invent session, and will continue to advocate using a combination of services, such as AWS Lambda, WAF, S3, AWS CloudTrail, and AWS Config to proactively identify, mitigate, and remediate threats that may arise as your infrastructure evolves.
  3. Remove human access to data. I’ve set a goal at AWS to reduce human access to data by 80%. While that number may sound lofty, it’s purposeful, because the only way to achieve this is through automation. There have been a number of security incidents in the news across industries, ranging from inappropriate access to personal information in healthcare, to credential stuffing in financial services. The way to protect against such incidents? Automate key security measures and minimize your attack surface by enabling access control and credential management with services like AWS IAM and AWS Secrets Manager. Additional gains can be found by leveraging threat intelligence through continuous monitoring of incidents via services such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie (intelligence from these services will now be available in AWS Security Hub).
  4. Get your leadership on board with your security plan. We offer 500+ security services and features; however, new services and technology can’t be wholly responsible for implementing reliable security measures. Security teams need to set expectations with leadership early, aligning on a number of critical protocols, including how to restrict and monitor human access to data, patching and log retention duration, credential lifespan, blast radius reduction, embedded encryption throughout AWS architecture, and canaries and invariants for security functionality. It’s also important to set security Key Performance Indicators (KPIs) to continuously track. At AWS, we monitor the number of AppSec reviews, how many security checks we can automate, third-party compliance audits, metrics on internal time spent, and conformity with Service Level Agreements (SLAs). While the needs of your business may vary, we find baseline KPIs to be consistent measures of security assurance that can be easily communicated to leadership.

Final Thoughts

Queen’s famous lyric, “I want it all, I want it all, and I want it now,” accurately captures the sentiment at re:Invent this year. Security will always be job zero for us, and we continue to iterate on behalf of customers so they can securely build, experiment and create … right now! AWS is trusted by many of the world’s most risk-sensitive organizations precisely because we have demonstrated this unwavering commitment to putting security above all. Still, I believe we are in the early days of innovation and adoption of the cloud, and I look forward to seeing both the gains and use cases that come out of our latest batch of tools and services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds five patents in the field of cloud security architecture. Follow Steve on Twitter

New – EC2 P3dn GPU Instances with 100 Gbps Networking & Local NVMe Storage for Faster Machine Learning + P3 Price Reduction

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-ec2-p3dn-gpu-instances-with-100-gbps-networking-local-nvme-storage-for-faster-machine-learning-p3-price-reduction/

Late last year I told you about Amazon EC2 P3 instances and also spent some time discussing the concept of the Tensor Core, a specialized compute unit that is designed to accelerate machine learning training and inferencing for large, deep neural networks. Our customers love P3 instances and are using them to run a wide variety of machine learning and HPC workloads. For example, fast.ai set a speed record for deep learning, training the ResNet-50 deep learning model on 1 million images for just $40.

Raise the Roof
Today we are expanding the P3 offering at the top end with the addition of p3dn.24xlarge instances, with 2x the GPU memory and 1.5x as many vCPUs as p3.16xlarge instances. The instances feature 100 Gbps network bandwidth (up to 4x the bandwidth of previous P3 instances), local NVMe storage, the latest NVIDIA V100 Tensor Core GPUs with 32 GB of GPU memory, NVIDIA NVLink for faster GPU-to-GPU communication, and AWS-custom Intel® Xeon® Scalable (Skylake) processors running at 3.1 GHz sustained all-core Turbo, all built atop the AWS Nitro System. Here are the specs:

Model: p3dn.24xlarge
NVIDIA V100 Tensor Core GPUs: 8
GPU Memory: 256 GB
NVIDIA NVLink: 300 GB/s
vCPUs: 96
Main Memory: 768 GiB
Local Storage: 2 x 900 GB NVMe SSD
Network Bandwidth: 100 Gbps
EBS-Optimized Bandwidth: 14 Gbps

If you are doing large-scale training runs using MXNet, TensorFlow, PyTorch, or Keras, be sure to check out the Horovod distributed training framework that is included in the Amazon Deep Learning AMIs. You should also take a look at the new NVIDIA AI Software containers in the AWS Marketplace; these containers are optimized for use on P3 instances with V100 GPUs.

With a total of 256 GB of GPU memory (twice as much as the largest of the current P3 instances), the p3dn.24xlarge allows you to explore bigger and more complex deep learning algorithms. You can rotate and scale your training images faster than ever before, while also taking advantage of the Intel AVX-512 instructions and other leading-edge Skylake features. Your GPU code can scale out across multiple GPUs and/or instances using NVLink and the NVLink Collective Communications Library (NCCL). Using NCCL will also allow you to fully exploit the 100 Gbps of network bandwidth that is available between instances when used within a Placement Group.
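If you want to script that setup, here is a hedged boto3 sketch that creates a cluster placement group and launches a pair of p3dn.24xlarge instances into it; the AMI ID and key pair name are placeholders, not real values:

import boto3

# A minimal sketch: launch two p3dn.24xlarge instances into a cluster
# placement group so they can use the full inter-instance bandwidth.
ec2 = boto3.client('ec2', region_name='us-east-1')

# Cluster placement groups pack instances close together for low-latency,
# high-throughput networking between them.
ec2.create_placement_group(GroupName='p3dn-training', Strategy='cluster')

ec2.run_instances(
    ImageId='ami-EXAMPLE',        # placeholder; e.g. a Deep Learning AMI with a recent ENA driver
    InstanceType='p3dn.24xlarge',
    MinCount=2,
    MaxCount=2,
    KeyName='my-key-pair',        # placeholder
    Placement={'GroupName': 'p3dn-training'},
)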

In addition to being a great fit for distributed machine learning training and image classification, these instances provide plenty of power for your HPC jobs. You can render 3D images, transcode video in real time, model financial risks, and much more.

You can use existing AMIs as long as they include the ENA, NVMe, and NVIDIA drivers. You will need to upgrade to the latest ENA driver to get 100 Gbps networking; if you are using the Deep Learning AMIs, be sure to use a recent version that is optimized for AVX-512.

Available Today
The p3dn.24xlarge instances are available now in the US East (N. Virginia) and US West (Oregon) Regions and you can start using them today in On-Demand, Spot, and Reserved Instance form.

Bonus – P3 Price Reduction
As part of today’s launch we are also reducing prices for the existing P3 instances. The following prices went into effect on December 6, 2018:

  • 20% reduction for all prices (On-Demand and RI) and all instance sizes in the Asia Pacific (Tokyo) Region.
  • 15% reduction for all prices (On-Demand and RI) and all instance sizes in the Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Seoul) Regions.
  • 15% reduction for Standard RIs with a three-year term for all instance sizes in all regions except Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Seoul).

The percentages apply to instances running Linux; slightly smaller percentages apply to instances that run Microsoft Windows and other operating systems.

These reductions will help to make your machine learning training and inferencing even more affordable, and are being brought to you as we pursue our goal of putting machine learning in the hands of every developer.

Jeff;


New – AWS Well-Architected Tool – Review Workloads Against Best Practices

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-well-architected-tool-review-workloads-against-best-practices/

Back in 2015 we launched the AWS Well-Architected Framework and I asked Are You Well-Architected? The framework includes five pillars that encapsulate a set of core strategies and best practices for architecting systems in the cloud:

Operational Excellence – Running and managing systems to deliver business value.

Security – Protecting information and systems.

Reliability – Preventing and quickly recovering from failures.

Performance Efficiency – Using IT and compute resources efficiently.

Cost Optimization – Avoiding unneeded costs.

I think of it as a way to make sure that you are using the cloud right, and that you are using it well.

AWS Solutions Architects (SA) work with our customers to perform thousands of Well-Architected reviews every year! Even at that pace, the demand for reviews always seems to be a bit higher than our supply of SAs. Our customers tell us that the reviews are of great value and use the results to improve their use of AWS over time.

New AWS Well-Architected Tool
In order to make the Well-Architected reviews open to every AWS customer, we are introducing the AWS Well-Architected Tool. This is a self-service tool that is designed to help architects and their managers to review AWS workloads at any time, without the need for an AWS Solutions Architect.

The AWS Well-Architected Tool helps you to define your workload, answer questions designed to review the workload against the best practices specified by the five pillars, and to walk away with a plan that will help you to do even better over time. The review process includes educational content that focuses on the most current set of AWS best practices.

Let’s take a quick tour…

AWS Well-Architected Tool in Action
I open the AWS Well-Architected Tool Console and click Define workload to get started:

I begin by naming and defining my workload. I choose an industry type and an industry, list the regions where I operate, indicate if this is a pre-production or production workload, and optionally enter a list of AWS account IDs to define the span of the workload. Then I click Define workload to move ahead:

I am ready to get started, so I click Start review:

The first pillar is Operational Excellence. There are nine questions, each with multiple-choice answers. Helpful resources are displayed on the side:

I can go through the pillars and questions in order, save and exit, and so forth. After I complete my review, I can consult the improvement plan for my workload:

I can generate a detailed PDF report that summarizes my answers:

I can review my list of workloads:

And I can see the overall status in the dashboard:

Available Now
The AWS Well-Architected Tool is available now and you can start using it today for workloads in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions at no charge.

Jeff;

New for AWS Lambda – Use Any Programming Language and Share Common Components

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-lambda-use-any-programming-language-and-share-common-components/

I remember the excitement when AWS Lambda was announced in 2014. Four years on, customers are using Lambda functions for many different use cases. For example, iRobot is using AWS Lambda to provide compute services for their Roomba robotic vacuum cleaners, Fannie Mae to run Monte Carlo simulations for millions of mortgages, and Bustle to serve billions of requests for their digital content.

Today, we are introducing two new features that are going to make serverless development even easier:

  • Lambda Layers, a way to centrally manage code and data that is shared across multiple functions.
  • Lambda Runtime API, a simple interface to use any programming language, or a specific language version, for developing your functions.

These two features can be used together: runtimes can be shared as layers so that developers can pick them up and use their favorite programming language when authoring Lambda functions.

Let’s see how they work more in detail.

Lambda Layers

When building serverless applications, it is quite common to have code that is shared across Lambda functions. It can be your own custom code that is used by more than one function, or a standard library that you add to simplify the implementation of your business logic.

Previously, you would have to package and deploy this shared code together with all the functions using it. Now, you can put common components in a ZIP file and upload it as a Lambda Layer. Your function code doesn’t need to be changed and can reference the libraries in the layer as it would normally do.

Layers can be versioned to manage updates; each version is immutable. When a version is deleted or permissions to use it are revoked, functions that used it previously will continue to work, but you won’t be able to create new ones.

In the configuration of a function, you can reference up to five layers, one of which can optionally be a runtime. When the function is invoked, layers are installed in /opt in the order you provided. Order is important because layers are all extracted under the same path, so each layer can potentially overwrite the previous one. This approach can be used to customize the environment. For example, the first layer can be a runtime and the second layer adds specific versions of the libraries you need.

The overall, uncompressed size of function and layers is subject to the usual unzipped deployment package size limit.
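To make this concrete, here is a hedged boto3 sketch of publishing a ZIP file as a layer version and attaching it to an existing function; the layer name, file path, and function name are placeholders:

import boto3

# A minimal sketch: publish a ZIP file as a new layer version and attach it
# to an existing function. Names and paths below are placeholders.
lambda_client = boto3.client('lambda')

with open('my-shared-libs.zip', 'rb') as zip_file:
    layer = lambda_client.publish_layer_version(
        LayerName='my-shared-libs',
        Description='Common libraries shared across functions',
        Content={'ZipFile': zip_file.read()},
        CompatibleRuntimes=['python3.6'],
    )

# Reference the new layer version from the function configuration.
lambda_client.update_function_configuration(
    FunctionName='my-function',
    Layers=[layer['LayerVersionArn']],
)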

Layers can be used within an AWS account, shared between accounts, or shared publicly with the broad developer community.

There are many advantages when using layers. For example, you can use Lambda Layers to:

  • Enforce separation of concerns, between dependencies and your custom business logic.
  • Make your function code smaller and more focused on what you want to build.
  • Speed up deployments, because less code must be packaged and uploaded, and dependencies can be reused.

Based on our customer feedback, and to provide an example of how to use Lambda Layers, we are publishing a public layer which includes NumPy and SciPy, two popular scientific libraries for Python. This prebuilt and optimized layer can help you start very quickly with data processing and machine learning applications.

In addition to that, you can find layers for application monitoring, security, and management from partners such as Datadog, Epsagon, IOpipe, NodeSource, Thundra, Protego, PureSec, Twistlock, Serverless, and Stackery.

Using Lambda Layers

In the Lambda console I can now manage my own layers:

I don’t want to create a new layer now but use an existing one in a function. I create a new Python function and, in the function configuration, I can see that there are no referenced layers. I choose to add a layer:

From the list of layers compatible with the runtime of my function, I select the one with NumPy and SciPy, using the latest available version:

After I add the layer, I click Save to update the function configuration. If you’re using more than one layer, you can adjust the order in which they are merged with the function code here.

To use the layer in my function, I just have to import the features I need from NumPy and SciPy:

import numpy as np
from scipy.spatial import ConvexHull

def lambda_handler(event, context):

    print("\nUsing NumPy\n")

    print("random matrix_a =")
    matrix_a = np.random.randint(10, size=(4, 4))
    print(matrix_a)

    print("random matrix_b =")
    matrix_b = np.random.randint(10, size=(4, 4))
    print(matrix_b)

    print("matrix_a * matrix_b = ")
    print(matrix_a.dot(matrix_b))
    print("\nUsing SciPy\n")

    num_points = 10
    print(num_points, "random points:")
    points = np.random.rand(num_points, 2)
    for i, point in enumerate(points):
        print(i, '->', point)

    hull = ConvexHull(points)
    print("The smallest convex set containing all",
        num_points, "points has", len(hull.simplices),
        "sides,\nconnecting points:")
    for simplex in hull.simplices:
        print(simplex[0], '<->', simplex[1])

I run the function, and looking at the logs, I can see some interesting results.

First, I am using NumPy to perform matrix multiplication (matrices and vectors are often used to represent the inputs, outputs, and weights of neural networks):

random matrix_a =
[[8 4 3 8]
[1 7 3 0]
[2 5 9 3]
[6 6 8 9]]
random matrix_b =
[[2 4 7 7]
[7 0 0 6]
[5 0 1 0]
[4 9 8 6]]
matrix_a * matrix_b = 
[[ 91 104 123 128]
[ 66 4 10 49]
[ 96 35 47 62]
[130 105 122 132]]

Then, I use SciPy advanced spatial algorithms to compute something quite hard to build by myself: finding the smallest “convex set” containing a list of points on a plane. For example, this can be used in a Lambda function receiving events from multiple geographic locations (corresponding to buildings, customer locations, or devices) to visually “group” similar events together in an efficient way:

10 random points:
0 -> [0.07854072 0.91912467]
1 -> [0.11845307 0.20851106]
2 -> [0.3774705 0.62954561]
3 -> [0.09845837 0.74598477]
4 -> [0.32892855 0.4151341 ]
5 -> [0.00170082 0.44584693]
6 -> [0.34196204 0.3541194 ]
7 -> [0.84802508 0.98776034]
8 -> [0.7234202 0.81249389]
9 -> [0.52648981 0.8835746 ]
The smallest convex set containing all 10 points has 6 sides,
connecting points:
1 <-> 5
0 <-> 5
0 <-> 7
6 <-> 1
8 <-> 7
8 <-> 6

When I was building this example, there was no need to install or package dependencies. I could quickly iterate on the code of the function. Deployments were very fast because I didn’t have to include large libraries or modules.

To visualize the output of SciPy, it was easy for me to create an additional layer to import matplotlib, a plotting library. By adding a few lines of code at the end of the previous function, I can now upload to Amazon Simple Storage Service (S3) an image that shows how the “convex set” wraps all the points:

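    # Note (assumption, not shown in the original snippet): this code relies on
    # io and boto3 being imported, on matplotlib using a non-interactive backend
    # (e.g. matplotlib.use('Agg')) before importing matplotlib.pyplot as plt,
    # and on S3_BUCKET_NAME and S3_KEY being defined elsewhere in the file.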
    plt.plot(points[:,0], points[:,1], 'o')
    for simplex in hull.simplices:
        plt.plot(points[simplex, 0], points[simplex, 1], 'k-')
        
    img_data = io.BytesIO()
    plt.savefig(img_data, format='png')
    img_data.seek(0)

    s3 = boto3.resource('s3')
    bucket = s3.Bucket(S3_BUCKET_NAME)
    bucket.put_object(Body=img_data, ContentType='image/png', Key=S3_KEY)
    
    plt.close()

Lambda Runtime API

You can now select a custom runtime when creating or updating a function:

With this selection, the function must include (in its code or in a layer) an executable file called bootstrap, responsible for the communication between your code (that can use any programming language) and the Lambda environment.

The runtime bootstrap uses a simple HTTP-based interface to get the event payload for a new invocation and to return the response from the function. Information on the interface endpoint and the function handler is shared through environment variables.
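To make that interaction concrete, here is a hedged Python sketch of the kind of loop a bootstrap might implement; the echo handler is a placeholder for invoking whatever handler the _HANDLER environment variable names, and this is a sketch rather than an official bootstrap:

#!/usr/bin/env python3
# A minimal sketch of a custom runtime loop, not an official bootstrap.
import json
import os
import urllib.request

# The Lambda environment provides the Runtime API endpoint in this variable.
api = os.environ['AWS_LAMBDA_RUNTIME_API']
base = 'http://{0}/2018-06-01/runtime/invocation'.format(api)

def handler(event):
    # Placeholder: a real bootstrap would invoke the handler named in _HANDLER.
    return {'echo': event}

while True:
    # Long-poll for the next invocation and grab its request id.
    with urllib.request.urlopen(base + '/next') as next_invocation:
        request_id = next_invocation.headers['Lambda-Runtime-Aws-Request-Id']
        event = json.loads(next_invocation.read())

    response = json.dumps(handler(event)).encode('utf-8')

    # Post the function's result back to the Runtime API.
    urllib.request.urlopen(urllib.request.Request(
        base + '/' + request_id + '/response', data=response, method='POST'))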

For the execution of your code, you can use anything that can run in the Lambda execution environment. For example, you can bring an interpreter for the programming language of your choice.

You only need to know how the Runtime API works if you want to manage or publish your own runtimes. As a developer, you can quickly use runtimes that are shared with you as layers.

We are making open source runtimes for C++ and Rust available today.

We are also working with our partners to provide more open source runtimes:

  • Erlang (Alert Logic)
  • Elixir (Alert Logic)
  • Cobol (Blu Age)
  • N|Solid (NodeSource)
  • PHP (Stackery)

The Runtime API is the future of how we’ll support new languages in Lambda. For example, this is how we built support for the Ruby language.

Available Now

You can use runtimes and layers in all regions where Lambda is available, via the console or the AWS Command Line Interface (CLI). You can also use the AWS Serverless Application Model (SAM) and the SAM CLI to test, deploy and manage serverless applications using these new features.

There is no additional cost for using runtimes and layers. The storage of your layers counts toward the AWS Lambda function storage limit per region.

To learn more about using the Runtime API and Lambda Layers, don’t miss our webinar on December 11, hosted by Principal Developer Advocate Chris Munns.

I am so excited by these new features. Please let me know what you are going to build next!

New – Compute, Database, Messaging, Analytics, and Machine Learning Integration for AWS Step Functions

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-compute-database-messaging-analytics-and-machine-learning-integration-for-aws-step-functions/

AWS Step Functions is a fully managed workflow service for application developers. You can think & work at a high level, connecting and coordinating activities in a reliable and repeatable way, while keeping your business logic separate from your workflow logic. After you design and test your workflows (which we call state machines), you can deploy them at scale, with tens or even hundreds of thousands running independently and concurrently. Step Functions tracks the status of each workflow, takes care of retrying activities on transient failures, and also simplifies monitoring and logging. To learn more, step through the Create a Serverless Workflow with AWS Step Functions and AWS Lambda tutorial.

Since our launch at AWS re:Invent 2016, our customers have made great use of Step Functions (my post, Things go Better with Step Functions describes a real-world use case). Our customers love the fact that they can easily call AWS Lambda functions to implement their business logic, and have asked us for even more options.

More Integration, More Power
Today we are giving you the power to use eight more AWS services from your Step Function state machines. Here are the new actions:

DynamoDB – Get an existing item from an Amazon DynamoDB table; put a new item into a DynamoDB table.

AWS Batch – Submit an AWS Batch job and wait for it to complete.

Amazon ECS – Run an Amazon ECS or AWS Fargate task using a task definition.

Amazon SNS – Publish a message to an Amazon Simple Notification Service (SNS) topic.

Amazon SQS – Send a message to an Amazon Simple Queue Service (SQS) queue.

AWS Glue – Start an AWS Glue job run.

Amazon SageMaker – Create an Amazon SageMaker training job; create a SageMaker transform job (learn more by reading New Features for Amazon SageMaker: Workflows, Algorithms, and Accreditation).

You can use these actions individually or in combination with each other. To help you get started, we’ve built some cool samples that will show you how to manage a batch job, manage a container task, copy data from DynamoDB, retrieve the status of a Batch job, and more. For example, here’s a visual representation of the sample that copies data from DynamoDB to SQS:

The sample (available to you as an AWS CloudFormation template) creates all of the necessary moving parts including a Lambda function that will populate (seed) the table with some test data. After I create the stack I can locate the state machine in the Step Functions Console and execute it:

I can inspect each step in the console; the first one (Seed the DynamoDB Table) calls a Lambda function that creates some table entries and returns a list of keys (message ids):

The third step (Send Message to SQS) starts with the following input:

And delivers this output, including the SQS MessageId:

As you can see, the state machine took care of all of the heavy lifting — calling the Lambda function, iterating over the list of message IDs, and calling DynamoDB and SQS for each one. I can run many copies at the same time:
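To start several executions from code instead of clicking in the console, a minimal boto3 sketch might look like this; the state machine ARN is a placeholder for the one created by the sample stack:

import json
import uuid
import boto3

stepfunctions = boto3.client('stepfunctions')

# Placeholder ARN for the state machine created by the CloudFormation sample.
state_machine_arn = 'arn:aws:states:us-east-1:123456789012:stateMachine:CopyDynamoDBToSQS'

# Start five executions; execution names must be unique, so append a UUID.
for i in range(5):
    stepfunctions.start_execution(
        stateMachineArn=state_machine_arn,
        name='copy-run-{0}-{1}'.format(i, uuid.uuid4()),
        input=json.dumps({}),
    )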

I’m sure you can take this example as a starting point and build something awesome with it; be sure to check out the other samples and templates for some ideas!

If you are already building and running your own state machines, you should know about Magic ARNs and Parameters:

Magic ARNs – Each of these new operations is represented by a special “magic” (that’s the technical term Tim used) ARN. There’s one for sending to SQS, another one for running a batch job, and so forth.

Parameters – You can use the Parameters field in a Task state to control the parameters that are passed to the service APIs that implement the new functions. Your state machine definitions can include static JSON or references (in JsonPath form) to specific elements in the state input.

Here’s how the Magic ARNs and Parameters are used to define a state:

   "Read Next Message from DynamoDB": {
      "Type": "Task",
      "Resource": "arn:aws:states:::dynamodb:getItem",
      "Parameters": {
        "TableName": "StepDemoStack-DDBTable-1DKVAVTZ1QTSH",
        "Key": {
          "MessageId": {"S.$": "$.List[0]"}
        }
      },
      "ResultPath": "$.DynamoDB",
      "Next": "Send Message to SQS"
    },

Available Now
The new integrations are available now and you can start using them today in all AWS Regions where Step Functions are available. You pay the usual charge for each state transition and for the AWS services that you consume.

Jeff;

New – AWS Toolkits for PyCharm, IntelliJ (Preview), and Visual Studio Code (Preview)

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-aws-toolkits-for-pycharm-intellij-preview-and-visual-studio-code-preview/

Software developers have their own preferred tools. Some use powerful editors, others prefer Integrated Development Environments (IDEs) that are tailored for specific languages and platforms. In 2014 I created my first AWS Lambda function using the editor in the Lambda console. Now, you can choose from a rich set of tools to build and deploy serverless applications. For example, the editor in the Lambda console was greatly enhanced last year when AWS Cloud9 was released. For .NET applications, you can use the AWS Toolkit for Visual Studio and AWS Tools for Visual Studio Team Services.

AWS Toolkits for PyCharm, IntelliJ, and Visual Studio Code

Today, we are announcing the general availability of the AWS Toolkit for PyCharm. We are also announcing the developer preview of the AWS Toolkits for IntelliJ and Visual Studio Code, which are under active development in GitHub. These open source toolkits will enable you to easily develop serverless applications, including a full create, step-through debug, and deploy experience in the IDE and language of your choice, be it Python, Java, Node.js, or .NET.

For example, the AWS Toolkit for PyCharm lets you create, locally test and debug, and deploy serverless applications without leaving your IDE.

These toolkits are distributed under the open source Apache License, Version 2.0.

Installation

Some features use the AWS Serverless Application Model (SAM) CLI. You can find installation instructions for your system here.

The AWS Toolkit for PyCharm is available via the IDEA Plugin Repository. To install it, in the Settings/Preferences dialog, click Plugins, search for “AWS Toolkit”, use the checkbox to enable it, and click the Install button. You will need to restart your IDE for the changes to take effect.

The AWS Toolkits for IntelliJ and Visual Studio Code are currently in developer preview and under active development. You are welcome to build and install them from their GitHub repositories.

Building a Serverless application with PyCharm

After installing the AWS SAM CLI and the AWS Toolkit, I create a new project in PyCharm and choose SAM on the left to create a serverless application using the AWS Serverless Application Model. I name my project hello-world in the Location field. Expanding More Settings, I choose which SAM template to use as the starting point for my project. For this walkthrough, I select the “AWS SAM Hello World” template.

In PyCharm you can use credentials and profiles from your AWS Command Line Interface (CLI) configuration. You can change AWS region quickly if you have multiple environments.
The AWS Explorer shows Lambda functions and AWS CloudFormation stacks in the selected AWS region. Starting from a CloudFormation stack, you can see which Lambda functions are part of it.

The function handler is in the app.py file. After I open the file, I click on the Lambda icon on the left of the function declaration to have the option to run the function locally or start a local step-by-step debugging session.

First, I run the function locally. I can configure the payload of the event that is provided as input for the local invocation, starting from the event templates provided for most services, such as Amazon API Gateway, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), and so on. You can use a file for the payload, or select the share checkbox to make it available to other team members. The function is executed locally, but here you can choose the credentials and the region to be used if the function is calling other AWS services, such as Amazon Simple Storage Service (S3) or Amazon DynamoDB.

A local container is used to emulate the Lambda execution environment. This function is implementing a basic web API, and I can check that the result is in the format expected by the API Gateway.

After that, I want to get more information on what my code is doing. I set a breakpoint and start a local debugging session. I use the same input event as before. Again, you can choose the credentials and region for the AWS services used by the function.

I step over the HTTP request in the code to inspect the response in the Variables tab. Here you have access to all local variables, including the event and the context provided as input to the function.

After that, I resume the program to reach the end of the debugging session.

Now I am confident enough to deploy the serverless application by right-clicking on the project (or the SAM template file). I can create a new CloudFormation stack, or update an existing one. For now, I create a new stack called hello-world-prod. For example, you can have a stack for production, and one for testing. I select an S3 bucket in the region to store the package used for the deployment. If your template has parameters, you can set the values used by this deployment here.

After a few minutes, the stack creation is complete and I can run the function in the cloud with a right-click in the AWS Explorer. Here there is also the option to jump to the source code of the function.

As expected, the result of the remote invocation is the same as the local execution. My serverless application is in production!

Using these toolkits, developers can test locally to find problems before deployment, change the code of their application or the resources they need in the SAM template, and update an existing stack, quickly iterating until they reach their goal. For example, they can add an S3 bucket to store images or documents, add a DynamoDB table to store their users, or change the permissions used by their functions.

I am really excited by how much faster and easier it is to build your ideas on AWS. Now you can use your preferred environment to accelerate even further. I look forward to seeing what you will do with these new tools!

New – Hibernate Your EC2 Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-hibernate-your-ec2-instances/

As you know, you can easily build highly scalable AWS applications that launch fresh EC2 instances on an as-needed basis. While the instances can be up and running in a matter of seconds, booting the operating system and the application can take considerable time. Also, caches and other memory-centric application components can take some time (sometimes tens of minutes) to preload or warm up. Both of these factors impose a delay that can force you to over-provision in case you need incremental capacity very quickly.

Hibernation for EC2 Instances
Today we are giving you the ability to launch EC2 instances, set them up as desired, hibernate them, and then bring them back to life when you need them. The hibernation process stores the in-memory state of the instance, along with its private and elastic IP addresses, allowing it to pick up exactly where it left off.

This feature is available today and you can use it on freshly launched M3, M4, M5, C3, C4, C5, R3, R4, and R5 instances running Amazon Linux 1 (support for Amazon Linux 2 is in the works and will be ready soon). It applies to On-Demand instances and instances running with Reserved Instance coverage.

When an instance is instructed to hibernate, it writes the in-memory state to a file in the root EBS volume and then (in effect) shuts itself down. The AMI used to launch the instance must be encrypted, as must the root EBS volume of the instance. The encryption ensures proper protection for sensitive data when it is copied from memory to the EBS volume.

While the instance is in hibernation, you pay only for the EBS volumes and Elastic IP Addresses attached to it; there are no other hourly charges (just like any other stopped instance).

Hibernation in Action
In order to check out this feature I launch a c4.large instance, and select hibernation as a stop behavior:

I also expand my instance’s root volume, adding 10 GB + the memory size of the instance to the desired size:

I also create and associate an Elastic IP address with my instance since the public IP address will change. My instance is up and running, and I can check the uptime:

Then I select the instance in the EC2 Console and choose Stop – Hibernate from the Instance State menu (API and CLI support is also available):

The instance state transitions from running to stopping, and then to stopped, in seconds:

The console provides additional information about the transition:

The SSH connection to the instance drops, since it is no longer running:

Later, when I am ready to proceed, I click Start:

This time the state goes from stopped to pending, and then to running, again in seconds, and I can reconnect. I can then use uptime to see that the instance has not been rebooted, but has continued from where it left off:

If I were using this instance interactively, I could use a session manager such as screen, tmux, or mosh to make this totally seamless. However, the most interesting use cases for hibernation revolve around long-running processes and services that take a lot of time to initialize before they are ready to accept traffic, where interactive sessions are not a concern.
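If you prefer the API to the console, here is a hedged boto3 sketch of the same launch / hibernate / resume flow; the AMI ID (which must reference an encrypted AMI, per the requirements above) is a placeholder:

import boto3

# A minimal sketch of the hibernation flow via the EC2 API.
# The AMI ID below is a placeholder and must reference an encrypted AMI.
ec2 = boto3.client('ec2')

# Launch an instance with hibernation enabled.
reservation = ec2.run_instances(
    ImageId='ami-EXAMPLE',
    InstanceType='c4.large',
    MinCount=1,
    MaxCount=1,
    HibernationOptions={'Configured': True},
)
instance_id = reservation['Instances'][0]['InstanceId']

# Later: hibernate instead of a plain stop...
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)

# ...and resume right where it left off.
ec2.start_instances(InstanceIds=[instance_id])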

Things to Know
As you can see, hibernation is really easy to use, and I hope that you are already thinking of some ways to apply it to your application. Here are a couple of things to keep in mind:

Instance Type – You can enable and use hibernation on freshly launched instances of the types that I listed above.

Root Volume Size – The root volume must have free space equal to the amount of RAM on the instance in order for the hibernation to succeed.

Operating Systems – The newest Amazon Linux 1 AMIs are configured for hibernation, with many others in the works. You will need to create an encrypted AMI, using one of these AMIs as a base. You can also follow our directions to customize and use your own AMI.

Modifications – You cannot modify the instance size or type while it is in hibernation, but you can modify the user data and the EBS Optimization setting.

Pricing – While the instance is in hibernation, you pay only for the EBS storage and any Elastic IP addresses attached to the instance.

Performance – The time to hibernate or resume is dependent on the memory size of the instance, the amount of in-memory data to be saved, and the throughput of the root EBS volume.

Coming Soon – We are working on support for Amazon Linux 2, Ubuntu, Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2, Windows Server 2016, along with the SQL Server variants of the Windows AMIs.

Available Now
This feature is available now in the US East (N. Virginia, Ohio), US West (N. California, Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo), and EU (Frankfurt, London, Ireland, Paris) Regions.

Jeff;