
CloudEndure Highly Automated Disaster Recovery – 80% Price Reduction

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/cloudendure-highly-automated-disaster-recovery-80-price-reduction/

AWS acquired CloudEndure last year. After the acquisition we began working with our new colleagues to integrate their products into the AWS product portfolio.

CloudEndure Disaster Recovery is designed to help you minimize downtime and data loss. It continuously replicates the contents of your on-premises, virtual, or cloud-based systems to a low-cost staging area in the AWS region of your choice, within the confines of your AWS account:

The block-level replication encompasses essentially every aspect of the protected system including the operating system, configuration files, databases, applications, and data files. CloudEndure Disaster Recovery can replicate any database or application that runs on supported versions of Linux or Windows, and is commonly used with Oracle and SQL Server, as well as enterprise applications such as SAP. If you do an AWS-to-AWS replication, the AWS environment within a specified VPC is replicated; this includes the VPC itself, subnets, security groups, routes, ACLs, Internet Gateways, and other items.

Here are some of the most popular and interesting use cases for CloudEndure Disaster Recovery:

On-Premises to Cloud Disaster Recovery – This model moves your secondary data center to the AWS Cloud without downtime or performance impact. You can improve your reliability, availability, and security without having to invest in duplicate hardware, networking, or software.

Cross-Region Disaster Recovery – If your application is already on AWS, you can add an additional layer of cost-effective protection and improve your business continuity by setting up cross-region disaster recovery. You can set up continuous replication between regions or Availability Zones and meet stringent RPO (Recovery Point Objective) or RTO (Recovery Time Objective) requirements.

Cross-Cloud Disaster Recovery – If you run workloads on other clouds, you can increase your overall resilience and meet compliance requirements by using AWS as your DR site. CloudEndure Disaster Recovery will replicate and recover your workloads, including automatic conversion of your source machines so that they boot and run natively on AWS.

80% Price Reduction
Recovery is quick and robust, yet cost-effective. In fact, we are reducing the price for CloudEndure Disaster Recovery by about 80% today, making it more cost-effective than ever: $0.028 per hour, or about $20 per month per server.

If you have tried to implement a DR solution in the traditional way, you know that it requires a costly set of duplicate IT resources (storage, compute, and networking) and software licenses. By replicating your workloads into a low-cost staging area in your preferred AWS Region, CloudEndure Disaster Recovery reduces compute costs by 95% and eliminates the need to pay for duplicate OS and third-party application licenses.

To learn more, watch the Disaster Recovery to AWS Demo Video:

After that, be sure to visit the new CloudEndure Disaster Recovery page!

Jeff;

In the Works – AWS Osaka Local Region Expansion to Full Region

Post Syndicated from Harunobu Kameda original https://aws.amazon.com/blogs/aws/in-the-works-aws-osaka-local-region-expansion-to-full-region/

Today, we are excited to announce that, due to high customer demand for additional services in Osaka, the Osaka Local Region will be expanded into a full AWS Region with three Availability Zones by early 2021. As in all AWS Regions, each Availability Zone will be isolated, with its own power source, cooling system, and physical security, and the Availability Zones will be located far enough apart to significantly reduce the risk of a single event impacting availability, yet near enough to provide low latency for high availability applications.

We are constantly expanding our infrastructure to provide customers with sufficient capacity to grow and the necessary tools to architect a variety of system designs for higher availability and robustness. AWS now operates 22 regions and 69 Availability Zones globally.

In March 2011, we launched the AWS Tokyo Region as our fifth AWS Region with two Availability Zones. After that, we launched a third Tokyo Availability Zone in 2012 and a fourth in 2018.

In February 2018, we launched the Osaka Local Region as a new region construct that comprises an isolated, fault-tolerant infrastructure design contained in a single data center and complements an existing AWS Region. Located 400km from the Tokyo Region, the Osaka Local Region has supported customers with applications that require in-country, geographic diversity for disaster recovery purposes that could not be served with the Tokyo Region alone.

Osaka Local Region in the Future
When launched, the Osaka Region will provide the same broad range of services as other AWS Regions and will be available to all AWS customers. Customers will be able to deploy multi-region systems within Japan, and users in western Japan will enjoy even lower latency than they have today.

If you are interested in how the AWS Global Infrastructure is designed and built to deliver the most flexible, reliable, scalable, and secure cloud computing environment with the highest global network performance, check out our Global Infrastructure site, which explains and visualizes it all.

Stay Tuned
I’ll be sure to share additional news about this and other upcoming AWS Regions as soon as I have it, so stay tuned! We are working on 4 more regions (Indonesia, Italy, South Africa, and Spain), and 13 more Availability Zones globally.

– Kame, Sr. Product Marketing Manager / Sr. Evangelist Amazon Web Services Japan

 

New for Amazon EFS – IAM Authorization and Access Points

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-amazon-efs-iam-authorization-and-access-points/

When building or migrating applications, we often need to share data across multiple compute nodes. Many applications use file APIs and Amazon Elastic File System (EFS) makes it easy to use those applications on AWS, providing a scalable, fully managed Network File System (NFS) that you can access from other AWS services and on-premises resources.

EFS scales on demand from zero to petabytes with no disruptions, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity. By using it, you get strong file system consistency across 3 Availability Zones. EFS performance scales with the amount of data stored, with the option to provision the throughput you need.

Last year, the EFS team focused on optimizing costs with the introduction of the EFS Infrequent Access (IA) storage class, with storage prices up to 92% lower compared to EFS Standard. You can quickly start reducing your costs by setting a Lifecycle Management policy that moves files that haven’t been accessed for a certain number of days to EFS IA.
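For example, a Lifecycle Management policy that transitions files not accessed for 30 days to EFS IA can be set with a single CLI call. Here is a minimal sketch (the file system ID below is just a placeholder):

$ aws efs put-lifecycle-configuration \
    --file-system-id fs-12345678 \
    --lifecycle-policies TransitionToIA=AFTER_30_DAYS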

Today, we are introducing two new features that simplify managing access, sharing data sets, and protecting your EFS file systems:

  • IAM authentication and authorization for NFS Clients, to identify clients and use IAM policies to manage client-specific permissions.
  • EFS access points, to enforce the use of an operating system user and group, optionally restricting access to a directory in the file system.

Using IAM Authentication and Authorization
In the EFS console, when creating or updating an EFS file system, I can now set up a file system policy. This is an IAM resource policy, similar to bucket policies for Amazon Simple Storage Service (S3), and can be used, for example, to disable root access, enforce read-only access, or enforce in-transit encryption for all clients.

Identity-based policies, such as those used by IAM users, groups, or roles, can override these default permissions. These new features work on top of EFS’s current network-based access using security groups.

I select the option to disable root access by default, click on Set policy, and then select the JSON tab. Here, I can review the policy generated based on my settings, or create a more advanced policy, for example to grant permissions to a different AWS account or a specific IAM role.

The following actions can be used in IAM policies to manage access permissions for NFS clients:

  • ClientMount to give permission to mount a file system with read-only access
  • ClientWrite to be able to write to the file system
  • ClientRootAccess to access files as root

I look at the policy JSON. I see that I can mount and read (ClientMount) the file system, and I can write (ClientWrite) in the file system, but since I selected the option to disable root access, I don’t have ClientRootAccess permissions.
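For reference, a file system policy with root access disabled looks roughly like the following sketch (the JSON generated by the console may differ in details such as statement IDs). Because ClientRootAccess is not in the list of allowed actions, clients matching the “*” principal cannot act as root:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite"
            ],
            "Resource": "arn:aws:elasticfilesystem:us-east-2:123412341234:file-system/fs-d1188b58"
        }
    ]
}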

Similarly, I can attach a policy to an IAM user or role to give specific permissions. For example, I create an IAM role to give full access to this file system (including root access) with this policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite",
                "elasticfilesystem:ClientRootAccess"
            ],
            "Resource": "arn:aws:elasticfilesystem:us-east-2:123412341234:file-system/fs-d1188b58"
        }
    ]
}

I start an Amazon Elastic Compute Cloud (EC2) instance in the same Amazon Virtual Private Cloud as the EFS file system, using Amazon Linux 2 and a security group that can connect to the file system. The EC2 instance is using the IAM role I just created.

The open-source efs-utils client is required to connect using IAM authentication, in-transit encryption, or both. Normally, on Amazon Linux 2, I would install efs-utils using yum, but the new version is still rolling out, so I am following the instructions to build the package from source in this repository. I’ll update this blog post when the updated package is available.

To mount the EFS file system, I use the mount command. To leverage in-transit encryption, I add the tls option. I am not using IAM authentication here, so the permissions I specified for the “*” principal in my file system policy apply to this connection.

$ sudo mkdir /mnt/shared
$ sudo mount -t efs -o tls fs-d1188b58 /mnt/shared

My file system policy disables root access by default, so I can’t create a new file as root.

$ sudo touch /mnt/shared/newfile
touch: cannot touch ‘/mnt/shared/newfile’: Permission denied

I now use IAM authentication by adding the iam option to the mount command (tls is required for IAM authentication to work).

$ sudo mount -t efs -o iam,tls fs-d1188b58 /mnt/shared

When I use this mount option, the IAM role from my EC2 instance profile is used to connect, along with the permissions attached to that role, including root access:

$ sudo touch /mnt/shared/newfile
$ ls -la /mnt/shared/newfile
-rw-r--r-- 1 root root 0 Jan  8 09:52 /mnt/shared/newfile

Here I used the IAM role to have root access. Other common use cases are to enforce in-transit encryption (using the aws:SecureTransport condition key) or create different roles for clients needing write or read-only access.
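For example, a statement along these lines in the file system policy denies any access over an unencrypted connection (a sketch based on the aws:SecureTransport condition key; adjust the resource ARN to your file system):

{
    "Effect": "Deny",
    "Principal": {
        "AWS": "*"
    },
    "Action": "*",
    "Resource": "arn:aws:elasticfilesystem:us-east-2:123412341234:file-system/fs-d1188b58",
    "Condition": {
        "Bool": {
            "aws:SecureTransport": "false"
        }
    }
}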

EFS IAM permission checks are logged by AWS CloudTrail to audit client access to your file system. For example, when a client mounts a file system, a NewClientConnection event is shown in my CloudTrail console.

Using EFS Access Points
EFS access points allow you to easily manage application access to NFS environments, specifying a POSIX user and group to use when accessing the file system, and restricting access to a directory within a file system.

Use cases that can benefit from EFS access points include:

  • Container-based environments, where developers build and deploy their own containers (you can also see this blog post for using EFS for container storage).
  • Data science applications that require read-only access to production data.
  • Sharing a specific directory in your file system with other AWS accounts.

In the EFS console, I create two access points for my file system, each using a different POSIX user and group:

  • /data – where I am sharing some data that must be read and updated by multiple clients.
  • /config – where I share some configuration files that must not be updated by clients using the /data access point.

I used file permissions 755 for both access points. That means that I am giving read and execute access to everyone, and write access only to the owner of the directory. These permissions are applied when the directory is created; after that, permissions within the directory are under the full control of the clients using it.
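Access points can also be created from the command line. For example, the /data access point above could be created with something like this (a sketch; I am using UID and GID 1001, matching the file listings shown below):

$ aws efs create-access-point \
    --file-system-id fs-d1188b58 \
    --posix-user Uid=1001,Gid=1001 \
    --root-directory 'Path=/data,CreationInfo={OwnerUid=1001,OwnerGid=1001,Permissions=755}'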

I mount the /data access point by adding the accesspoint option to the mount command:

$ sudo mount -t efs -o tls,accesspoint=fsap-0204ce67a2208742e fs-d1188b58 /mnt/shared

This time I can create a file, because I am not acting as root; instead, the user and group IDs of the access point are used automatically:

$ sudo touch /mnt/shared/datafile
$ ls -la /mnt/shared/datafile
-rw-r--r-- 1 1001 1001 0 Jan  8 09:58 /mnt/shared/datafile

I mount the file system again, without specifying an access point. I see that datafile was created in the /data directory, as expected considering the access point configuration. When using the access point, I was unable to access any files that were in the root or other directories of my EFS file system.

$ sudo mount -t efs -o tls fs-d1188b58 /mnt/shared/
$ ls -la /mnt/shared/data/datafile 
-rw-r--r-- 1 1001 1001 0 Jan  8 09:58 /mnt/shared/data/datafile

To use IAM authentication with access points, I add the iam option:

$ sudo mount -t efs -o iam,tls,accesspoint=fsap-0204ce67a2208742e fs-d1188b58 /mnt/shared

I can restrict an IAM role to use only a specific access point by adding a Condition on the AccessPointArn to the policy:

"Condition": {
    "StringEquals": {
        "elasticfilesystem:AccessPointArn" : "arn:aws:elasticfilesystem:us-east-2:123412341234:access-point/fsap-0204ce67a2208742e"
    }
}

Using IAM authentication and EFS access points together simplifies securely sharing data for container-based architectures and multi-tenant applications, because it ensures that every application automatically gets the right operating system user and group assigned to it, optionally limiting access to a specific directory, enforcing in-transit encryption, or giving read-only access to the file system.

Available Now
IAM authorization for NFS clients and EFS access points are available in all regions where EFS is offered, as described in the AWS Region Table. There is no additional cost for using them. You can learn more about using EFS with IAM and access points in the documentation.

It’s now easier to create scalable architectures sharing data and configurations. Let me know what you are going to use these new features for!

Danilo

Urgent & Important – Rotate Your Amazon RDS, Aurora, and DocumentDB Certificates

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/urgent-important-rotate-your-amazon-rds-aurora-and-documentdb-certificates/

You may have already received an email or seen a console notification, but I don’t want you to be taken by surprise!

Rotate Now
If you are using Amazon Aurora, Amazon Relational Database Service (RDS), or Amazon DocumentDB and are taking advantage of SSL/TLS certificate validation when you connect to your database instances, you need to download & install a fresh certificate, rotate the certificate authority (CA) for the instances, and then reboot the instances.

If you are not using SSL/TLS connections or certificate validation, you do not need to make any updates, but I recommend that you do so in order to be ready in case you decide to use SSL/TLS connections in the future. In this case, you can use a new CLI option that rotates and stages the new certificates but avoids a restart.

The new certificate (CA-2019) is available as part of a certificate bundle that also includes the old certificate (CA-2015) so that you can make a smooth transition without getting into a chicken and egg situation.

What’s Happening?
The SSL/TLS certificates for RDS, Aurora, and DocumentDB expire and are replaced every five years as part of our standard maintenance and security discipline. Here are some important dates to know:

September 19, 2019 – The CA-2019 certificates were made available.

January 14, 2020 – Instances created on or after this date will have the new (CA-2019) certificates. You can temporarily revert to the old certificates if necessary.

February 5 to March 5, 2020 – RDS will stage (install but not activate) new certificates on existing instances. Restarting the instance will activate the certificate.

March 5, 2020 – The CA-2015 certificates will expire. Applications that use certificate validation but have not been updated will lose connectivity.

How to Rotate
Earlier this month I created an Amazon RDS for MySQL database instance and set it aside in preparation for this blog post. As you can see from the screen shot above, the RDS console lets me know that I need to perform a Certificate update.

I visit Using SSL/TLS to Encrypt a Connection to a DB Instance and download a new certificate. If my database client knows how to handle certificate chains, I can download the root certificate and use it for all regions. If not, I download a certificate that is specific to the region where my database instance resides. I decide to download a bundle that contains the old and new root certificates:
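On a Linux client, for example, the combined bundle can be downloaded with a single command (the URL below is the location documented for the combined bundle at the time of writing; check the documentation for the current link for your region):

$ wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem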

Next, I update my client applications to use the new certificates. This process is specific to each app and each database client library, so I don’t have any details to share.

Once the client application has been updated, I change the certificate authority (CA) to rds-ca-2019. I can Modify the instance in the console, and select the new CA:

I can also do this via the CLI:

$ aws rds modify-db-instance --db-instance-identifier database-1 \
  --ca-certificate-identifier rds-ca-2019

The change will take effect during the next maintenance window. I can also apply it immediately:

$ aws rds modify-db-instance --db-instance-identifier database-1 \
  --ca-certificate-identifier rds-ca-2019 --apply-immediately

After my instance has been rebooted (either immediately or during the maintenance window), I test my application to ensure that it continues to work as expected.

If I am not using SSL and want to avoid a restart, I use --no-certificate-rotation-restart:

$ aws rds modify-db-instance --db-instance-identifier database-1 \
  --ca-certificate-identifier rds-ca-2019 --no-certificate-rotation-restart

The database engine will pick up the new certificate during the next planned or unplanned restart.

I can also use the RDS ModifyDBInstance API function or a CloudFormation template to change the certificate authority.
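For an instance that is managed by CloudFormation, the change amounts to setting the CACertificateIdentifier property on the AWS::RDS::DBInstance resource and updating the stack. Here is an illustrative sketch (the other instance settings are hypothetical; the last property is the one that matters here):

{
    "Parameters": {
        "DBPassword": {
            "Type": "String",
            "NoEcho": "true"
        }
    },
    "Resources": {
        "Database1": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                "DBInstanceIdentifier": "database-1",
                "DBInstanceClass": "db.t3.micro",
                "Engine": "mysql",
                "AllocatedStorage": "20",
                "MasterUsername": "admin",
                "MasterUserPassword": { "Ref": "DBPassword" },
                "CACertificateIdentifier": "rds-ca-2019"
            }
        }
    }
}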

Once again, all of this must be completed by March 5, 2020 or your applications may be unable to connect to your database instance using SSL or TLS.

Things to Know
Here are a couple of important things to know:

Amazon Aurora Serverless – AWS Certificate Manager (ACM) is used to manage certificate rotations for this database engine, and no action is necessary.

Regions – Rotation is needed for database instances in all commercial AWS regions except Asia Pacific (Hong Kong), Middle East (Bahrain), and China (Ningxia).

Cluster Scaling – If you add more nodes to an existing cluster, the new nodes will receive the CA-2019 certificate if one or more of the existing nodes already have it. Otherwise, the CA-2015 certificate will be used.

Learning More
Here are some links to additional information:

Jeff;

 

Amazon at CES 2020 – Connectivity & Mobility

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-at-ces-2020-connectivity-mobility/

The Consumer Electronics Show (CES) starts tomorrow. Attendees will have the opportunity to learn about the latest and greatest developments in many areas including 5G, IoT, Advertising, Automotive, Blockchain, Health & Wellness, Home & Family, Immersive Entertainment, Product Design & Manufacturing, Robotics & Machine Intelligence, and Sports.

Amazon at CES
If you will be traveling to Las Vegas to attend CES, I would like to invite you to visit the Amazon Automotive exhibit in the Las Vegas Convention Center. Come to booth 5616 to learn about our work to help auto manufacturers and developers create the next generation of software-defined vehicles:

As you might know, this industry is working to reinvent itself, with manufacturers expanding from designing & building vehicles to a broader vision that encompasses multiple forms of mobility.

At the booth, you will find multiple demos that are designed to show you what is possible when you mash up vehicles, connectivity, software, apps, sensors, and machine learning in new ways.

Cadillac Customer Journey – This is an interactive, immersive demo of a data-driven shopping experience to engage customers at every touchpoint. Powered by ZeroLight and running on AWS, the demo uses 3D imagery that is generated in real time on GPU-equipped EC2 instances.

Future Mobility – This demo uses the Alexa Auto SDK and several AWS Machine Learning services to create an interactive in-vehicle assistant. It stores driver profiles in the cloud, and uses Amazon Rekognition to load the proper profile for the driver. Machine learning is used to detect repeated behaviors, such as finding the nearest coffee shop each morning.

Rivian Alexa – This full-vehicle demo showcases the deep Alexa Auto SDK integration that Rivian is using to control core vehicle functions on their upcoming R1T Electric Truck.

Smart Home / Garage – This demo ensemble showcases several of the Alexa home-to-car and car-to-home integrations, and features multiple Amazon & Alexa offerings including Amazon Pay, Fire TV, and Ring.

Karma Automotive / Blackberry QNX – Built on AWS IoT and machine learning inference models developed using Amazon SageMaker, this demo includes two use cases. The first one shows how data from Karma‘s fleet of electric vehicles is used to predict the battery state of health. The second one shows how cloud-trained models run at the edge (in the vehicle) to detect gestures that control vehicle functions.

Accenture Personalized Connected Vehicle Adventure – This demo shows how identity and personalization can be used to create unique transportation experiences. The journeys are customized using learned preferences and contextual data gathered in real time, powered by Amazon Personalize.

Accenture Data Monetization – This demo tackles data monetization while preserving customer privacy. Built around a data management reference architecture that uses Amazon QLDB and AWS Data Exchange, the demo enables consent and value exchange, with a focus on insights, predictions, and recommendations.

Denso Connected Vehicle Reference System – CVRS is an intelligent, end-to-end mobility service built on the AWS Connected Vehicle Solution. It uses a layered architecture that combines edge and cloud components, to allow mobility service providers to build innovative products without starting from scratch.

WeRide – This company runs a fleet of autonomous vehicles in China. The ML training to support the autonomy runs on AWS, as does the overall fleet management system. The demo shows how the AWS cloud supports their connected & autonomous fleet.

Dell EMC / National Instruments – This jointly developed demo focuses on the Hardware-in-Loop phase of autonomous vehicle development, where actual vehicle hardware running in real-world conditions is used.

Unity – This demo showcases a Software-in-Loop autonomous vehicle simulation built with Unity. An accurate, photorealistic representation of Berlin, Germany is used, with the ability to dynamically vary parameters such as time, weather, and scenery. Using the Unity Simulation framework and AWS, 100 permutations of each scene are generated and used as training data in parallel.

Get in Touch
If you are interested in learning more about any of these demos or if you are ready to build a connected or autonomous vehicle solution of your own, please feel free to contact us.

Jeff;

Celebrating AWS Community Leaders at re:Invent 2019

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/celebrating-aws-community-leaders-at-reinvent-2019/

Even though cloud computing is a global phenomenon, location still matters when it comes to community. For example, customers regularly tell me that they are impressed by the scale, enthusiasm, and geographic reach of the AWS User Group Community. We are deeply appreciative of the work that our user group and community leaders do.

Each year, leaders of local communities travel to re:Invent in order to attend a series of events designed to meet their unique needs. They attend an orientation session, learn about We Power Tech (“Building a future of tech that is diverse, inclusive and accessible”), watch the keynotes, and participate in training sessions as part of a half-day AWS Community Leader workshop. After re:Invent wraps up, they return to their homes and use their new knowledge and skills to do an even better job of creating and sharing technical content and of nurturing their communities.

Community Leadership Grants
In order to make it possible for more community leaders to attend and benefit from re:Invent, we launched a grant program in 2018. The grants covered registration, housing, and flights and were awarded to technologists from emerging markets and underrepresented communities.

Several of the recipients went on to become AWS Heroes, and we decided to expand the program for 2019. We chose 17 recipients from 14 countries across 5 continents, with an eye toward recognizing those who are working to build inclusive AWS communities. Additionally, We Power Tech launched a separate Grant Program with Project Alloy to help support underrepresented technologists in the first five years of their careers to attend re:Invent by covering conference registration, hotel, and airfare. In total, there were 102 grantees from 16 countries.

The following attendees received Community Leadership Grants and were able to attend re:Invent:

Ahmed Samir – Riyadh, KSA (LinkedIn, Twitter) – Ahmed is a co-organizer of the AWS Riyadh User Group. He is well known for his social media accounts in which he translates all AWS announcements to Arabic.

Veronique Robitaille – Valencia, Spain (LinkedIn, Twitter) – Veronique is an SA-certified cloud consultant in Valencia, Spain. She is the co-organizer of the AWS User Group in Valencia, and also translates AWS content into Spanish.

Dzenana Dzevlan – Mostar, Bosnia (LinkedIn) – Dzenana is an electrical engineering masters student at the University of Sarajevo, and a co-organizer of the AWS User Group in Bosnia-Herzegovina.

Magdalena Zawada – Warsaw, Poland (LinkedIn) – Magdalena is a cloud consultant and co-organizer of the AWS User Group Poland.

Hiromi Ito – Osaka, Japan (Twitter) – Hiromi runs IT communities for women in Japan and elsewhere in Asia, and also contributes to JAWS-UG in Kansai. She is the founder of the Asian Woman’s Association Meetup in Singapore.

Lena Taupier – Columbus, Ohio, USA (LinkedIn) – Lena co-organizes the Columbus AWS Meetup, was on the organizing team for the 2018 and 2019 Midwest / Chicago AWS Community Days, and delivered a lightning talk on “Building Diverse User Communities” at re:Invent.

Victor Perez – Panama City, Panama (LinkedIn) – Victor founded the AWS Panama User Group after deciding that he wanted to make AWS Cloud the new normal for the country. He also created the AWS User Group Caracas.

Hiro Nishimura – New York, USA (LinkedIn, Twitter) – Hiro is an educator at heart. She founded AWS Newbies to teach beginners about AWS, and worked with LinkedIn to create video courses to introduce cloud computing to non-engineers.

Sridevi Murugayen – Chennai, India (LinkedIn) – Sridevi is a core leader of AWS Community Day Chennai. She managed a diversity session at the Community Day, and is a regular presenter and participant in the AWS Chennai User Group.

Sukanya Mandal – Mumbai, India (LinkedIn) – Sukanya leads the PyData community in Mumbai, and also contributes to the AWS User Group there. She talked about “ML for IoT at the Edge” at the AWS Developer Lounge in the re:Invent 2019 Expo Hall.

Seohyun Yoon – Seoul, Korea (LinkedIn) – Seohyun is a founding member of the student division of the AWS Korea Usergroup (AUSG), one of the youngest active AWS advocates in Korea, and served as a judge for the re:Invent 2019 Non-Profit Hackathon for Good. Check out her hands-on AWS lab guides!

Farah Clara Shinta Rachmady – Jakarta, Indonesia (LinkedIn, Twitter) – Farah nurtures AWS Indonesia and other technical communities in Indonesia, and also organizes large-scale events & community days.

Sandy Rodríguez – Mexico City, Mexico (LinkedIn) – Sandy co-organized the AWS Mexico City User Group and focuses on making events great for attendees. She delivered a 20-minute session in the AWS Village Theater at re:Invent 2019. Her work is critical to the growth of the AWS community in Mexico.

Vanessa Alves dos Santos – São Paulo, Brazil (LinkedIn) – Vanessa is a powerful AWS advocate within her community. She helped to plan AWS Community Days Brazil and the AWS User Group in São Paulo.

The following attendees were chosen for grants, but were not able to attend due to issues with travel visas:

Ayeni Oluwakemi – Lagos, Nigeria (LinkedIn, Twitter) – Ayeni is the founder of the AWS User Group in Lagos, Nigeria. She is the organizer of AWSome Day in Nigeria, and writes for the Cloud Guru Blog.

Ewere Diagboya – Lagos, Nigeria (LinkedIn, Twitter) – Ewere is one of our most active advocates in Nigeria. He is very active in the DevOps and cloud computing community as an educator, and also organizes the DevOps Nigeria Meetup.

Minh Ha – Hanoi, Vietnam – Minh grows the AWS User Group Vietnam by organizing in-person meetups and online events. She co-organized AWS Community Day 2018, runs hackathons, and co-organized SheCodes Vietnam.

Jeff;

 

Alejandra’s Top 5 Favorite re:Invent🎉 Launches of 2019

Post Syndicated from Alejandra Quetzalli original https://aws.amazon.com/blogs/aws/alejandras-top-5-favorite-reinvent%F0%9F%8E%89-launches-of-2019/


While re:Invent 2019 may feel well over, I’m still feeling elated and curious about several of the launches that were announced that week. Is it just me, or did some of the new feature announcements seem to bring us closer to the sci-fi worlds of the future we envisioned as kids (AWS Wavelength, anyone? And don’t get me started on Amazon Braket)?

The future might very well be here. Can you handle it?

If you can, then I’m pumped to tell you why the following 5 launches of re:Invent 2019 got me the most excited.

[CAVEAT: Out of consideration for your sanity, dear reader, we try to keep these posts to a maximum word length. After all, I wouldn’t want you to fall asleep at your keyboard during work hours! Sadly, this also means I limited myself to only sharing a set number of the cool, new launches that happened. If you’re curious to read about ALL OF THEM, you can find them here: 2019 re:Invent Announcement Summary Page.]

 

1. Amazon Braket: explore Quantum Computing

Backstory of why I picked this one…

First of all, let’s address the 🐘elephant in the room🐘 and admit that 99.9% of us don’t really know what Quantum Computing is. But we want to! Because it sounds so cool and futuristic. So let’s give it a shot…

According to The Internet, a quantum computer is any computational device that uses the quantum mechanical phenomena of superposition and entanglement to perform data operations. The basic principle of quantum computation is that quantum properties can be used to represent data and perform operations on it. Also, fun fact… in a “normal” computer —like your laptop— that data…that information… is stored in something called bits. But in a quantum computer, it is stored as qubits (quantum bits).

Quantum Computing is still in its infancy. Are you wondering where it will go?

What got launched?

Amazon Braket is a new service that makes it easy for scientists, researchers, and developers to build, test, and run quantum computing algorithms.

Sounds cool, but what does that actually mean?

The way it works is that Amazon Braket provides a development environment that enables you to design your own quantum algorithms from scratch or choose from a set of pre-built algorithms. Once you’ve picked your algorithm of choice, Amazon Braket provides you with a simulation service that helps you troubleshoot and verify your implementation. Once you’re ready, you can also choose to run your algorithm on a real quantum computer from one of our quantum hardware providers (such as D-Wave, IonQ, and Rigetti).

So what are you waiting for? Go explore the future of quantum computing with Amazon Braket!

👉🏽Don’t forget to check out the docs: aws.amazon.com/braket
⚠Sign up to get notified when it’s released.

 

2. AWS Wavelength: ultra-low latency apps for 5G

Backstory of why I picked this one…

When I was a kid in the 80s, we were still in the early days of the first wireless technology.

1G.

It had a lot of similarities to an old AM/FM radio. And just like with radio stations, cell phone calls ended up receiving interference from other callers ALL THE TIME. Sometimes, the calls became staticky if you were too far away from cell phone towers.

But it’s no longer the 80s, my dear readers. It’s 2019 and we’re all the way up to 5G now.

[note: When talking about 1, 2, 3, 4, or 5G, the G stands for generation.]

What got launched?

AWS Wavelength combines the high bandwidth and single-digit millisecond latency of 5G networks with AWS compute and storage services to enable developers to build new kinds of apps.

Phew, that was quite the brain🧠dump🗑, wasn’t it?

Sounds cool, but what does that actually mean?

Every generation of wireless technology has been defined by the speed of data transmission. So just how fast are we hoping 5G will be? Well, to give you a baseline…our fastest current 4G mobile networks offer about 45Mbps (megabits per second). But Qualcomm believes 5G could achieve browsing and download speeds about 10 to 20 times faster!

What makes this speed improvement possible is that 5G technology makes better use of the radio spectrum. It enables more devices to access the mobile internet at the same time. Thus, it’s much better at handling thousands of devices simultaneously, without the congestion that was experienced in previous wireless generations.

At this speed, access to low-latency services is really important. Why? A low-latency service is designed to process a high volume of data messages with minimal delay (latency). This is exactly what you want if your business requires near real-time access to rapidly changing data.

Enter AWS Wavelength.

AWS Wavelength brings AWS services to the edge of the 5G network. It allows you to build the next generation of ultra-low latency apps using familiar AWS services, APIs, and tools. To deploy your app to 5G, simply extend your Amazon Virtual Private Cloud (VPC) to a Wavelength Zone and then create AWS resources like Amazon Elastic Compute Cloud (EC2) instances and Amazon Elastic Block Store (EBS) volumes.

The other neat news is that AWS Wavelength will launch in partnership with Verizon starting in 2020, and we are also working with other carriers like Vodafone, SK Telecom, and KDDI to expand Wavelength Zones to more locations by the end of 2020.

👉🏽Don’t forget to check out the docs: aws.amazon.com/wavelength
⚠Sign up to get notified when it’s released.

 

3. AWS DeepComposer: learn Machine Learning with a piano keyboard!

Backstory of why I picked this one…

I do not have a Machine Learning (ML) background. At all.

But I do have a piano and musical background. 🎹🎶 I learnt how to play the piano at 4, and I first got into composing when I was about 12 years old. Not having a super fancy piano instructor at the time, I remember wondering how an average person could learn how to compose, regardless of their musical background.

What got launched?

AWS DeepComposer is a machine learning-enabled keyboard for developers that also uses AI (Artificial Intelligence) to create original songs and melodies.

Sounds cool, but what does that actually mean?

AWS DeepComposer includes tutorials, sample code, and training data that can be used to get started building generative models, all without having to write a single line of code! This is great, because it helps encourage people new to ML to still give it a whirl.

Now the other neat thing about AWS DeepComposer is that it opens the door for you to learn about Generative AI — one of the biggest advancements in AI technology. You’ll learn about Generative Adversarial Networks (GANs), a Generative AI technique that pits two different neural networks against each other to produce new and original digital works based on sample inputs. With AWS DeepComposer, you are training and optimizing GAN models to create original music. 🎶

Is that awesome, or what?

👉🏽Don’t forget to check out the docs: aws.amazon.com/deepcomposer
⚠Sign up to get notified when it’s released.

 

4. Amplify: now it’s ready for iOS and Android devs too!

Backstory of why I picked this one…

I used to be a CSS developer. Joining the Back-End world was an accident for me, since I first assumed I’d always be a Front-End developer.

Amplify makes it easy for developers to build and deploy Full-Stack apps that leverage the cloud. It’s a service that really helps bridge the gap between Front and Back-End development. Seeing Amplify now offer SDKs and libraries for iOS and Android devs sounds even more inclusive and exciting!

What got launched?

The Amplify Framework (an open source project for building cloud-enabled mobile and web apps) is ready for iOS and Android developers! There are now — in preview — Amplify iOS and Amplify Android libraries for building scalable and secure cloud-powered serverless apps.

Sounds cool, but what does that actually mean?

Developers can now add capabilities of Analytics, AI/ML, API (GraphQL and REST), DataStore, and Storage to their mobile apps with these new iOS and Android Amplify libraries.

This release also included support for the Predictions category in Amplify iOS, which allows developers to easily add and configure AI/ML use cases with very few lines of code (and no machine learning experience required!). Developers can then tackle use cases such as text translation, speech-to-text generation, image recognition, text-to-speech, and extracting insights from text. You can even hook it up to services such as Amazon Rekognition, Amazon Translate, Amazon Polly, Amazon Transcribe, Amazon Comprehend, and Amazon Textract.

👉🏽Don’t forget to check out the docs…
📳Android: aws-amplify.github.io/docs/android/start
📱iOS: aws-amplify.github.io/docs/ios/start

 

5. EC2 Image Builder

Backstory of why I picked this one…

In my 1st year at AWS as a Developer Advocate, I got really into robotics and IoT. I’m not giving that up anytime soon, but for 2020, I’m also excited to serve more customers that are new to core AWS services. You know, things like storage, compute, containers, databases, etc.

Thus, it came as no surprise to me when this new launch caught my eye… 👀

What got launched?

EC2 Image Builder is a service that makes it easier and faster to build and maintain secure images. It greatly simplifies the creation, patching, testing, distribution, and sharing of Linux or Windows Server images.

Sounds cool, but what does that actually mean?

In the past, creating custom images felt way too complex and time-consuming. Most dev teams had to manually update VMs or build automation scripts to maintain these images.

Can you imagine?

Today, EC2 Image Builder simplifies this process by allowing you to create custom OS images via an AWS GUI environment. You can also use it to build an automated pipeline that customizes, tests, and distributes your images, in addition to keeping them secure and up-to-date. Sounds like a win-win to me. 🏆

👉🏽Don’t forget to check out the docs: aws.amazon.com/image-builder

 

¡Gracias por tu tiempo!
~Alejandra 💁🏻‍♀️ & Canela 🐾

New – Amazon Comprehend Medical Adds Ontology Linking

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-amazon-comprehend-medical-adds-ontology-linking/

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights in unstructured text. It is very easy to use, with no machine learning experience required. You can customize Comprehend for your specific use case, for example creating custom document classifiers to organize your documents into your own categories, or custom entity types that analyze text for your specific terms. However, medical terminology can be very complex and specific to the healthcare domain.

For this reason, last year we introduced Amazon Comprehend Medical, a HIPAA-eligible natural language processing service that makes it easy to use machine learning to extract relevant medical information from unstructured text. Using Comprehend Medical, you can quickly and accurately gather information, such as medical condition, medication, dosage, strength, and frequency, from a variety of sources like doctors’ notes, clinical trial reports, and patient health records.

Today, we are adding the capability of linking the information extracted by Comprehend Medical to medical ontologies.

An ontology provides a declarative model of a domain that defines and represents the concepts existing in that domain, their attributes, and the relationships between them. It is typically represented as a knowledge base, and made available to applications that need to use or share knowledge. Within health informatics, an ontology is a formal description of a health-related domain.

The ontologies supported by Comprehend Medical are:

  • ICD-10-CM, to identify medical conditions as entities and link related information such as diagnosis, severity, and anatomical distinctions as attributes of that entity. This is a diagnosis code set that is very useful for population health analytics, and for getting payments from insurance companies based on medical services rendered.
  • RxNorm, to identify medications as entities and link attributes such as dose, frequency, strength, and route of administration to that entity. Healthcare providers use these concepts to enable use cases like medication reconciliation, which is the process of creating the most accurate list possible of all medications a patient is taking.

For each ontology, Comprehend Medical returns a ranked list of potential matches. You can use confidence scores to decide which matches make sense, or what might need further review. Let’s see how this works with an example.

Using Ontology Linking
In the Comprehend Medical console, I start by providing some unstructured doctor’s notes as input:

First, I use functionality that was already available in Comprehend Medical to detect medical and protected health information (PHI) entities.

Among the recognized entities (see this post for more info) there are some symptoms and medications. Medications are recognized as generics or brands. Let’s see how we can connect some of these entities to more specific concepts.

I use the new features to link those entities to RxNorm concepts for medications.

In the text, only the parts mentioning medications are detected. In the details of the answer, I see more information. For example, let’s look at one of the detected medications:

  • The first occurrence of the term “Clonidine” (in the second line of the input text above) is linked to the generic concept (on the left in the image below) in the RxNorm ontology.
  • The second occurrence of the term “Clonidine” (in the fourth line in the input text above) is followed by an explicit dosage, and is linked to a more prescriptive format that includes dosage (on the right in the image below) in the RxNorm ontology.

To look for medical conditions using ICD-10-CM concepts, I use a different input:

The idea again is to link the detected entities, like symptoms and diagnoses, to specific concepts.

As expected, diagnoses and symptoms are recognized as entities. In the detailed results those entities are linked to the medical conditions in the ICD-10-CM ontology. For example, the two main diagnoses described in the input text are the top results, and specific concepts in the ontology are inferred by Comprehend Medical, each with its own score.
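Outside of the console, the same ontology linking is exposed through the InferRxNorm and InferICD10CM API operations. For example, from the AWS CLI (the input text here is just an illustration):

$ aws comprehendmedical infer-rx-norm \
    --text "Patient is taking Clonidine 0.2 mgs 1 and 1/2 tabs po qhs."
$ aws comprehendmedical infer-icd10-cm \
    --text "History of coronary artery disease, atrial fibrillation, hypertension, and hyperlipidemia."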

In production, you can use the Comprehend Medical API to integrate these functionalities into your processing workflow. All the screenshots above are visual renderings of the structured information returned by the API in JSON format. For example, this is the result of detecting medications (RxNorm concepts):

{
    "Entities": [
        {
            "Id": 0,
            "Text": "Clonidine",
            "Category": "MEDICATION",
            "Type": "GENERIC_NAME",
            "Score": 0.9933062195777893,
            "BeginOffset": 83,
            "EndOffset": 92,
            "Attributes": [],
            "Traits": [],
            "RxNormConcepts": [
                {
                    "Description": "Clonidine",
                    "Code": "2599",
                    "Score": 0.9148101806640625
                },
                {
                    "Description": "168 HR Clonidine 0.00417 MG/HR Transdermal System",
                    "Code": "998671",
                    "Score": 0.8215734958648682
                },
                {
                    "Description": "Clonidine Hydrochloride 0.025 MG Oral Tablet",
                    "Code": "892791",
                    "Score": 0.7519310116767883
                },
                {
                    "Description": "10 ML Clonidine Hydrochloride 0.5 MG/ML Injection",
                    "Code": "884225",
                    "Score": 0.7171697020530701
                },
                {
                    "Description": "Clonidine Hydrochloride 0.2 MG Oral Tablet",
                    "Code": "884185",
                    "Score": 0.6776907444000244
                }
            ]
        },
        {
            "Id": 1,
            "Text": "Vyvanse",
            "Category": "MEDICATION",
            "Type": "BRAND_NAME",
            "Score": 0.9995427131652832,
            "BeginOffset": 148,
            "EndOffset": 155,
            "Attributes": [
                {
                    "Type": "DOSAGE",
                    "Score": 0.9910679459571838,
                    "RelationshipScore": 0.9999822378158569,
                    "Id": 2,
                    "BeginOffset": 156,
                    "EndOffset": 162,
                    "Text": "50 mgs",
                    "Traits": []
                },
                {
                    "Type": "ROUTE_OR_MODE",
                    "Score": 0.9997182488441467,
                    "RelationshipScore": 0.9993833303451538,
                    "Id": 3,
                    "BeginOffset": 163,
                    "EndOffset": 165,
                    "Text": "po",
                    "Traits": []
                },
                {
                    "Type": "FREQUENCY",
                    "Score": 0.983681321144104,
                    "RelationshipScore": 0.9999642372131348,
                    "Id": 4,
                    "BeginOffset": 166,
                    "EndOffset": 184,
                    "Text": "at breakfast daily",
                    "Traits": []
                }
            ],
            "Traits": [],
            "RxNormConcepts": [
                {
                    "Description": "lisdexamfetamine dimesylate 50 MG Oral Capsule [Vyvanse]",
                    "Code": "854852",
                    "Score": 0.8883932828903198
                },
                {
                    "Description": "lisdexamfetamine dimesylate 50 MG Chewable Tablet [Vyvanse]",
                    "Code": "1871469",
                    "Score": 0.7482635378837585
                },
                {
                    "Description": "Vyvanse",
                    "Code": "711043",
                    "Score": 0.7041242122650146
                },
                {
                    "Description": "lisdexamfetamine dimesylate 70 MG Oral Capsule [Vyvanse]",
                    "Code": "854844",
                    "Score": 0.23675969243049622
                },
                {
                    "Description": "lisdexamfetamine dimesylate 60 MG Oral Capsule [Vyvanse]",
                    "Code": "854848",
                    "Score": 0.14077001810073853
                }
            ]
        },
        {
            "Id": 5,
            "Text": "Clonidine",
            "Category": "MEDICATION",
            "Type": "GENERIC_NAME",
            "Score": 0.9982216954231262,
            "BeginOffset": 199,
            "EndOffset": 208,
            "Attributes": [
                {
                    "Type": "STRENGTH",
                    "Score": 0.7696017026901245,
                    "RelationshipScore": 0.9999960660934448,
                    "Id": 6,
                    "BeginOffset": 209,
                    "EndOffset": 216,
                    "Text": "0.2 mgs",
                    "Traits": []
                },
                {
                    "Type": "DOSAGE",
                    "Score": 0.777644693851471,
                    "RelationshipScore": 0.9999927282333374,
                    "Id": 7,
                    "BeginOffset": 220,
                    "EndOffset": 236,
                    "Text": "1 and 1 / 2 tabs",
                    "Traits": []
                },
                {
                    "Type": "ROUTE_OR_MODE",
                    "Score": 0.9981689453125,
                    "RelationshipScore": 0.999950647354126,
                    "Id": 8,
                    "BeginOffset": 237,
                    "EndOffset": 239,
                    "Text": "po",
                    "Traits": []
                },
                {
                    "Type": "FREQUENCY",
                    "Score": 0.99753737449646,
                    "RelationshipScore": 0.9999889135360718,
                    "Id": 9,
                    "BeginOffset": 240,
                    "EndOffset": 243,
                    "Text": "qhs",
                    "Traits": []
                }
            ],
            "Traits": [],
            "RxNormConcepts": [
                {
                    "Description": "Clonidine Hydrochloride 0.2 MG Oral Tablet",
                    "Code": "884185",
                    "Score": 0.9600071907043457
                },
                {
                    "Description": "Clonidine Hydrochloride 0.025 MG Oral Tablet",
                    "Code": "892791",
                    "Score": 0.8955953121185303
                },
                {
                    "Description": "24 HR Clonidine Hydrochloride 0.2 MG Extended Release Oral Tablet",
                    "Code": "885880",
                    "Score": 0.8706559538841248
                },
                {
                    "Description": "12 HR Clonidine Hydrochloride 0.2 MG Extended Release Oral Tablet",
                    "Code": "1013937",
                    "Score": 0.786146879196167
                },
                {
                    "Description": "Chlorthalidone 15 MG / Clonidine Hydrochloride 0.2 MG Oral Tablet",
                    "Code": "884198",
                    "Score": 0.601354718208313
                }
            ]
        }
    ],
    "ModelVersion": "0.0.0"
}
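Because each entity comes with a ranked list of concepts and confidence scores, keeping only the most likely match is a small post-processing step. For example, assuming the response above is saved as rxnorm.json, this jq one-liner extracts the highest-scoring RxNorm concept for each detected medication:

$ jq '.Entities[] | {Text, Type, BestMatch: (.RxNormConcepts | max_by(.Score))}' rxnorm.json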

Similarly, this is the output when detecting medical conditions (ICD-10-CM concepts):

{
    "Entities": [
        {
            "Id": 0,
            "Text": "coronary artery disease",
            "Category": "MEDICAL_CONDITION",
            "Type": "DX_NAME",
            "Score": 0.9933860898017883,
            "BeginOffset": 90,
            "EndOffset": 113,
            "Attributes": [],
            "Traits": [
                {
                    "Name": "DIAGNOSIS",
                    "Score": 0.9682672023773193
                }
            ],
            "ICD10CMConcepts": [
                {
                    "Description": "Atherosclerotic heart disease of native coronary artery without angina pectoris",
                    "Code": "I25.10",
                    "Score": 0.8199513554573059
                },
                {
                    "Description": "Atherosclerotic heart disease of native coronary artery",
                    "Code": "I25.1",
                    "Score": 0.4950370192527771
                },
                {
                    "Description": "Old myocardial infarction",
                    "Code": "I25.2",
                    "Score": 0.18753206729888916
                },
                {
                    "Description": "Atherosclerotic heart disease of native coronary artery with unstable angina pectoris",
                    "Code": "I25.110",
                    "Score": 0.16535982489585876
                },
                {
                    "Description": "Atherosclerotic heart disease of native coronary artery with unspecified angina pectoris",
                    "Code": "I25.119",
                    "Score": 0.15222692489624023
                }
            ]
        },
        {
            "Id": 2,
            "Text": "atrial fibrillation",
            "Category": "MEDICAL_CONDITION",
            "Type": "DX_NAME",
            "Score": 0.9923409223556519,
            "BeginOffset": 116,
            "EndOffset": 135,
            "Attributes": [],
            "Traits": [
                {
                    "Name": "DIAGNOSIS",
                    "Score": 0.9708861708641052
                }
            ],
            "ICD10CMConcepts": [
                {
                    "Description": "Unspecified atrial fibrillation",
                    "Code": "I48.91",
                    "Score": 0.7011875510215759
                },
                {
                    "Description": "Chronic atrial fibrillation",
                    "Code": "I48.2",
                    "Score": 0.28612759709358215
                },
                {
                    "Description": "Paroxysmal atrial fibrillation",
                    "Code": "I48.0",
                    "Score": 0.21157972514629364
                },
                {
                    "Description": "Persistent atrial fibrillation",
                    "Code": "I48.1",
                    "Score": 0.16996538639068604
                },
                {
                    "Description": "Atrial premature depolarization",
                    "Code": "I49.1",
                    "Score": 0.16715925931930542
                }
            ]
        },
        {
            "Id": 3,
            "Text": "hypertension",
            "Category": "MEDICAL_CONDITION",
            "Type": "DX_NAME",
            "Score": 0.9993137121200562,
            "BeginOffset": 138,
            "EndOffset": 150,
            "Attributes": [],
            "Traits": [
                {
                    "Name": "DIAGNOSIS",
                    "Score": 0.9734011888504028
                }
            ],
            "ICD10CMConcepts": [
                {
                    "Description": "Essential (primary) hypertension",
                    "Code": "I10",
                    "Score": 0.6827990412712097
                },
                {
                    "Description": "Hypertensive heart disease without heart failure",
                    "Code": "I11.9",
                    "Score": 0.09846580773591995
                },
                {
                    "Description": "Hypertensive heart disease with heart failure",
                    "Code": "I11.0",
                    "Score": 0.09182810038328171
                },
                {
                    "Description": "Pulmonary hypertension, unspecified",
                    "Code": "I27.20",
                    "Score": 0.0866364985704422
                },
                {
                    "Description": "Primary pulmonary hypertension",
                    "Code": "I27.0",
                    "Score": 0.07662317156791687
                }
            ]
        },
        {
            "Id": 4,
            "Text": "hyperlipidemia",
            "Category": "MEDICAL_CONDITION",
            "Type": "DX_NAME",
            "Score": 0.9998835325241089,
            "BeginOffset": 153,
            "EndOffset": 167,
            "Attributes": [],
            "Traits": [
                {
                    "Name": "DIAGNOSIS",
                    "Score": 0.9702492356300354
                }
            ],
            "ICD10CMConcepts": [
                {
                    "Description": "Hyperlipidemia, unspecified",
                    "Code": "E78.5",
                    "Score": 0.8378056883811951
                },
                {
                    "Description": "Disorders of lipoprotein metabolism and other lipidemias",
                    "Code": "E78",
                    "Score": 0.20186281204223633
                },
                {
                    "Description": "Lipid storage disorder, unspecified",
                    "Code": "E75.6",
                    "Score": 0.18514418601989746
                },
                {
                    "Description": "Pure hyperglyceridemia",
                    "Code": "E78.1",
                    "Score": 0.1438658982515335
                },
                {
                    "Description": "Other hyperlipidemia",
                    "Code": "E78.49",
                    "Score": 0.13983778655529022
                }
            ]
        },
        {
            "Id": 5,
            "Text": "chills",
            "Category": "MEDICAL_CONDITION",
            "Type": "DX_NAME",
            "Score": 0.9989762306213379,
            "BeginOffset": 211,
            "EndOffset": 217,
            "Attributes": [],
            "Traits": [
                {
                    "Name": "SYMPTOM",
                    "Score": 0.9510533213615417
                }
            ],
            "ICD10CMConcepts": [
                {
                    "Description": "Chills (without fever)",
                    "Code": "R68.83",
                    "Score": 0.7460958361625671
                },
                {
                    "Description": "Fever, unspecified",
                    "Code": "R50.9",
                    "Score": 0.11848161369562149
                },
                {
                    "Description": "Typhus fever, unspecified",
                    "Code": "A75.9",
                    "Score": 0.07497859001159668
                },
                {
                    "Description": "Neutropenia, unspecified",
                    "Code": "D70.9",
                    "Score": 0.07332006841897964
                },
                {
                    "Description": "Lassa fever",
                    "Code": "A96.2",
                    "Score": 0.0721040666103363
                }
            ]
        },
        {
            "Id": 6,
            "Text": "nausea",
            "Category": "MEDICAL_CONDITION",
            "Type": "DX_NAME",
            "Score": 0.9993392825126648,
            "BeginOffset": 220,
            "EndOffset": 226,
            "Attributes": [],
            "Traits": [
                {
                    "Name": "SYMPTOM",
                    "Score": 0.9175007939338684
                }
            ],
            "ICD10CMConcepts": [
                {
                    "Description": "Nausea",
                    "Code": "R11.0",
                    "Score": 0.7333012819290161
                },
                {
                    "Description": "Nausea with vomiting, unspecified",
                    "Code": "R11.2",
                    "Score": 0.20183530449867249
                },
                {
                    "Description": "Hematemesis",
                    "Code": "K92.0",
                    "Score": 0.1203150525689125
                },
                {
                    "Description": "Vomiting, unspecified",
                    "Code": "R11.10",
                    "Score": 0.11658868193626404
                },
                {
                    "Description": "Nausea and vomiting",
                    "Code": "R11",
                    "Score": 0.11535880714654922
                }
            ]
        },
        {
            "Id": 8,
            "Text": "flank pain",
            "Category": "MEDICAL_CONDITION",
            "Type": "DX_NAME",
            "Score": 0.9315784573554993,
            "BeginOffset": 235,
            "EndOffset": 245,
            "Attributes": [
                {
                    "Type": "ACUITY",
                    "Score": 0.9809532761573792,
                    "RelationshipScore": 0.9999837875366211,
                    "Id": 7,
                    "BeginOffset": 229,
                    "EndOffset": 234,
                    "Text": "acute",
                    "Traits": []
                }
            ],
            "Traits": [
                {
                    "Name": "SYMPTOM",
                    "Score": 0.8182812929153442
                }
            ],
            "ICD10CMConcepts": [
                {
                    "Description": "Unspecified abdominal pain",
                    "Code": "R10.9",
                    "Score": 0.4959934949874878
                },
                {
                    "Description": "Generalized abdominal pain",
                    "Code": "R10.84",
                    "Score": 0.12332479655742645
                },
                {
                    "Description": "Lower abdominal pain, unspecified",
                    "Code": "R10.30",
                    "Score": 0.08319114148616791
                },
                {
                    "Description": "Upper abdominal pain, unspecified",
                    "Code": "R10.10",
                    "Score": 0.08275411278009415
                },
                {
                    "Description": "Jaw pain",
                    "Code": "R68.84",
                    "Score": 0.07797083258628845
                }
            ]
        },
        {
            "Id": 10,
            "Text": "numbness",
            "Category": "MEDICAL_CONDITION",
            "Type": "DX_NAME",
            "Score": 0.9659366011619568,
            "BeginOffset": 255,
            "EndOffset": 263,
            "Attributes": [
                {
                    "Type": "SYSTEM_ORGAN_SITE",
                    "Score": 0.9976192116737366,
                    "RelationshipScore": 0.9999089241027832,
                    "Id": 11,
                    "BeginOffset": 271,
                    "EndOffset": 274,
                    "Text": "leg",
                    "Traits": []
                }
            ],
            "Traits": [
                {
                    "Name": "SYMPTOM",
                    "Score": 0.7310190796852112
                }
            ],
            "ICD10CMConcepts": [
                {
                    "Description": "Anesthesia of skin",
                    "Code": "R20.0",
                    "Score": 0.767346203327179
                },
                {
                    "Description": "Paresthesia of skin",
                    "Code": "R20.2",
                    "Score": 0.13602739572525024
                },
                {
                    "Description": "Other complications of anesthesia",
                    "Code": "T88.59",
                    "Score": 0.09990577399730682
                },
                {
                    "Description": "Hypothermia following anesthesia",
                    "Code": "T88.51",
                    "Score": 0.09953102469444275
                },
                {
                    "Description": "Disorder of the skin and subcutaneous tissue, unspecified",
                    "Code": "L98.9",
                    "Score": 0.08736388385295868
                }
            ]
        }
    ],
    "ModelVersion": "0.0.0"
}

Available Now
You can use Amazon Comprehend Medical via the console, AWS Command Line Interface (CLI), or AWS SDKs. With Comprehend Medical, you pay only for what you use. You are charged based on the amount of text processed on a monthly basis, depending on the features you use. For more information, please see the Comprehend Medical section in the Comprehend Pricing page. Ontology Linking is available in all regions where Amazon Comprehend Medical is offered, as described in the AWS Regions Table.
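
For example, here is a minimal sketch of calling the new InferICD10CM API with boto3, the AWS SDK for Python; the clinical text below is just a placeholder and error handling is omitted:

import boto3

# Minimal sketch: link medical conditions in free text to ICD-10-CM codes
client = boto3.client("comprehendmedical")

clinical_note = (
    "Patient with a history of coronary artery disease, atrial fibrillation, "
    "hypertension, and hyperlipidemia presents with chills, nausea, and acute flank pain."
)  # placeholder text

response = client.infer_icd10_cm(Text=clinical_note)

# Print each detected condition with its top-ranked ICD-10-CM concept
for entity in response["Entities"]:
    if entity["ICD10CMConcepts"]:
        top = entity["ICD10CMConcepts"][0]
        print(f'{entity["Text"]}: {top["Code"]} ({top["Description"]}), score {top["Score"]:.2f}')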

The new ontology linking APIs make it easy to detect medications and medical conditions in unstructured clinical text and link them to RxNorm and ICD-10-CM codes respectively. This new feature can help you reduce the cost, time and effort of processing large amounts of unstructured medical text with high accuracy.

Danilo

AWS Links & Updates – Monday, December 9, 2019

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-links-updates-monday-december-9-2019/

With re:Invent 2019 behind me, I have a fairly light blogging load for the rest of the month. I do, however, have a collection of late-breaking news and links that I want to share while they are still hot out of the oven!

AWS Online Tech Talks for December – We have 18 tech talks scheduled for the remainder of the month. You can learn about Running Kubernetes on AWS Fargate, What’s New with AWS IoT, Transforming Healthcare with AI, and much more!

AWS Outposts: Ordering and Installation Overview – This video walks you through the process of ordering and installing an Outposts rack. You will learn about the physical, electrical, and network requirements, and you will get to see an actual install first-hand.

NFL Digital Athlete – We have partnered with the NFL to use data and analytics to co-develop the Digital Athlete, a platform that aims to improve player safety & treatment, and to predict & prevent injury. Watch the video in this tweet to learn more:

AWS JPL Open Source Rover Challenge – Build and train a reinforcement learning (RL) model on AWS to autonomously drive JPL’s Open-Source Rover between given locations in a simulated Mars environment with the least amount of energy consumption and risk of damage. To learn more, visit the web site or watch the Launchpad Video.

Map for Machine Learning on AWS – My colleague Julien Simon created an awesome map that categorizes all of the ML and AI services. The map covers applied ML, SageMaker’s built-in environments, ML for text, ML for any data, ML for speech, ML for images & video, fraud detection, personalization & recommendation, and time series. The linked article contains a scaled-down version of the image; the original version is best!

Verified Author Badges for Serverless App Repository – The authors of applications in the Serverless Application Repository can now apply for a Verified Author badge that will appear next to the author’s name on the application card and the detail page.

Cloud Innovation Centers – We announced that we will open three more Cloud Innovation Centers in 2020 (one in Australia and two in Bahrain), bringing the global total to eleven.

Machine Learning Embark – This new program is designed to help companies transform their development teams into machine learning practitioners. It is based on our own internal experience, and will help to address and overcome common challenges in the machine learning journey. Read the blog post to learn more.

Enjoy!

Jeff;

Check out The Amazon Builders’ Library – This is How We Do It!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/check-out-the-amazon-builders-library-this-is-how-we-do-it/

Amazon customers often tell us that they want to know more about how we build and run our business. On the retail side, they tour Amazon Fulfillment Centers and see how we organize our warehouses. Corporate customers often ask about our Leadership Principles, and sometimes adopt (and then adapt) them for their own use. I regularly speak with customers in our Executive Briefing Center (EBC), and talk to them about working backwards, PRFAQs, narratives, bar-raising, accepting failure as part of long-term success, and our culture of innovation.

The same curiosity that surrounds our business surrounds our development culture. We are often asked how we design, build, measure, run, and scale the hardware and software systems that underlie Amazon.com, AWS, and our other businesses.

New Builders’ Library
Today I am happy to announce The Amazon Builders’ Library. We are launching with a collection of detailed articles that will tell you exactly how we build and run our systems, each one written by the senior technical leaders who have deep expertise in that part of our business.

This library is designed to give you direct access to the theory and the practices that underlie our work. Students, developers, dev managers, architects, and CTOs will all find this content to be helpful. This is the content that is “not sold in stores” and not taught in school!

The library is organized by category:

Architecture – The design decisions that we make when designing a cloud service that help us to optimize for security, durability, high availability, and performance.

Software Delivery & Operations – The process of releasing new software to the cloud and maintaining health & high availability thereafter.

Inside the Library
I took a quick look at two of the articles while writing this post, and learned a lot!

Avoiding insurmountable queue backlogs – Principal Engineer David Yanacek explores the ins and outs of message queues, examining the benefits and the risks, including many of the failure modes that can arise. He talks about how queues are used to power AWS Lambda and AWS IoT Core, and describes the sophisticated strategies that are used to maintain responsiveness and to implement (in his words) “magical resource isolation.” David shares multiple patterns that are used to create asynchronous multitenant systems that are resilient, including use of multiple queues, shuffle sharding, delay queues, back-pressure, and more.

Challenges with distributed systems – Senior Principal Engineer Jacob Gabrielson discusses the many ways that distributed systems can fail. After defining three distinct types of systems (offline, soft real-time, and hard real-time), he uses an analogy with Bizarro to explain why hard real-time systems are (again, in his words) “frankly, a bit on the evil side.” Building on an example based on Pac-Man, he adds some request/reply communication and enumerates all of the ways that it can succeed or fail. He discusses fate sharing and how it can be used to reduce the number of test cases, and also talks about many of the other difficulties that come with testing distributed systems.

These are just two of the articles; be sure to check out the entire collection.

More to Come
We’ve got a lot more content in the pipeline, and we are also interested in your stories. Please feel free to leave feedback on this post, and we’ll be in touch.

Jeff;

 

AWS Launches & Previews at re:Invent 2019 – Wednesday, December 4th

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-launches-previews-at-reinvent-2019-wednesday-december-4th/

Here’s what we announced today:

Amplify DataStore – This is a persistent, on-device storage repository that will help you to synchronize data across devices and to handle offline operations. It can be used as a standalone local datastore for web and mobile applications that have no connection to the cloud or an AWS account. When used with a cloud backend, it transparently synchronizes data with AWS AppSync.

Amplify iOS and Amplify Android – These open source libraries enable you to build scalable and secure mobile applications. You can easily add analytics, AI/ML, API (GraphQL and REST), datastore, and storage functionality to your mobile and web applications. The use case-centric libraries provide a declarative interface that enables you to programmatically apply best practices with abstractions. The libraries, along with the Amplify CLI, a toolchain to create, integrate, and manage the cloud services used by your applications, are part of the Amplify Framework.

Amazon Neptune Workbench – You can now query your graphs from within the Neptune Console using either Gremlin or SPARQL queries. You get a fully managed, interactive development environment that supports live code and narrative text within Jupyter notebooks. In addition to queries, the notebooks support bulk loading, query planning, and query profiling. To get started, visit the Neptune Console.

Amazon Chime Meetings App for Slack – This new app allows Slack users to start and join Amazon Chime online meetings from their Slack workspace channels and conversations. Slack users that are new to Amazon Chime will be auto-registered with Chime when they use the app for the first time, and can get access to all of the benefits of Amazon Chime meetings from their Slack workspace. Administrators of Slack workspaces can install the Amazon Chime Meetings App for Slack from the Slack App Directory. To learn more, visit this blog post.

HTTP APIs for Amazon API Gateway in Preview – This is a new API Gateway feature that will let you build cost-effective, high-performance RESTful APIs for serverless workloads using Lambda functions and other services with an HTTP endpoint. HTTP APIs are optimized for performance—they offer the core functionality of API Gateway at a cost savings of up to 70% compared to REST APIs in API Gateway. You will be able to create routes that map to multiple disparate backends, define & apply authentication and authorization to routes, set up rate limiting, and use custom domains to route requests to the APIs. Visit this blog post to get started.

Windows gMSA Support in ECS – Amazon Elastic Container Service (ECS) now supports Windows group Managed Service Account (gMSA), a new capability that allows you to authenticate and authorize your ECS-powered Windows containers with network resources using an Active Directory (AD). You can now easily use Integrated Windows Authentication with your Windows containers on ECS to secure services.

Jeff;

 

Amplify DataStore – Simplify Development of Offline Apps with GraphQL

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amplify-datastore-simplify-development-of-offline-apps-with-graphql/

The open source Amplify Framework is a command line tool and a library allowing web & mobile developers to easily provision and access cloud based services. For example, if I want to create a GraphQL API for my mobile application, I use amplify add api on my development machine to configure the backend API. After answering a few questions, I type amplify push to create an AWS AppSync API backend in the cloud. Amplify generates code allowing my app to easily access the newly created API. Amplify supports popular web frameworks, such as Angular, React, and Vue. It also supports mobile applications developed with React Native, Swift for iOS, or Java for Android. If you want to learn more about how to use Amplify for your mobile applications, feel free to attend one of the workshops (iOS or React Native) we prepared for the re:Invent 2019 conference.

AWS customers told us that the most difficult tasks when developing web & mobile applications are synchronizing data across devices and handling offline operations. Ideally, when a device is offline, your customers should be able to continue to use your application, not only to access data but also to create and modify it. When the device comes back online, the application must reconnect to the backend, synchronize the data, and resolve conflicts, if any. It requires a lot of undifferentiated code to correctly handle all edge cases, even when using AWS AppSync SDK’s on-device cache with offline mutations and delta sync.

Today, we are introducing Amplify DataStore, a persistent on-device storage repository for developers to write, read, and observe changes to data. Amplify DataStore allows developers to write apps leveraging distributed data without writing additional code for offline or online scenarios. Amplify DataStore can be used as a stand-alone local datastore in web and mobile applications, with no connection to the cloud, or the need to have an AWS Account. However, when used with a cloud backend, Amplify DataStore transparently synchronizes data with an AWS AppSync API when network connectivity is available. Amplify DataStore automatically versions data and implements conflict detection and resolution in the cloud using AppSync. The toolchain also generates object definitions for your programming language based on the GraphQL schema you provide.

Let’s see how it works.

I first install the Amplify CLI and create a React App. This is standard React; you can find the script on my git repo. I add Amplify DataStore to the app with npx amplify-app. npx is specific to NodeJS; Amplify DataStore also integrates with native mobile toolchains, such as the Gradle plugin for Android Studio and CocoaPods, which creates custom Xcode build phases for iOS.

Now that the scaffolding of my app is done, I add a GraphQL schema representing two entities: Posts and Comments on these posts. I install the dependencies and use AWS Amplify CLI to generate the source code for the objects defined in the GraphQL schema.

# add a graphql schema to amplify/backend/api/amplifyDatasource/schema.graphql
echo "enum PostStatus {
  ACTIVE
  INACTIVE
}

type Post @model {
  id: ID!
  title: String!
  comments: [Comment] @connection(name: "PostComments")
  rating: Int!
  status: PostStatus!
}
type Comment @model {
  id: ID!
  content: String
  post: Post @connection(name: "PostComments")
}" > amplify/backend/api/amplifyDatasource/schema.graphql

# install dependencies 
npm i @aws-amplify/core @aws-amplify/DataStore @aws-amplify/pubsub

# generate the source code representing the model 
npm run amplify-modelgen

# create the API in the cloud 
npm run amplify-push

@model and @connection are directives that the Amplify GraphQL Transformer uses to generate code. Objects annotated with @model are top level objects in your API; they are stored in DynamoDB, and you can make them searchable, version them, or restrict their access to authorized users only. @connection allows you to express 1-n relationships between objects, similarly to what you would define when using a relational database (you can use the @key directive to model n-n relationships).

The last step is to create the React app itself. To get started quickly, I download a very simple sample app:

# download a simple react app
curl -o src/App.js https://raw.githubusercontent.com/sebsto/amplify-datastore-js-e2e/master/src/App.js

# start the app 
npm run start

I connect my browser to the app at http://localhost:8080 and start to test the app.

The demo app provides a basic UI (as you can guess, I am not a graphic designer!) to create, query, and delete items. Amplify DataStore provides developers with an easy to use API to store, query, and delete data. Reads and writes are propagated in the background to your AppSync endpoint in the cloud. Amplify DataStore uses a local data store via a storage adapter; we ship IndexedDB for web and SQLite for mobile. Amplify DataStore is open source, so you can add support for other databases if needed.

From a code perspective, interacting with data is as easy as invoking the save(), delete(), or query() operations on the DataStore object (this is a JavaScript example; you would write similar code for Swift or Java). Notice that the query() operation accepts filters based on predicate expressions, such as item.rating("gt", 4) or Predicates.ALL.

function onCreate() {
  DataStore.save(
    new Post({
      title: `New title ${Date.now()}`,
      rating: 1,
      status: PostStatus.ACTIVE
    })
  );
}

function onDeleteAll() {
  DataStore.delete(Post, Predicates.ALL);
}

async function onQuery(setPosts) {
  const posts = await DataStore.query(Post, c => c.rating("gt", 4));
  setPosts(posts)
}

async function listPosts(setPosts) {
  const posts = await DataStore.query(Post, Predicates.ALL);
  setPosts(posts);
}

I connect to Amazon DynamoDB console and observe the items are stored in my backend:

There is nothing to change in my code to support offline mode. To simulate offline mode, I turn off my wifi, add two items in the app, and then turn the wifi on again. The app continues to operate as usual while offline. The only noticeable change is that the _version field is not updated while offline, as it is populated by the backend.

When the network is back, Amplify DataStore transparently synchronizes with the backend. I verify there are 5 items now in DynamoDB (the table name is different for each deployment, be sure to adjust the name for your table below):

aws dynamodb scan --table-name Post-raherug3frfibkwsuzphkexewa-amplify \
                   --filter-expression "#deleted <> :value"            \
                   --expression-attribute-names '{"#deleted" : "_deleted"}' \
                   --expression-attribute-values '{":value" : { "BOOL": true} }' \
                   --query "Count"

5 // <= there are now 5 non deleted items in the table !

Amplify DataStore leverages GraphQL subscriptions to keep track of changes that happen on the backend. Your customers can modify the data from another device, and Amplify DataStore takes care of synchronizing the local data store transparently. No GraphQL knowledge is required; Amplify DataStore takes care of the low level GraphQL API calls for you automatically. Real-time data, connections, scalability, fan-out and broadcasting are all handled by the Amplify client and AppSync, using the WebSocket protocol under the covers.

We are effectively using GraphQL as a network protocol to dynamically transform model instances to GraphQL documents over HTTPS.

To refresh the UI when a change happens on the backend, I add the following code in the useEffect() React hook. It uses the DataStore.observe() method to register a callback function ( msg => { ... } ). Amplify DataStore calls this function when an instance of Post changes on the backend.

const subscription = DataStore.observe(Post).subscribe(msg => {
  console.log(msg.model, msg.opType, msg.element);
  listPosts(setPosts);
});

Now, I open the AppSync console. I query existing Posts to retrieve a Post ID.

query ListPost {
  listPosts(limit: 10) {
    items {
      id
      title
      status
      rating
      _version
    }
  }
}

I choose the first post in my app, the one starting with 7d8… and I send the following GraphQL mutation:

mutation UpdatePost {
  updatePost(input: {
    id: "7d80688f-898d-4fb6-a632-8cbe060b9691"
    title: "updated title 13:56"
    status: ACTIVE
    rating: 7
    _version: 1
  }) {
    id
    title
    status
    rating
    _lastChangedAt
    _version
    _deleted    
  }
}

Immediately, I see the app receiving the notification and refreshing its user interface.

Finally, I test with multiple devices. I first create a hosting environment for my app using amplify add hosting and amplify publish. Once the app is published, I open the iOS Simulator and Chrome side by side. Both apps initially display the same list of items. I create new items in both apps and observe the apps refreshing their UI in near real time. At the end of my test, I delete all items.

I verify there are no more items in DynamoDB (the table name is different for each deployment, be sure to adjust the name for your table below):

aws dynamodb scan --table-name Post-raherug3frfibkwsuzphkexewa-amplify \
                   --filter-expression "#deleted <> :value"            \
                   --expression-attribute-names '{"#deleted" : "_deleted"}' \
                   --expression-attribute-values '{":value" : { "BOOL": true} }' \
                   --query "Count"

0 // <= all the items have been deleted !

When syncing local data with the backend, AWS AppSync keeps track of version numbers to detect conflicts. When there is a conflict, the default resolution strategy is to automerge the changes on the backend. Automerge is an easy strategy to resolve conflicts without writing client-side code. For example, let’s pretend I have an initial Post, and Bob & Alice update the post at the same time:

The original item:

{
   "_version": 1,
   "id": "25",
   "rating": 6,
   "status": "ACTIVE",
   "title": "DataStore is Available"
}
Alice updates the rating:

{
   "_version": 2,
   "id": "25",
   "rating": 10,
   "status": "ACTIVE",
   "title": "DataStore is Available"
}
At the same time, Bob updates the title:

{
   "_version": 2,
   "id": "25",
   "rating": 6,
   "status": "ACTIVE",
   "title": "DataStore is great !"
}
The final item after auto-merge is:

{
   "_version": 3,
   "id": "25",
   "rating": 10,
   "status": "ACTIVE",
   "title": "DataStore is great !"
}

Automerge strictly defines merging rules at field level, based on type information defined in the GraphQL schema. For example, List and Map are merged, and conflicting updates on scalars (such as numbers and strings) preserve the value existing on the server. Developers can choose other conflict resolution strategies: optimistic concurrency (conflicting updates are rejected) or custom (an AWS Lambda function is called to decide which version is the correct one). You can choose the conflict resolution strategy with amplify update api. You can read more about these different strategies in the AppSync documentation.

The full source code for this demo is available on my git repository. The app has less than 100 lines of code, 20% being just UI related. Notice that I did not write a single line of GraphQL code; everything happens in the Amplify DataStore.

Your Amplify DataStore cloud backend is available in all AWS Regions where AppSync is available, which, at the time I write this post are: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (London).

There are no additional charges to use Amplify DataStore in your application; you only pay for the backend resources you use, such as AppSync and DynamoDB (see here and here for the pricing detail). Both services have a free tier allowing you to discover and to experiment for free.

Amplify DataStore allows you to focus on the business value of your apps, instead of writing undifferentiated code. I can’t wait to discover the great applications you’re going to build with it.

— seb

AWS Launches & Previews at re:Invent 2019 – Tuesday, December 3rd

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-launches-previews-at-reinvent-2019-tuesday-december-3rd/

Whew, what a day. This post contains a summary of the announcements that we made today.

Launch Blog Posts
Here are detailed blog posts for the launches:

Other Launches
Here’s an overview of some launches that did not get a blog post. I’ve linked to the What’s New or product information pages instead:

EBS-Optimized Bandwidth Increase – Thanks to improvements to the Nitro system, all newly launched C5/C5d/C5n/C5dn, M5/M5d/M5n/M5dn, R5/R5d/R5n/R5dn, and P3dn instances will support 36% higher EBS-optimized instance bandwidth, up to 19 Gbps. In addition, newly launched High Memory instances (6, 9, 12 TB) will also support 19 Gbps of EBS-optimized instance bandwidth, a 36% increase from 14 Gbps. For details on each size, read more about Amazon EBS-Optimized Instances.

EC2 Capacity Providers – You will have additional control over how your applications use compute capacity within EC2 Auto Scaling Groups and when using AWS Fargate. You get an abstraction layer that lets you make late binding decisions on capacity, including the ability to choose how much Spot capacity you would like to use. Read the What’s New to learn more.

Previews
Here’s an overview of the previews that we revealed today, along with links that will let you sign up and/or learn more (most of these were in Andy’s keynote):

AWS Wavelength – AWS infrastructure deployments that embed AWS compute and storage services within the telecommunications providers’ datacenters at the edge of the 5G network to provide developers the ability to build applications that serve end-users with single-digit millisecond latencies. You will be able to extend your existing VPC to a Wavelength Zone and then make use of EC2, EBS, ECS, EKS, IAM, CloudFormation, Auto Scaling, and other services. This low-latency access to AWS will enable the next generation of mobile gaming, AR/VR, security, and video processing applications. To learn more, visit the AWS Wavelength page.

Amazon Managed Apache Cassandra Service (MCS) – This is a scalable, highly available, and managed Apache Cassandra-compatible database service. Amazon Managed Cassandra Service is serverless, so you pay for only the resources you use and the service automatically scales tables up and down in response to application traffic. You can build applications that serve thousands of requests per second with virtually unlimited throughput and storage. To learn more, read New – Amazon Managed Apache Cassandra Service (MCS).

Graviton2-Powered EC2 Instances – New Arm-based general purpose, compute-optimized, and memory-optimized EC2 instances powered by the new Graviton2 processor. The instances offer a significant performance benefit over the 5th generation (M5, C5, and R5) instances, and also raise the bar on security. To learn more, read Coming Soon – Graviton2-Powered General Purpose, Compute-Optimized, & Memory-Optimized EC2 Instances.

AWS Nitro Enclaves – AWS Nitro Enclaves will let you create isolated compute environments to further protect and securely process highly sensitive data such as personally identifiable information (PII), healthcare, financial, and intellectual property data within your Amazon EC2 instances. Nitro Enclaves uses the same Nitro Hypervisor technology that provides CPU and memory isolation for EC2 instances. To learn more, visit the Nitro Enclaves page. The Nitro Enclaves preview is coming soon and you can sign up now.

Amazon Detective – This service will help you to analyze and visualize security data at scale. You will be able to quickly identify the root causes of potential security issues or suspicious activities. It automatically collects log data from your AWS resources and uses machine learning, statistical analysis, and graph theory to build a linked set of data that will accelerate your security investigation. Amazon Detective can scale to process terabytes of log data and trillions of events. Sign up for the Amazon Detective Preview.

Amazon Fraud Detector – This service makes it easy for you to identify potential fraud that is associated with online activities. It uses machine learning and incorporates 20 years of fraud detection expertise from AWS and Amazon.com, allowing you to catch fraud faster than ever before. You can create a fraud detection model with a few clicks, and detect fraud related to new accounts, guest checkout, abuse of try-before-you-buy, and (coming soon) online payments. To learn more, visit the Amazon Fraud Detector page.

Amazon Kendra – This is a highly accurate and easy to use enterprise search service that is powered by machine learning. It supports natural language queries and will allow users to discover information buried deep within your organization’s vast content stores. Amazon Kendra will include connectors for popular data sources, along with an API to allow data ingestion from other sources. You can access the Kendra Preview from the AWS Management Console.

Contact Lens for Amazon Connect – This is a set of analytics capabilities for Amazon Connect that use machine learning to understand sentiment and trends within customer conversations in your contact center. Once enabled, specified calls are automatically transcribed using state-of-the-art machine learning techniques, fed through a natural language processing engine to extract sentiment, and indexed for searching. Contact center supervisors and analysts can look for trends, compliance risks, or contacts based on specific words and phrases mentioned in the call to effectively train agents, replicate successful interactions, and identify crucial company and product feedback. Sign up for the Contact Lens for Amazon Connect Preview.

Amazon Augmented AI (A2I) – This service will make it easy for you to build workflows that use a human to review low-confidence machine learning predictions. The service includes built-in workflows for common machine learning use cases including content moderation (via Amazon Rekognition) and text extraction (via Amazon Textract), and also allows you to create your own. You can use a pool of reviewers within your own organization, or you can access the workforce of over 500,000 independent contractors who are already performing machine learning tasks through Amazon Mechanical Turk. You can also make use of workforce vendors that are pre-screened by AWS for quality and adherence to security procedures. To learn more, read about Amazon Augmented AI (Amazon A2I), or visit the A2I Console to get started.

Amazon CodeGuru – This ML-powered service provides code reviews and application performance recommendations. It helps to find the most expensive (computationally speaking) lines of code, and gives you specific recommendations on how to fix or improve them. It has been trained on best practices learned from millions of code reviews, along with code from thousands of Amazon projects and the top 10,000 open source projects. It can identify resource leaks, data race conditions between concurrent threads, and wasted CPU cycles. To learn more, visit the Amazon CodeGuru page.

Amazon RDS Proxy – This is a fully managed database proxy that will help you better scale applications, including those built on modern serverless architectures, without worrying about managing connections and connection pools, while also benefiting from faster failover in the event of a database outage. It is highly available and deployed across multiple AZs, and integrates with IAM and AWS Secrets Manager so that you don’t have to embed your database credentials in your code. Amazon RDS Proxy is fully compatible with MySQL protocol and requires no application change. You will be able to create proxy endpoints and start using them in minutes. To learn more, visit the RDS Proxy page.

Jeff;

New – AWS Step Functions Express Workflows: High Performance & Low Cost

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-step-functions-express-workflows-high-performance-low-cost/

We launched AWS Step Functions at re:Invent 2016, and our customers took to the service right away, using it as a core element of their multi-step workflows. Today, we see customers building serverless workflows that orchestrate machine learning training, report generation, order processing, IT automation, and many other multi-step processes. These workflows can run for up to a year, and are built around a workflow model that includes checkpointing, retries for transient failures, and detailed state tracking for auditing purposes.

Based on usage and feedback, our customers really like the core Step Functions model. They love the declarative specifications and the ease with which they can build, test, and scale their workflows. In fact, customers like Step Functions so much that they want to use them for high-volume, short-duration use cases such as IoT data ingestion, streaming data processing, and mobile application backends.

New Express Workflows
Today we are launching Express Workflows as an option to the existing Standard Workflows. The Express Workflows use the same declarative specification model (the Amazon States Language) but are designed for those high-volume, short-duration use cases. Here’s what you need to know:

Triggering – You can use events and read/write API calls associated with a long list of AWS services to trigger execution of your Express Workflows.

Execution Model – Express Workflows use an at-least-once execution model, and will not attempt to automatically retry any failed steps, but you can use Retry and Catch, as described in Error Handling. The steps are not checkpointed, so per-step status information is not available. Successes and failures are logged to CloudWatch Logs, and you have full control over the logging level.

Workflow Steps – Express Workflows support many of the same service integrations as Standard Workflows, with the exception of Activity Tasks. You can initiate long-running services such as AWS Batch, AWS Glue, and Amazon SageMaker, but you cannot wait for them to complete.

Duration – Express Workflows can run for up to five minutes of wall-clock time. They can invoke other Express or Standard Workflows, but cannot wait for them to complete. You can also invoke Express Workflows from Standard Workflows, composing both types in order to meet the needs of your application.

Event Rate – Express Workflows are designed to support a per-account invocation rate greater than 100,000 events per second. Accounts are configured for 6,000 events per second by default and we will, as usual, raise it on request.

Pricing – Standard Workflows are priced based on the number of state transitions. Express Workflows are priced based on the number of invocations and a GB/second charge based on the amount of memory used to track the state of the workflow during execution. While the pricing models are not directly comparable, Express Workflows will be far more cost-effective at scale. To learn more, read about AWS Step Functions Pricing.

As you can see, most of what you already know about Standard Workflows also applies to Express Workflows! You can replace some of your Standard Workflows with Express Workflows, and you can use Express Workflows to build new types of applications.
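
If you prefer to work programmatically, here is a minimal sketch using boto3 (the AWS SDK for Python) that creates an Express Workflow; the state machine definition, name, and role ARN are placeholders, and the key point is the type parameter set to EXPRESS:

import json
import boto3

sfn = boto3.client("stepfunctions")

# Placeholder Amazon States Language definition: a single Pass state
definition = {
    "Comment": "Minimal Express Workflow sketch",
    "StartAt": "HelloWorld",
    "States": {
        "HelloWorld": {"Type": "Pass", "Result": "done", "End": True}
    },
}

# type="EXPRESS" selects the new workflow type instead of the default STANDARD
response = sfn.create_state_machine(
    name="my-express-workflow",                              # placeholder name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/my-sfn-role",    # placeholder role
    type="EXPRESS",
)
print(response["stateMachineArn"])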

Using Express Workflows
I can create an Express Workflow and attach it to any desired events with just a few minutes of work. I simply choose the Express type in the console:

Then I define my state machine:

I configure the CloudWatch logging, and add a tag:

Now I can attach my Express Workflow to my event source. I open the EventBridge Console and create a new rule:

I define a pattern that matches PutObject events on a single S3 bucket:

I select my Express Workflow as the event target, add a tag, and click Create:

The particular event will occur only if I have a CloudTrail trail that is set up to record object-level activity:
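
The same wiring can also be scripted. Here is a minimal boto3 sketch that creates a rule matching PutObject calls recorded by CloudTrail for a single bucket and targets the Express Workflow; the bucket name, state machine ARN, and role ARN are placeholders:

import json
import boto3

events = boto3.client("events")

# Match S3 PutObject API calls recorded by CloudTrail for one bucket (placeholder name)
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"],
        "requestParameters": {"bucketName": ["my-image-bucket"]},
    },
}

events.put_rule(Name="images-to-express-workflow", EventPattern=json.dumps(pattern))

# Target the Express Workflow; the role must allow states:StartExecution
events.put_targets(
    Rule="images-to-express-workflow",
    Targets=[{
        "Id": "express-workflow",
        "Arn": "arn:aws:states:us-east-1:123456789012:stateMachine:my-express-workflow",
        "RoleArn": "arn:aws:iam::123456789012:role/my-eventbridge-role",
    }],
)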

Then I upload an image to my bucket, and check the CloudWatch Logs group to confirm that my workflow ran as expected:

As a more realistic test, I can upload several hundred images at once and confirm that my Lambda functions are invoked with high concurrency:

I can also use the new Monitoring tab in the Step Functions console to view the metrics that are specific to the state machine:

Available Now
You can create and use AWS Step Functions Express Workflows today in all AWS Regions!

Jeff;

New – Provisioned Concurrency for Lambda Functions

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-provisioned-concurrency-for-lambda-functions/

It’s really true that time flies, especially when you don’t have to think about servers: AWS Lambda just turned 5 years old and the team is always looking for new ways to help customers build and run applications in an easier way.

As more mission critical applications move to serverless, customers need more control over the performance of their applications. Today we are launching Provisioned Concurrency, a feature that keeps functions initialized and hyper-ready to respond in double-digit milliseconds. This is ideal for implementing interactive services, such as web and mobile backends, latency-sensitive microservices, or synchronous APIs.

When you invoke a Lambda function, the invocation is routed to an execution environment to process the request. When a function has not been used for some time, when you need to process more concurrent invocations, or when you update a function, new execution environments are created. The creation of an execution environment takes care of installing the function code and starting the runtime. Depending on the size of your deployment package, and the initialization time of the runtime and of your code, this can introduce latency for the invocations that are routed to a new execution environment. This latency is usually referred to as a “cold start”. For most applications this additional latency is not a problem. For some applications, however, this latency may not be acceptable.

When you enable Provisioned Concurrency for a function, the Lambda service will initialize the requested number of execution environments so they can be ready to respond to invocations.

Configuring Provisioned Concurrency
I create two Lambda functions that use the same Java code and can be triggered by Amazon API Gateway. To simulate a production workload, these functions are repeating some mathematical computation 10 million times in the initialization phase and 200,000 times for each invocation. The computation is using java.Math.Random and conditions (if ...) to avoid compiler optimizations (such as “unlooping” the iterations). Each function has 1GB of memory and the size of the code is 1.7MB.

I want to enable Provisioned Concurrency only for one of the two functions, so that I can compare how they react to a similar workload. In the Lambda console, I select one of the functions. In the configuration tab, I see the new Provisioned Concurrency settings.

I select Add configuration. Provisioned Concurrency can be enabled for a specific Lambda function version or alias (you can’t use $LATEST). You can have different settings for each version of a function. Using an alias makes it easier to apply these settings to the correct version of your function. In my case I select the alias live, which I keep updated to the latest version using the AWS SAM AutoPublishAlias function preference. For the Provisioned Concurrency, I enter 500 and Save.
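
The same setting can also be applied with the AWS SDKs. Here is a minimal boto3 (AWS SDK for Python) sketch, where the function name and alias are placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Enable 500 units of Provisioned Concurrency on the 'live' alias (placeholder names)
lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-function",
    Qualifier="live",                      # a published version or alias; $LATEST is not supported
    ProvisionedConcurrentExecutions=500,
)

# The status moves from IN_PROGRESS to READY once the execution environments are initialized
status = lambda_client.get_provisioned_concurrency_config(
    FunctionName="my-function",
    Qualifier="live",
)
print(status["Status"])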

Now, the Provisioned Concurrency configuration is in progress. The execution environments are being prepared to serve concurrent incoming requests based on my input. During this time the function remains available and continues to serve traffic.

After a few minutes, the concurrency is ready. With these settings, up to 500 concurrent requests will find an execution environment ready to process them. If I go above that, the usual scaling of Lambda functions still applies.

To generate some load, I use an Amazon Elastic Compute Cloud (EC2) instance in the same region. To keep it simple, I use the ab tool bundled with the Apache HTTP Server to call the two API endpoints 10,000 times with a concurrency of 500. Since these are new functions, I expect that:

  • For the function with Provisioned Concurrency enabled and set to 500, my requests are managed by pre-initialized execution environments.
  • For the other function, which has Provisioned Concurrency disabled, about 500 execution environments need to be provisioned, adding some latency to the same number of invocations, about 5% of the total.

One cool feature of the ab tool is that it reports the percentage of the requests served within a certain time. That is a very good way to look at API latency, as described in this post on Serverless Latency by Tim Bray.

Here are the results for the function with Provisioned Concurrency disabled:

Percentage of the requests served within a certain time (ms)
50% 351
66% 359
75% 383
80% 396
90% 435
95% 1357
98% 1619
99% 1657
100% 1923 (longest request)

Looking at these numbers, I see that 50% of the requests are served within 351ms, 66% of the requests within 359ms, and so on. It’s clear that something happens when I look at 95% or more of the requests: the time suddenly increases by about a second.

These are the results for the function with Provisioned Concurrency enabled:

Percentage of the requests served within a certain time (ms)
50% 352
66% 368
75% 382
80% 387
90% 400
95% 415
98% 447
99% 513
100% 593 (longest request)

Let’s compare those numbers in a graph.

As expected for my test workload, I see a big difference in the response time of the slowest 5% of the requests (between 95% and 100%), where the function with Provisioned Concurrency disabled shows the latency added by the creation of new execution environments and the (slow) initialization in my function code.

In general, the amount of latency added depends on the runtime you use, the size of your code, and the initialization required by your code to be ready for a first invocation. As a result, the added latency can be more, or less, than what I experienced here.

The number of invocations affected by this additional latency depends on how often the Lambda service needs to create new execution environments. Usually that happens when the number of concurrent invocations increases beyond what is already provisioned, or when you deploy a new version of a function.

A small percentage of slow response times (generally referred to as tail latency) really makes a difference in end user experience. Over an extended period of time, most users are affected during some of their interactions. With Provisioned Concurrency enabled, user experience is much more stable.

Provisioned Concurrency is a Lambda feature and works with any trigger. For example, you can use it with WebSocket APIs, GraphQL resolvers, or IoT Rules. This feature gives you more control when building serverless applications that require low latency, such as web and mobile apps, games, or any service that is part of a complex transaction.

Available Now
Provisioned Concurrency can be configured using the console, the AWS Command Line Interface (CLI), or AWS SDKs for new or existing Lambda functions, and is available today in the following AWS Regions: in US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and Europe (Stockholm), Middle East (Bahrain), and South America (São Paulo).

You can use the AWS Serverless Application Model (SAM) and SAM CLI to test, deploy and manage serverless applications that use Provisioned Concurrency.

With Application Auto Scaling you can automate configuring the required concurrency for your functions. As policies, Target Tracking and Scheduled Scaling are supported. Using these policies, you can automatically increase the amount of concurrency during times of high demand and decrease it when the demand decreases.
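
Here is a minimal sketch of the target tracking option with boto3; the function name, alias, capacity bounds, and utilization target below are placeholders:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the alias as a scalable target for Provisioned Concurrency (placeholder names)
autoscaling.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId="function:my-function:live",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=100,
    MaxCapacity=500,
)

# Keep provisioned concurrency utilization around 70%
autoscaling.put_scaling_policy(
    ServiceNamespace="lambda",
    ResourceId="function:my-function:live",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    PolicyName="my-target-tracking-policy",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 0.70,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
        },
    },
)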

You can also use Provisioned Concurrency today with AWS Partner tools, including configuring Provisioned Concurrency settings with the Serverless Framework and Terraform, or viewing metrics with Datadog, Epsagon, Lumigo, New Relic, SignalFx, SumoLogic, and Thundra.

You only pay for the amount of concurrency that you configure and for the period of time that you configure it. Pricing in US East (N. Virginia) is $0.015 per GB-hour for Provisioned Concurrency and $0.035 per GB-hour for Duration. The number of requests is charged at the same rate as normal functions. You can find more information in the Lambda pricing page.
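
As a back-of-the-envelope sketch with hypothetical numbers (500 units of Provisioned Concurrency on a 1 GB function, kept configured for 8 hours a day, at the US East (N. Virginia) rate quoted above):

# Hypothetical example, not a quote: concurrency x memory (GB) x hours x rate
provisioned_gb_hours = 500 * 1.0 * 8
provisioned_cost = provisioned_gb_hours * 0.015      # $0.015 per GB-hour

print(f"Provisioned Concurrency: ${provisioned_cost:.2f} per day")   # $60.00 per day
# Duration ($0.035 per GB-hour) and request charges apply on top, based on actual invocations.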

This new feature enables developers to use Lambda for a variety of workloads that require highly consistent latency. Let me know what you are going to use it for!

Danilo

New – Programmatic Access to EBS Snapshot Content

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-programmatic-access-to-ebs-snapshot-content/

EBS Snapshots are really cool! You can create them interactively from the AWS Management Console:

You can create them from the Command Line (create-snapshot) or by making a call to the CreateSnapshot function, and you can use the Data Lifecycle Manager (DLM) to set up automated snapshot management.

All About Snapshots
The snapshots are stored in Amazon Simple Storage Service (S3), and can be used to quickly create fresh EBS volumes as needed. The first snapshot of a volume contains a copy of every 512K block on the volume. Subsequent snapshots contain only the blocks that have changed since the previous snapshot. The incremental nature of the snapshots makes them very cost-effective, since (statistically speaking) many of the blocks on an EBS volume do not change all that often.

Let’s look at a quick example. Suppose that I create and format an EBS volume with 8 blocks (this is smaller than the allowable minimum size, but bear with me), copy some files to it, and then create my first snapshot (Snap1). The snapshot contains all of the blocks, and looks like this:

Then I add a few more files, delete one, and create my second snapshot (Snap2). The snapshot contains only the blocks that were modified after I created the first one, and looks like this:

I make a few more changes, and create a third snapshot (Snap3):

Keep in mind that the relationship between directories, files, and the underlying blocks is controlled by the file system, and is generally quite complex in real-world situations.

Ok, so now I have three snapshots, and want to use them to create a new volume. Each time I create a snapshot of an EBS volume, an internal reference to the previous snapshot is created. This allows CreateVolume to find the most recent copy of each block, like this:

EBS manages all of the details for me behind the scenes. For example, if I delete Snap2, the copy of Block 0 in the snapshot is also deleted since the copy in Snap3 is newer, but the copy of Block 4 in Snap2 becomes part of Snap3:

By the way, the chain of backward references (Snap3 to Snap1, or Snap3 to Snap2 to Snap1) is referred to as the lineage of the set of snapshots.

Now that I have explained all this, I should also tell you that you generally don’t need to know this, and can focus on creating, using, and deleting snapshots!

However…

Access to Snapshot Content
Today we are introducing a new set of functions that provide you with access to the snapshot content, as described above. These functions are designed for developers of backup/recovery, disaster recovery, and data management products & services, and will allow them to make their offerings faster and more cost-effective.

The new functions use a block index (0, 1, 2, and so forth), to identify a particular 512K block within a snapshot. The index is returned in the form of an encrypted token, which is meaningful only to the GetSnapshotBlock function. I have represented these tokens as T0, T1, and so forth below. The functions currently work on blocks of 512K bytes, with plans to support more block sizes in the future.

Here are the functions:

ListSnapshotBlocks – Identifies all of the blocks in a given snapshot as encrypted tokens. For Snap1, it would return [T0, T1, T2, T3, T4, T5, T6, T7] and for Snap2 it would return [T0, T4].

GetSnapshotBlock – Returns the content of a block. If the block is part of an encrypted snapshot, it will be returned in decrypted form.

ListChangedBlocks – Returns the list of blocks that have changed between two snapshots in a lineage, again as encrypted tokens. For Snap2 it would return [T0, T4] and for Snap3 it would return [T0, T5].
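
Here is a minimal sketch of the three functions with boto3, the AWS SDK for Python; the snapshot IDs are placeholders, and the block token returned by ListSnapshotBlocks is passed to GetSnapshotBlock:

import boto3

ebs = boto3.client("ebs")

# List the blocks (and their encrypted tokens) in a snapshot (placeholder ID)
blocks = ebs.list_snapshot_blocks(SnapshotId="snap-0123456789abcdef0")
first = blocks["Blocks"][0]

# Fetch the content of one block using its index and token
block = ebs.get_snapshot_block(
    SnapshotId="snap-0123456789abcdef0",
    BlockIndex=first["BlockIndex"],
    BlockToken=first["BlockToken"],
)
data = block["BlockData"].read()    # streaming body with the raw block content
print(len(data), block["Checksum"])

# List only the blocks that changed between two snapshots in the same lineage
changed = ebs.list_changed_blocks(
    FirstSnapshotId="snap-0123456789abcdef0",
    SecondSnapshotId="snap-0fedcba9876543210",
)
print(len(changed["ChangedBlocks"]))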

Like I said, these functions were built to address one specialized yet very important use case. Having said that, I am now highly confident that new and unexpected ones will pop up within 48 hours (feel free to share them with me)!

Available Now
The new functions are available now and you can start using them today in the US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Singapore), and Asia Pacific (Tokyo) Regions; they will become available in the remaining regions in the next few weeks. There is a charge for calls to the List and Get functions, and the usual KMS charges will apply when you call GetSnapshotBlock to access a block that is part of an encrypted snapshot.

Jeff;

AWS Compute Optimizer – Your Customized Resource Optimization Service

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/aws-compute-optimizer-your-customized-resource-optimization-service/

When I publicly speak about Amazon Elastic Compute Cloud (EC2) instance types, one frequently asked question I receive is “How can I be sure I choose the right instance type for my application?” Choosing the correct instance type is part art and part science. It usually involves knowing your application performance characteristics under normal circumstances (the baseline) and the expected daily variations, and picking an instance type that matches these characteristics. After that, you monitor key metrics to validate your choice, and you iterate over time to adjust the instance type that best suits the cost vs performance ratio for your application. Over-provisioning resources results in paying too much for your infrastructure, and under-provisioning resources results in lower application performance, possibly impacting customer experience.

Earlier this year, we launched Cost Explorer Rightsizing Recommendations, which help you identify under-utilized Amazon Elastic Compute Cloud (EC2) instances that may be downsized within the same family to save money. We received great feedback, and customers have been asking for recommendations that go beyond downsizing within the same instance family.

Today, we are announcing a new service to help you optimize compute resources for your workloads: AWS Compute Optimizer. AWS Compute Optimizer uses machine learning techniques to analyze the history of resource consumption on your account, and makes well-articulated and actionable recommendations tailored to your resource usage. AWS Compute Optimizer is integrated with AWS Organizations, so you can view recommendations for multiple accounts from your master AWS Organizations account.

To get started with AWS Compute Optimizer, I navigate to the AWS Management Console, select AWS Compute Optimizer and activate the service. It immediately starts to analyze my resource usage and history using Amazon CloudWatch metrics and delivers the first recommendations a few hours later.
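
If you prefer the command line, here is a minimal sketch of the opt-in, assuming the service exposes its UpdateEnrollmentStatus operation through the CLI:

# Opt this account (and, optionally, member accounts) into Compute Optimizer
$ aws compute-optimizer update-enrollment-status \
    --status Active \
    --include-member-accounts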

I can see the first recommendations on the AWS Compute Optimizer dashboard:

I click Over-provisioned: 8 instances to get the details:

I click on one of the eight links to get the actionable findings:

AWS Compute Optimizer offers multiple options. I scroll down to the bottom of the page to check the impact of applying this recommendation:

I can also access the recommendation from the AWS Command Line Interface (CLI):

$ aws compute-optimizer get-ec2-instance-recommendations --instance-arns arn:aws:ec2:us-east-1:012345678912:instance/i-0218a45abd8b53658
{
    "instanceRecommendations": [
        {
            "instanceArn": "arn:aws:ec2:us-east-1:012345678912:instance/i-0218a45abd8b53658",
            "accountId": "012345678912",
            "currentInstanceType": "m5.xlarge",
            "finding": "OVER_PROVISIONED",
            "utilizationMetrics": [
                {
                    "name": "CPU",
                    "statistic": "MAXIMUM",
                    "value": 2.0
                }
            ],
            "lookBackPeriodInDays": 14.0,
            "recommendationOptions": [
                {
                    "instanceType": "r5.large",
                    "projectedUtilizationMetrics": [
                        {
                            "name": "CPU",
                            "statistic": "MAXIMUM",
                            "value": 3.2
                        }
                    ],
                    "performanceRisk": 1.0,
                    "rank": 1
                },
                {
                    "instanceType": "t3.xlarge",
                    "projectedUtilizationMetrics": [
                        {
                            "name": "CPU",
                            "statistic": "MAXIMUM",
                            "value": 2.0
                        }
                    ],
                    "performanceRisk": 3.0,
                    "rank": 2
                },
                {
                    "instanceType": "m5.xlarge",
                    "projectedUtilizationMetrics": [
                        {
                            "name": "CPU",
                            "statistic": "MAXIMUM",
                            "value": 2.0
                        }
                    ],
                    "performanceRisk": 1.0,
                    "rank": 3
                }
            ],
            "recommendationSources": [
                {
                    "recommendationSourceArn": "arn:aws:ec2:us-east-1:012345678912:instance/i-0218a45abd8b53658",
                    "recommendationSourceType": "Ec2Instance"
                }
            ],
            "lastRefreshTimestamp": 1575006953.102
        }
    ],
    "errors": []
}

Keep in mind that AWS Compute Optimizer uses Amazon CloudWatch metrics as the basis for its recommendations. By default, CloudWatch metrics are those it can observe from the hypervisor’s point of view, such as CPU utilization, disk I/O, and network I/O. If I want AWS Compute Optimizer to take operating system level metrics into account, such as memory usage, I need to install the CloudWatch agent on my EC2 instance. AWS Compute Optimizer automatically recognizes these metrics when they are available and takes them into account when creating recommendations; otherwise, it shows “Data Unavailable” in the console.
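
As a sketch, installing and starting the agent on an Amazon Linux 2 instance might look like the following; the configuration file path is a placeholder, and the configuration itself must list the memory metrics (for example, mem_used_percent) you want published:

# Install the CloudWatch agent on Amazon Linux 2
$ sudo yum install -y amazon-cloudwatch-agent

# Load a configuration file and start the agent (placeholder config path)
$ sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
    -a fetch-config -m ec2 \
    -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s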

AWS customers told us that performance is not the only factor they look at when choosing a resource; the price vs. performance ratio is important too. For example, it might make sense to use a new generation instance family, such as m5, rather than an older generation (m3 or m4), even when the new generation seems over-provisioned for the workload. This is why, after AWS Compute Optimizer identifies a list of optimal AWS resources for your workload, it presents on-demand pricing, reserved instance pricing, reserved instance utilization, and reserved instance coverage, along with expected resource efficiency, alongside its recommendations.

AWS Compute Optimizer makes it easy to right-size your resources. However, keep in mind that while it is relatively easy to right-size modern applications, or stateless applications that scale horizontally, it can be very difficult to right-size older apps. Some older apps might not run correctly on a different hardware architecture, might need different drivers, or might not be supported by the application vendor at all. Be sure to check with your vendor before trying to optimize cloud resources for packaged or older apps.

We strongly advise you to thoroughly test your applications on the new recommended instance type before applying any recommendation into production.

Compute Optimizer is free to use and available initially in these AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), US East (Ohio), South America (São Paulo). Connect to the AWS Management Console today and discover how much you can save by choosing the right resource size for your cloud applications.

— seb

New for AWS Transit Gateway – Build Global Networks and Centralize Monitoring Using Network Manager

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-transit-gateway-build-global-networks-and-centralize-monitoring-using-network-manager/

As your company grows and gets the benefits of a cloud-based infrastructure, your on-premises sites like offices and stores increasingly need high performance private connectivity to AWS and to other sites at a reasonable cost. Growing your network is hard, because traditional branch networks based on leased lines are costly, and they suffer from the same lack of elasticity and agility as traditional data centers.

At the same time, it becomes increasingly complex to manage and monitor a global network that is spread across AWS regions and on-premises sites. You need to stitch together data from these diverse locations. This results in an inconsistent operational experience, increased costs and efforts, and missed insights from the lack of visibility across different technologies.

Today, we want to make it easier to build, manage, and monitor global networks with the following new capabilities for AWS Transit Gateway:

  • Transit Gateway Inter-Region Peering
  • Accelerated Site-to-Site VPN
  • AWS Transit Gateway Network Manager

These new networking capabilities enable you to optimize your network using AWS’s global backbone, and to centrally visualize and monitor your global network. More specifically:

  • Inter-Region Peering and Accelerated VPN improve application performance by leveraging the AWS Global Network. In this way, you can reduce the number of leased lines required to operate your network, optimizing your cost and improving agility. Transit Gateway Inter-Region Peering sends inter-region traffic privately over AWS’s global network backbone. Accelerated VPN uses AWS Global Accelerator to route VPN traffic from remote locations through the closest AWS edge location to improve connection performance.
  • Network Manager reduces the operational complexity of managing a global network across AWS and on-premises. With Network Manager, you set up a global view of your private network simply by registering your Transit Gateways and on-premises resources. Your global network can then be visualized and monitored via a centralized operational dashboard.

These features allow you to optimize connectivity from on-premises sites to AWS and also between on-premises sites, by routing traffic through Transit Gateways and the AWS Global Network, and centrally managing through Network Manager.

Visualizing Your Global Network
In the Network Manager console, which you can reach from the Transit Gateways section of the Amazon Virtual Private Cloud console, you have an overview of your global networks. Each global network includes AWS and on-premises resources. Specifically, it provides a central point of management for your AWS Transit Gateways, the physical devices and sites connected to those Transit Gateways via Site-to-Site VPN connections, and the AWS Direct Connect locations attached to them.

For example, this is the Geographic view of a global network covering North America and Europe with 5 Transit Gateways in 3 AWS Regions, 80 VPCs, 50 VPNs, 1 Direct Connect location, and 16 on-premises sites with 50 devices:

As I zoom in on the map, I get a description of what these nodes represent, for example whether they are AWS Regions, Direct Connect locations, or branch offices.

I can select any node in the map to get more information. For example, I select the US West (Oregon) AWS Region to see the details of the two Transit Gateways I am using there, including the state of all VPN connections, VPCs, and VPNs handled by the selected Transit Gateway.

Selecting a site, I get a centralized view with the status of the VPN connections, including site metadata such as address, location, and description. For example, here are the details of the Colorado branch offices.

In the Topology panel, I see the logical relationships among all the resources in my network. On the left here is the entire topology of my global network; on the right, the detail of the European part. Connection status is indicated by color in the topology view.

Selecting any node in the topology map displays details specific to the resource type (Transit Gateway, VPC, customer gateway, and so on) including links to the corresponding service in the AWS console to get more information and configure the resource.

Monitoring Your Global Network
Network Manager uses Amazon CloudWatch, which collects raw data and processes it into readable, near real-time metrics for data in/out, packets dropped, and VPN connection status.

These statistics are kept for 15 months, so that you can access historical information and gain a better perspective on how your global network is performing. You can also set alarms that watch for certain thresholds, and send notifications or take actions when those thresholds are met.
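
A minimal sketch of such an alarm, assuming the AWS/TransitGateway namespace with a PacketDropCountBlackhole metric and a TransitGateway dimension (the metric name, dimension, threshold, and SNS topic ARN are assumptions for illustration):

# Alarm when a Transit Gateway drops too many packets due to blackhole routes
$ aws cloudwatch put-metric-alarm \
    --alarm-name tgw-blackhole-drops \
    --namespace AWS/TransitGateway \
    --metric-name PacketDropCountBlackhole \
    --dimensions Name=TransitGateway,Value=tgw-0123456789abcdef0 \
    --statistic Sum \
    --period 300 \
    --evaluation-periods 1 \
    --threshold 1000 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:eu-west-1:123456789012:network-alerts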

For example, these are the last 12 hours of Monitoring for the Transit Gateway in Europe (Ireland).

In the global network view, you have a single view of all events affecting your network, simplifying root cause analysis in case of issues. Clicking on any of the messages in the console takes you to a more detailed view in the Events tab.

Your global network events are also delivered by CloudWatch Events. Using simple rules that you can quickly set up, you can match events and route them to one or more target functions or streams. To process the same events, you can also use the additional capabilities offered by Amazon EventBridge.

Network Manager sends the following types of events:

  • Topology changes, for example when a VPN connection is created for a transit gateway.
  • Routing updates, such as when a route is deleted in a transit gateway route table.
  • Status updates, for example in case a VPN tunnel’s BGP session goes down.
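
As a sketch, you can route all of these events to a notification topic with a simple rule; the aws.networkmanager event source string and the SNS topic ARN below are assumptions for illustration:

# Match every event emitted by Network Manager
$ aws events put-rule \
    --name network-manager-events \
    --event-pattern '{"source":["aws.networkmanager"]}'

# Send matching events to an SNS topic (placeholder ARN)
$ aws events put-targets \
    --rule network-manager-events \
    --targets "Id"="1","Arn"="arn:aws:sns:us-west-2:123456789012:network-alerts"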

Configuring Your Global Network
To get your on-premises resources included in the visualizations and monitoring described above, you need to provide Network Manager with information about your on-premises devices, sites, and links. You also need to associate each device with the customer gateway it hosts for VPN connections; a minimal CLI sketch of this flow follows.
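
Here is what that flow might look like from the command line; the global network ID, site and device IDs, ARNs, and location values below are placeholders, and the exact parameter shapes are my assumption of how the networkmanager commands surface:

# Create a global network and register an existing Transit Gateway with it
$ aws networkmanager create-global-network --description "My global network"
$ aws networkmanager register-transit-gateway \
    --global-network-id global-network-0123456789abcdef0 \
    --transit-gateway-arn arn:aws:ec2:us-west-2:123456789012:transit-gateway/tgw-0123456789abcdef0

# Describe an on-premises site and one of its devices
$ aws networkmanager create-site \
    --global-network-id global-network-0123456789abcdef0 \
    --description "Colorado branch office" \
    --location "Address=Denver CO,Latitude=39.74,Longitude=-104.99"
$ aws networkmanager create-device \
    --global-network-id global-network-0123456789abcdef0 \
    --site-id site-0123456789abcdef0 \
    --description "Branch router"

# Associate the device with the customer gateway used by its VPN connection
$ aws networkmanager associate-customer-gateway \
    --global-network-id global-network-0123456789abcdef0 \
    --device-id device-0123456789abcdef0 \
    --customer-gateway-arn arn:aws:ec2:us-west-2:123456789012:customer-gateway/cgw-0123456789abcdef0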

Our software-defined wide area network (SD-WAN) partners, such as Cisco, Aruba, Silver Peak, and Aviatrix, have configured their SD-WAN devices to connect to Transit Gateway Network Manager in just a few clicks. Their SD-WANs also define the on-premises devices, sites, and links in Network Manager automatically. These SD-WAN integrations let you include your on-premises network in the Network Manager global dashboard view without having to enter the information manually.

Available Now
AWS Transit Gateway Network Manager is a global service available for Transit Gateways in the following regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Paris), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Mumbai), Canada (Central), South America (São Paulo).

There is no additional cost for using Network Manager. You pay for the network resources you use, like Transit Gateways, VPNs, and so on. Here you can find more information on pricing for VPN and Transit Gateway.

You can learn more in the documentation for Network Manager, Inter-Region Peering, and Accelerated VPN.

With these new features, you can take advantage of the performance of our AWS Global Network, and simplify network management and monitoring across your AWS and on-premises resources.

Danilo

Amazon EC2 Update – Inf1 Instances with AWS Inferentia Chips for High Performance Cost-Effective Inferencing

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-ec2-update-inf1-instances-with-aws-inferentia-chips-for-high-performance-cost-effective-inferencing/

Our customers are taking to Machine Learning in a big way. They are running many different types of workloads, including object detection, speech recognition, natural language processing, personalization, and fraud detection. When running on large-scale production workloads, it is essential that they can perform inferencing as quickly and as cost-effectively as possible. According to what they have told us, inferencing can account for up to 90% of the cost of their machine learning work.

New Inf1 Instances
Today we are launching Inf1 instances in four sizes. These instances are powered by AWS Inferentia chips, and are designed to provide you with fast, low-latency inferencing.

AWS Inferentia chips are designed to accelerate the inferencing process. Each chip can deliver the following performance:

  • 64 teraOPS on 16-bit floating point (FP16 and BF16) and mixed-precision data.
  • 128 teraOPS on 8-bit integer (INT8) data.

The chips also include a high-speed interconnect, and lots of memory. With 16 chips on the largest instance, your new and existing TensorFlow, PyTorch, and MXNet inferencing workloads can benefit from over 2 petaOPS of inferencing power. When compared to the G4 instances, the Inf1 instances offer up to 3x the inferencing throughput, and up to 40% lower cost per inference.

Here are the sizes and specs:

Instance Name    Inferentia Chips    vCPUs    RAM        EBS Bandwidth     Network Bandwidth
inf1.xlarge      1                   4        8 GiB      Up to 3.5 Gbps    Up to 25 Gbps
inf1.2xlarge     1                   8        16 GiB     Up to 3.5 Gbps    Up to 25 Gbps
inf1.6xlarge     4                   24       48 GiB     3.5 Gbps          25 Gbps
inf1.24xlarge    16                  96       192 GiB    14 Gbps           100 Gbps

The instances make use of custom Second Generation Intel® Xeon® Scalable (Cascade Lake) processors, and are available in On-Demand, Spot, and Reserved Instance form, or as part of a Savings Plan, in the US East (N. Virginia) and US West (Oregon) Regions. You can launch the instances directly, and they will also be available soon through Amazon SageMaker, Amazon ECS, and Amazon Elastic Kubernetes Service.

Using Inf1 Instances
AWS Deep Learning AMIs have been updated and contain versions of TensorFlow and MXNet that have been optimized for use on Inf1 instances, with PyTorch support coming very soon. The AMIs contain the new AWS Neuron SDK, which includes commands to compile, optimize, and execute your ML models on the Inferentia chips. You can also include the SDK in your own AMIs and images.
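
As a quick sketch, launching an Inf1 instance from one of these AMIs is a regular EC2 launch; the AMI ID and key pair name below are hypothetical placeholders:

# Launch an inf1.xlarge instance from a Deep Learning AMI (placeholder IDs)
$ aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type inf1.xlarge \
    --key-name my-key-pair \
    --count 1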

You can build and train your model on a GPU instance such as a P3 or P3dn, and then move it to an Inf1 instance for production use. You can use a model natively trained in FP16, or you can use models that have been trained to 32 bits of precision and have AWS Neuron automatically convert them to BF16 form. Large models, such as those for language translation or natural language processing, can be split across multiple Inferentia chips in order to reduce latency.

The AWS Neuron SDK also allows you to assign models to Neuron Compute Groups, and to run them in parallel. This allows you to maximize hardware utilization and to use multiple models as part of Neuron Core Pipeline mode, taking advantage of the large on-chip cache on each Inferentia chip. Be sure to read the AWS Neuron SDK Tutorials to learn more!

Jeff;

 

AWS Outposts Now Available – Order Yours Today!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-outposts-now-available-order-your-racks-today/

We first discussed AWS Outposts at re:Invent 2018. Today, I am happy to announce that we are ready to take orders and install Outposts racks in your data center or colo facility.

Why Outposts?
This new and unique AWS offering is a comprehensive, single-vendor compute & storage solution that is designed to meet the needs of customers who need local processing and very low latency. You no longer need to spend time creating detailed hardware specifications, soliciting & managing bids from multiple disparate vendors, or racking & stacking individual servers. Instead, you place your order online, take delivery, and relax while trained AWS technicians install, connect, set up, and verify your Outposts.

Once installed, we take care of monitoring, maintaining, and upgrading your Outposts. All of the hardware is modular and can be replaced in the field without downtime. When you need more processing or storage, or want to upgrade to newer generations of EC2 instances, you can initiate the request with a couple of clicks and we will take care of the rest.

Everything that you and your team already know about AWS still applies. You use the same APIs, tools, and operational practices. You can create a single deployment pipeline that targets your Outposts and your cloud-based environments, and you can create hybrid architectures that span both.

Each Outpost is connected to and controlled by a specific AWS Region. The region treats a collection of up to 16 racks at a single location as a unified capacity pool. The collection can be associated with subnets of one or more VPCs in the parent region.

Outposts Hardware
The Outposts hardware is the same as what we use in our own data centers, with some additional security devices. The hardware is designed for reliability & efficiency, with redundant network switches and power supplies, and DC power distribution. Outpost racks are 80″ tall, 24″ wide, 48″ deep, and can weigh up to 2000 lbs. They arrive fully assembled, and roll in on casters, ready for connection to power and networking.

To learn more about the Outposts hardware, watch my colleague Anthony Liguori explain it:

Outposts supports multiple Intel®-powered Nitro-based EC2 instance types including C5, C5d, M5, M5d, R5, R5d, G4, and I3en. You can choose the mix of types that is right for your environment, and you can add more later. You will also be able to upgrade to newer instance types as they become available.

On the storage side, Outposts support EBS gp2 (general purpose SSD) storage, with a minimum size of 2.7 TB.

Outpost Networking
Each Outpost has a pair of networking devices, each with 400 Gbps of connectivity and support for 1 GigE, 10 GigE, 40 GigE, and 100 Gigabit fiber connections. The connections are used to host a pair of Link Aggregation Groups, one for the link to the parent region, and another to your local network. The link to the parent region is used for control and VPC traffic; all connections originate from the Outpost. Traffic to and from your local network flows through a Local Gateway (LGW), giving you full control over access and routing. Here’s an overview of the networking topology within your premises:

You will need to allocate a /26 CIDR block to each Outpost, which is advertised as a pair of /27 blocks in order to protect against device and link failures. The CIDR block can be within your own range of public IP addresses, or it can be an RFC 1918 private address plus NAT at your network edge.

From a networking perspective, an Outpost is simply a new subnet on an existing VPC in the parent region. Here’s how to create one:

$ aws ec2 create-subnet --vpc-id VVVVVV \
  --cidr-block A.B.C.D/24 \
  --outpost-arn arn:aws:outposts:REGION:ACCOUNT_ID:outpost:OUTPOST_ID 

If you have Cisco or Juniper hardware in your data center, the following guides will be helpful:

  • Cisco – Outposts Solution Overview.
  • Juniper – AWS Outposts in a Juniper QFX-Based Datacenter.

In most cases you will want to use AWS Direct Connect to establish a connection between your Outposts and the parent AWS Region. For more information on this and to learn a lot more about how to plan your Outposts network model, consult the How it Works documentation.

Outpost Services
We are launching with support for Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Amazon ECS, Amazon Elastic Kubernetes Service, and Amazon EMR, with additional services in the works. Amazon RDS for PostgreSQL and Amazon RDS for MySQL are available in preview form.

Your applications can also make use of any desired services in the parent region, including Amazon Simple Storage Service (S3), Amazon DynamoDB, Auto Scaling, AWS CloudFormation, Amazon CloudWatch, AWS CloudTrail, AWS Config, Load Balancing, and so forth. You can create and use Interface Endpoints from within the VPC, or you can access the services through the regional public endpoints. Services & applications in the parent region that launch, manage, or refer to EC2 instances or EBS volumes can operate on those objects within an Outpost with no changes.
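
For example, here is a sketch of creating an interface endpoint in the VPC so that instances on the Outpost can reach a regional service privately; the IDs below are placeholders and the service name is just one illustrative choice:

# Create an interface endpoint for the EC2 API in the VPC (placeholder IDs)
$ aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-west-2.ec2 \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0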

Purchasing an Outpost
The process of purchasing an Outpost is a bit more involved than that of launching an EC2 instance or creating an S3 bucket, but it should be straightforward. I don’t actually have a data center, and won’t actually take delivery of an Outpost, but I’ll do my best to show you the actual experience!

The first step is to describe and qualify my site. I enter my address:

I confirm temperature, humidity, and airflow at the rack position, that my loading dock can accommodate the shipping crate, and that there’s a clear access path from the loading dock to the rack’s final resting position:

I provide information about my site’s power configuration:

And the networking configuration:

After I create the site, I create my Outpost:

Now I am ready to order my hardware. I can choose any one of 18 standard configurations, with varied amounts of compute capacity and storage (custom configurations are also available), and click Create order to proceed:

The EC2 capacity shown above indicates the largest instance size of a particular type. I can launch instances of that size, or I can use smaller sizes, as needed. For example, the capacity of the OR-HUZEI16 configuration that I selected is listed as 7 m5.24xlarge instances and 3 c5.24xlarge instances. I could launch a total of 10 instances in those sizes, or (if I needed lots of smaller ones) I could launch 168 m5.xlarge instances and 72 c5.xlarge instances. I could also use a variety of sizes, subject to available capacity and the details of how the instances are assigned to the hardware.

I confirm my order, choose the Outpost that I created earlier, and click Submit order:

My order will be reviewed, my colleagues might give me a call to review some details, and my Outpost will be shipped to my site. A team of AWS installers will arrive to unpack & inspect the Outpost, transport it to its resting position in my data center, and work with my data center operations (DCO) team to get it connected and powered up.

Once the Outpost is powered up and the network is configured, it will set itself up automatically. At that point I can return to the console and monitor capacity exceptions (situations where demand exceeds supply), capacity availability, and capacity utilization:

Using an Outpost
The next step is to set up one or more subnets in my Outpost, as shown above. Then I can launch EC2 instances and create EBS volumes in the subnet, just as I would with any other VPC subnet.
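
For example, here is a sketch of launching an instance into the Outpost subnet and creating a gp2 volume on the Outpost; the subnet ID, AMI ID, and Availability Zone are placeholders, and the Outpost ARN follows the same placeholder convention as the create-subnet example above:

# Launch an instance into the Outpost subnet (placeholder IDs)
$ aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type m5.xlarge \
    --subnet-id subnet-0123456789abcdef0

# Create a gp2 volume that lives on the Outpost
$ aws ec2 create-volume \
    --availability-zone us-west-2a \
    --size 100 \
    --volume-type gp2 \
    --outpost-arn arn:aws:outposts:REGION:ACCOUNT_ID:outpost:OUTPOST_ID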

I can ask for more capacity by selecting Increase capacity from the Actions menu:

The AWS team will contact me within 3 business days to discuss my options.

Things to Know
Here are a couple of other things to keep in mind when thinking about using Outposts:

Availability – Outposts are available in the following countries:

  • North America (United States)
  • Europe (All EU countries, Switzerland, Norway)
  • Asia Pacific (Japan, South Korea, Australia)

Support – You must subscribe to AWS Enterprise Support in order to purchase an Outpost. We will remotely monitor your Outpost, and keep it happy & healthy over time. We’ll look for failing components and arrange to replace them without disturbing your operations.

Billing & Payment Options – You can purchase Outposts on a three-year term, with All Upfront, Partial Upfront, and No Upfront payment options. The purchase price covers all EC2 and EBS usage within the Outpost; other services are billed by the hour, with the EC2 and EBS portions removed. You pay the regular inter-AZ data transfer charge to move data between an Outpost and another subnet in the same VPC, and the usual AWS data transfer charge for data that exits to the Internet across the link to the parent region.

Capacity Expansion – Today, you can group up to 16 racks into a single capacity pool. Over time we expect to allow you to group thousands of racks together in this manner.

Stay Tuned
This is, like most AWS announcements, just the starting point. We have a lot of cool stuff in the works, and it is still Day One for AWS Outposts!

Jeff;