All posts by Jeff Barr

AWS Data Exchange – Find, Subscribe To, and Use Data Products

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-data-exchange-find-subscribe-to-and-use-data-products/

We live in a data-intensive, data-driven world! Organizations of all types collect, store, process, and analyze data, and use it to inform and improve their decision-making processes. The AWS Cloud is well-suited to all of these activities; it offers vast amounts of storage, access to any conceivable amount of compute power, and many different types of analytical tools.

In addition to generating and working with data internally, many organizations generate and then share data sets with the general public or within their industry. We made some initial steps to encourage this back in 2008 with the launch of AWS Public Data Sets (Paging Researchers, Analysts, and Developers). That effort has evolved into the Registry of Open Data on AWS (New – Registry of Open Data on AWS (RODA)), which currently contains 118 interesting datasets, with more added all the time.

New AWS Data Exchange
Today, we are taking the next step forward, and are launching AWS Data Exchange. This addition to AWS Marketplace contains over one thousand licensable data products from over 80 data providers. There’s a diverse catalog of free and paid offerings, in categories such as financial services, health care / life sciences, geospatial, weather, and mapping.

If you are a data subscriber, you can quickly find, procure, and start using these products. If you are a data provider, you can easily package, license, and deliver products of your own. Let’s take a look at Data Exchange from both vantage points, and then review some important details.

Let’s define a few important terms before diving in:

Data Provider – An organization that has one or more data products to share.

Data Subscriber – An AWS customer that wants to make use of data products from Data Providers.

Data Product – A collection of data sets.

Data Set – A container for data assets that belong together, grouped by revision.

Revision – A container for one or more data assets as of a point in time.

Data Asset – The actual data, in any desired format.

AWS Data Exchange for Data Subscribers
As a data subscriber, I click View product catalog and start out in the Discover data section of the AWS Data Exchange Console:

Products are available from a long list of vendors:

I can enter a search term, click Search, and then narrow down my results to show only products that have a Free pricing plan:

I can also search for products from a specific vendor that match a search term and have a Free pricing plan:

The second one looks interesting and relevant, so I click on 5 Digit Zip Code Boundaries US (TRIAL) to learn more:

I think I can use this in my app, and want to give it a try, so I click Continue to subscribe. I review the details, read the Data Subscription Agreement, and click Subscribe:

The subscription is activated within a few minutes, and I can see it in my list of Subscriptions:

Then I can download the set to my S3 bucket, and take a look. I click into the data set, and find the Revisions:

I click into the revision, and I can see the assets (containing the actual data) that I am looking for:

I select the asset(s) that I want, and click Export to Amazon S3. Then I choose a bucket, and click Export to proceed:

This creates a job that will copy the data to my bucket (extra IAM permissions are required here; read the Access Control documentation for more info):

The jobs run asynchronously and copy data from Data Exchange to the bucket. Jobs can be created interactively, as I just showed you, or programmatically. Once the data is in the bucket, I can access and process it in any desired way. I could, for example, use an AWS Lambda function to parse the ZIP file and use the results to update an Amazon DynamoDB table. Or, I could run an AWS Glue crawler to get the data into my Glue catalog, run an Amazon Athena query, and visualize the results in an Amazon QuickSight dashboard.
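For example, here is a minimal sketch of what the programmatic path might look like using the AWS SDK for Python (boto3). The data set, revision, and asset IDs, the bucket name, and the key are all placeholders, and the exact shape of the job details should be confirmed against the AWS Data Exchange API reference:

```python
import time

import boto3

dx = boto3.client('dataexchange', region_name='us-east-1')

# Create an export job; AWS Data Exchange needs permission to write to the bucket
job = dx.create_job(
    Type='EXPORT_ASSETS_TO_S3',
    Details={
        'ExportAssetsToS3': {
            'DataSetId': 'REPLACE_WITH_DATA_SET_ID',
            'RevisionId': 'REPLACE_WITH_REVISION_ID',
            'AssetDestinations': [{
                'AssetId': 'REPLACE_WITH_ASSET_ID',
                'Bucket': 'my-data-exchange-bucket',
                'Key': 'zip-code-boundaries/data.zip',
            }],
        },
    },
)
dx.start_job(JobId=job['Id'])

# The job runs asynchronously; poll until it reaches a terminal state
while True:
    state = dx.get_job(JobId=job['Id'])['State']
    if state in ('COMPLETED', 'ERROR', 'TIMED_OUT', 'CANCELLED'):
        print('Export job finished with state:', state)
        break
    time.sleep(10)
```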

Subscriptions can last from 1 to 36 months with an auto-renew option; subscription fees are billed to my AWS account each month.

AWS Data Exchange for Data Providers
Now I am going to put on my “data provider” hat and show you the basics of the publication process (the User Guide contains a more detailed walk-through). In order to license data, I must agree to the terms and conditions, and my application must be approved by AWS.

After I apply and have been approved, I start by creating my first data set. I click Data sets in the navigation, and then Create data set:

I describe my data set, and have the option to tag it, then click Create:

Next, I click Create revision to create the first revision to the data set:

I add a comment, and have the option to tag the revision before clicking Create:

I can copy my data from an existing S3 location, or I can upload it from my desktop:

I choose the second option, select my file, and it appears as an Imported asset after the import job completes. I review everything, and click Finalize for the revision:

My data set is ready right away, and now I can use it to create one or more products:

The console outlines the principal steps:

I can set up public pricing information for my product:

AWS Data Exchange lets me create private pricing plans for individual customers, and it also allows my existing customers to bring their existing (pre-AWS Data Exchange) licenses for my products along with them by creating a Bring Your Own Subscription offer.

I can use the Data Subscription Agreement (DSA) provided by AWS Data Exchange, use it as the basis for my own, or upload an existing one:

I can use the AWS Data Exchange API to create, update, list, and manage data sets and revisions to them. Functions include CreateDataSet, UpdateDataSet, ListDataSets, CreateRevision, UpdateAsset, and CreateJob.
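Here is a hedged sketch of the provider-side flow using boto3; the names, bucket, and key are placeholders, and you should double-check the parameter shapes against the API reference before relying on them:

```python
import boto3

dx = boto3.client('dataexchange', region_name='us-east-1')

# Create a data set and its first revision
data_set = dx.create_data_set(
    AssetType='S3_SNAPSHOT',
    Name='US Zip Code Boundaries (sample)',
    Description='Illustrative data set created via the API',
)
revision = dx.create_revision(DataSetId=data_set['Id'], Comment='Initial revision')

# Import an asset into the revision from an existing S3 object
import_job = dx.create_job(
    Type='IMPORT_ASSETS_FROM_S3',
    Details={
        'ImportAssetsFromS3': {
            'DataSetId': data_set['Id'],
            'RevisionId': revision['Id'],
            'AssetSources': [{'Bucket': 'my-provider-bucket', 'Key': 'boundaries.zip'}],
        },
    },
)
dx.start_job(JobId=import_job['Id'])

# After the import job completes, finalize the revision so it can be included in a product
dx.update_revision(DataSetId=data_set['Id'], RevisionId=revision['Id'], Finalized=True)
```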

Things to Know
Here are a few things that you should know about Data Exchange:

Subscription Verification – The data provider can also require additional information in order to verify my subscription. If that is the case, the console will ask me to supply the info, and the provider will review and approve or decline within 45 days:

Here is what the provider sees:

Revisions & Notifications – The Data Provider can revise their data sets at any time. The Data Subscriber receives a CloudWatch Event each time a product that they are subscribed to is updated; this can be used to launch a job to retrieve the latest revision of the assets. If you are implementing a system of this type and need some test events, find and subscribe to the Heartbeat product:
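As a rough illustration (not an official recipe), here is how a rule that routes Data Exchange events to a Lambda function might be wired up with boto3. The function ARN is a placeholder, and the exact detail-type strings to filter on should be confirmed in the documentation:

```python
import json

import boto3

events = boto3.client('events', region_name='us-east-1')

# Match events emitted by AWS Data Exchange; narrow this with a "detail-type" filter
# once you have confirmed the exact string used for revision publication
events.put_rule(
    Name='data-exchange-revision-updates',
    EventPattern=json.dumps({'source': ['aws.dataexchange']}),
    State='ENABLED',
)

# Send matching events to a Lambda function that exports the latest revision
# (the function also needs a resource-based permission allowing events.amazonaws.com to invoke it)
events.put_targets(
    Rule='data-exchange-revision-updates',
    Targets=[{
        'Id': 'fetch-latest-revision',
        'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:FetchLatestRevision',
    }],
)
```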

Data Categories & Types – Certain categories of data are not permitted on AWS Data Exchange. For example, your data products may not include information that can be used to identify any person, unless that information is already legally available to the public. See the Publishing Guidelines for details on what categories of data are permitted.

Data Provider Location – Data providers must either be a valid legal entity domiciled in the United States or in a member state of the EU.

Available Now
AWS Data Exchange is available now and you can start using it today. If you own some interesting data and would like to publish it, start here. If you are a developer, browse the product catalog and look for data that will add value to your product.

Jeff;

 

 

15 Years of AWS Blogging!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/15-years-of-aws-blogging/

I wrote the first post (Welcome) to this blog exactly 15 years ago today. It is safe to say that I never thought that writing those introductory sentences would lead my career in such a new and ever-challenging direction. This seems like as good a time as any to document and share the story of how the blog came to be, share some of my favorite posts, and to talk about the actual mechanics of writing and blogging.

Before the Beginning
Back in 1999 or so, I was part of the Visual Basic team at Microsoft. XML was brand new, and Dave Winer was just starting to talk about RSS. The intersection of VB6, XML, and RSS intrigued me, and I built a little app called Headline Viewer as a side project. I put it up for download, people liked it, and content owners started to send me their RSS feeds for inclusion. The list of feeds took on a life of its own, and people wanted it just as much as they wanted the app. I also started my third personal blog around this time after losing the earlier incarnations in server meltdowns.

With encouragement from Aaron Swartz and others, I put Headline Viewer aside and started Syndic8 in late 2001 to collect, organize, and share them. I wrote nearly 90,000 lines of PHP in my personal time, all centered around a very complex MySQL database that included over 50 tables. I learned a lot about hosting, scaling, security, and database management. The site also had an XML-RPC web service interface that supported a very wide range of query and update operations. The feed collection grew to nearly 250,000 over the first couple of years.

I did not know it at the time, but my early experience with XML, RSS, blogging, and web services would turn out to be the skills that set me apart when I applied to work at Amazon. Sometimes, as it turns out, your hobbies and personal interests can end up as career-changing assets & differentiators.

E-Commerce Web Services
In parallel to all of this, I left Microsoft in 2000 and was consulting in the then-new field of web services. At that time, most of the web services in use were nothing more than cute demos: stock quotes, weather forecasts, and currency conversions. Technologists could marvel at a function call that crossed the Internet and back, but investors simply shrugged and moved on.

In mid-2002 I became aware of Amazon’s very first web service (now known as the Product Advertising API). This was, in my eyes, the first useful web service. It did something non-trivial that could not have been done locally, and provided value to both the provider and the consumer. I downloaded the SDK (copies were later made available on the mini-CD shown at right), sent the developers some feedback, and before I knew it I was at Amazon HQ, along with 4 or 5 other early fans of the service, for a day-long special event. Several teams shared their plans with us, and asked for our unvarnished feedback.

At some point during the day, one of the presenters said “We launched our first service, developers found it, and were building & sharing apps within 24 hours or so. We are going to look around the company and see if we can put web service interfaces on other parts of our business.”

This was my light-bulb moment — Amazon.com was going to become accessible to developers! I turned to Sarah Bryar (she had extended the invite to the event) and told her that I wanted to be a part of this. She said that they could make that happen, and a few weeks later (summer of 2002), I was a development manager on the Amazon Associates team, reporting to Larry Hughes. In addition to running a team that produced daily reports for each member of the Associates program, Larry gave me the freedom to “help out” with the nascent web services effort. I wrote sample programs, helped out on the forums, and even contributed to the code base. I went through the usual Amazon interview loop, and had to write some string-handling code on the white board.

Web Services Evangelist
A couple of months in to the job, Sarah and Rob Frederick approached me and asked me to speak at a conference because no one else wanted to. I was more than happy to do this, and a few months later Sarah offered me the position of Web Services Evangelist. This was a great match for my skills and I took to it right away, booking events with any developer, company, school, or event that wanted to hear from me!

Later in 2003 I was part of a brainstorming session at Jeff Bezos’ house. Jeff, Andy Jassy, Al Vermeulen, me, and a few others (I should have kept better notes) spent a day coming up with a long list of ideas that evolved into EC2, S3, RDS, and so forth. I am fairly sure that this is the session discussed in How AWS Came to Be, but I am not 100% certain.

Using this list as a starting point, Andy started to write a narrative to define the AWS business. I was fortunate enough to have an office just 2 doors up the hall from him, and spent a lot of time reviewing and commenting on his narrative (read How Jeff Bezos Turned Narrative into Amazon’s Competitive Advantage to learn how we use narratives to define businesses and drive decisions). I also wrote some docs of my own that defined our plans for a developer relations team.

We Need a Blog
As I read through early drafts of Andy’s first narrative, I began to get a sense that we were going to build something complex & substantial.

My developer relations plan included a blog, and I spent a ton of time discussing the specifics in meetings with Andy and Drew Herdener. I remember that it was very hard for me to define precisely what this blog would look like, and how it would work from a content-generation and approval perspective. As is the Amazon way, every answer that I supplied basically begat even more questions from Andy and Drew! We ultimately settled on a few ground rules regarding tone and review, and I was raring to go.

I was lucky enough to be asked to accompany Jeff Bezos to the second Foo Camp as his technical advisor. Among many others, I met Ben and Mena Trott of Six Apart, and they gave me a coupon for 1000 free days of access to TypePad, their blogging tool.

We Have a Blog
Armed with that coupon, I returned to Seattle, created the AWS Blog (later renamed the AWS News Blog), and wrote the first two posts (Welcome and Browse Node API) later that year. Little did I know that those first couple of posts would change the course of my career!

I struggled a bit with “voice” in the early days, and could not decide if I was writing as the company, the group, the service, or simply as me. After some experimentation, I found that a personal, first-person style worked best and that’s what I settled on.

In the early days, we did not have much of a process or a blog team. Interesting topics found their way into my inbox, and I simply wrote about them as I saw fit. I had an incredible amount of freedom to pick and choose topics and words, and I did my best to be a strong, accurate communicator while steering clear of controversies that would simply cause more work for my colleagues in Amazon PR.

Launching AWS
Andy started building teams and I began to get ready for the first launches. We could have started with a dramatic flourish, proclaiming that we were about to change the world with the introduction of a broad lineup of cloud services. But we don’t work that way, and are happy to communicate in a factual, step-by-step fashion. It was definitely somewhat disconcerting to see that Business Week characterized our early efforts as Jeff Bezos’ Risky Bet, but we accept that our early efforts can sometimes be underappreciated or even misunderstood.

Here are some of the posts that I wrote for the earliest AWS services and features:

SQS – I somehow neglected to write about the first beta of Amazon Simple Queue Service (SQS), and the first mention is in a post called Queue Scratchpad. This post references AWS Zone, a site built by long-time Amazonian Elena Dykhno before she even joined the company! I did manage to write a post for Simple Queue Service Beta 2. At this point I am sure that many people wondered why their bookstore was trying to sell message queues, but we didn’t see the need to over-explain ourselves or to telegraph our plans.

S3 – I wrote my first Amazon S3 post while running to catch a plane, but I did manage to cover all of the basics: a service overview, definitions of major terms, pricing, and an invitation for developers to create cool applications!

EC2 – EC2 had been “just about to launch” for quite some time, and I knew that the launch would be a big deal. I had already teased the topic of scalable on-demand web services in Sometimes You Need Just a Little…, and I was ever so ready to actually write about EC2. Of course, our long-scheduled family vacation was set to coincide with the launch, and I wrote part of the Amazon EC2 Beta post while sitting poolside in Cabo San Lucas, Mexico! That post was just about perfect, but I probably should have been clear that “AMI” should be pronounced, and not spelled out, as some pundits claim.

EBS – Initially, all of the storage on EC2 instances was ephemeral, and would be lost when the instance was shut down. I think it is safe to say that the launch of EBS (Amazon EBS (Elastic Block Store) – Bring Us Your Data) greatly simplified the use of EC2.

These are just a few of my early posts, but they definitely laid the foundation for what has followed. I still take great delight in reading those posts, thinking back to the early days of the cloud.

AWS Blogging Today
Over the years, the fraction of my time that is allocated to blogging has grown, and now stands at about 80%. This leaves me with time to do a little bit of public speaking, meet with customers, and to do what I can to keep up with this amazing and ever-growing field. I thoroughly enjoy the opportunities that I have to work with the AWS service teams that work so hard to listen to our customers and do their best to respond with services that meet their needs.

We now have a strong team and an equally strong production process for new blog posts. Teams request a post by creating a ticket, attaching their PRFAQ (Press Release + FAQ, another type of Amazon document) and giving the bloggers early internal access to their service. We review the materials, ask hard questions, use the service, and draft our post. We share the drafts internally, read and respond to feedback, and eagerly await the go-ahead to publish.

Planning and Writing a Post
With 3100 posts under my belt (and more on the way), here is what I focus on when planning and writing a post:

Learn & Be Curious – This is an Amazon Leadership Principle. Writing is easy once I understand what I want to say. I study each PRFAQ, ask hard questions, and am never afraid to admit that I don’t grok some seemingly obvious point. Time after time I am seemingly at the absolute limit of what I can understand and absorb, but that never stops me from trying.

Accuracy – I never shade the truth, and I never use weasel words that could be interpreted in more than one way to give myself an out. The Internet is the ultimate fact-checking vehicle, and I don’t want to be wrong. If I am, I am more than happy to admit it, and to fix the issue.

Readability – I have plenty of words in my vocabulary, but I don’t feel the need to use all of them. I would rather use the most appropriate word than the longest and most obscure one. I am also cautious with acronyms and enterprise jargon, and try hard to keep my terabytes and tebibytes (ugh) straight.

Frugality – This is also an Amazon Leadership Principle, and I use it in an interesting way. I know that you are busy, and that you don’t need extra words or flowery language. So I try hard (this post notwithstanding) to keep most of my posts at 700 to 800 words. I’d rather you spend the time using the service and doing something useful.

Some Personal Thoughts
Before I wrap up, I have a couple of reflections on this incredible journey…

Writing – Although I love to write, I was definitely not a natural-born writer. In fact, my high school English teacher gave me the lowest possible passing grade and told me that my future would be better if I could only write better. I stopped trying to grasp formal English, and instead started to observe how genuine writers used words & punctuation. That (and decades of practice) made all the difference.

Career Paths – Blogging and evangelism have turned out to be a great match for my skills and interests, but I did not figure this out until I was on the far side of 40. It is perfectly OK to be 20-something, 30-something, or even 40-something before you finally figure out who you are and what you like to do. Keep that in mind, and stay open and flexible to new avenues and new opportunities throughout your career.

Special Thanks – Over the years I have received tons of good advice and 100% support from many great managers while I slowly grew into a full-time blogger: Andy Jassy, Prashant Sridharan, Steve Rabuchin, and Ariel Kelman. I truly appreciate the freedom that they have given me to develop my authorial voice and my blogging skills over the years! Ana Visneski and Robin Park have done incredible work to build a blogging team that supports me and the other bloggers.

Thanks for Reading
And with that, I would like to thank you, dear reader, for your time, attention, and very kind words over the past 15 years. It has been the privilege of a lifetime to be able to share so much interesting technology with you!

Jeff;

 

New – Savings Plans for AWS Compute Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-savings-plans-for-aws-compute-services/

I first wrote about EC2 Reserved Instances a decade ago! Since I wrote that post, our customers have saved billions of dollars by using Reserved Instances to commit to usage of a specific instance type and operating system within an AWS region.

Over the years we have enhanced the Reserved Instance model to make it easier for you to take advantage of the RI discount. This includes:

Regional Benefit – This enhancement gave you the ability to apply RIs across all Availability Zones in a region.

Convertible RIs – This enhancement allowed you to change the operating system or instance type at any time.

Instance Size Flexibility – This enhancement allowed your Regional RIs to apply to any instance size within a particular instance family.

The model, as it stands today, gives you discounts of up to 72%, but it does require you to coordinate your RI purchases and exchanges in order to ensure that you have an optimal mix that covers usage that might change over time.

New Savings Plans
Today we are launching Savings Plans, a new and flexible discount model that provides you with the same discounts as Reserved Instances, in exchange for a commitment to use a specific amount (measured in dollars per hour) of compute power over a one- or three-year period.

Every type of compute usage has an On Demand price and a (lower) Savings Plan price. After you commit to a specific amount of compute usage per hour, all usage up to that amount will be covered by the Savings Plan, and anything past it will be billed at the On Demand rate.

If you own Reserved Instances, the Savings Plan applies to any On Demand usage that is not covered by the RIs. We will continue to sell RIs, but Savings Plans are more flexible and I think many of you will prefer them!

Savings Plans are available in two flavors:

Compute Savings Plans provide the most flexibility and help to reduce your costs by up to 66% (just like Convertible RIs). The plans automatically apply to any EC2 instance regardless of region, instance family, operating system, or tenancy, including those that are part of EMR, ECS, or EKS clusters, or launched by Fargate. For example, you can shift from C4 to C5 instances, move a workload from Dublin to London, or migrate from EC2 to Fargate, benefiting from Savings Plan prices along the way, without having to do anything.

EC2 Instance Savings Plans apply to a specific instance family within a region and provide the largest discount (up to 72%, just like Standard RIs). Just like with RIs, your savings plan covers usage of different sizes of the same instance type (such as a c5.4xlarge or c5.large) throughout a region. You can even switch from Windows to Linux while continuing to benefit, without having to make any changes to your savings plan.

Purchasing a Savings Plan
AWS Cost Explorer will help you to choose a Savings Plan, and will guide you through the purchase process. Since my own EC2 usage is fairly low, I used a test account that had more usage. I open AWS Cost Explorer, then click Recommendations within Savings Plans:

I choose my Recommendation options, and review the recommendations:

Cost Explorer recommends that I purchase $2.40 of hourly Savings Plan commitment, and projects that I will save 40% (nearly $1200) per month, in comparison to On-Demand. This recommendation tries to take into account variable usage or temporary usage spikes in order to recommend the steady state capacity for which we believe you should consider a Savings Plan. In my case, the variable usage averages out to $0.04 per hour that we’re recommending I keep as On-Demand.

I can see the recommended Savings Plans at the bottom of the page, select those that I want to purchase, and Add them to my cart:

When I am ready to proceed, I click View cart, review my purchases, and click Submit order to finalize them:

My Savings Plans become active right away. I can use the Cost Explorer’s Performance & Coverage reports to review my actual savings, and to verify that I own sufficient Savings Plans to deliver the desired amount of coverage.
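The recommendations that Cost Explorer showed me above can also be retrieved programmatically. Here is a hedged boto3 sketch; the exact response field names should be verified against the Cost Explorer API reference:

```python
import boto3

# Cost Explorer is served from us-east-1
ce = boto3.client('ce', region_name='us-east-1')

resp = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType='COMPUTE_SP',        # or 'EC2_INSTANCE_SP'
    TermInYears='ONE_YEAR',
    PaymentOption='NO_UPFRONT',
    LookbackPeriodInDays='THIRTY_DAYS',
)

summary = resp['SavingsPlansPurchaseRecommendation']['SavingsPlansPurchaseRecommendationSummary']
print('Recommended hourly commitment:', summary['HourlyCommitmentToPurchase'])
print('Estimated monthly savings:    ', summary['EstimatedMonthlySavingsAmount'])
```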

Available Now
As you can see, Savings Plans are easy to use! You can access compute power at discounts of up to 72%, while gaining the flexibility to change compute services, instance types, operating systems, regions, and so forth.

Savings Plans are available in all AWS regions outside of China, and you can start to purchase (and benefit) from them today!

Jeff;

 

In the Works – AWS Region in Spain

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-spain/

We opened AWS Regions in Sweden, Hong Kong, and Bahrain in the span of less than a year, and are currently working on regions in Jakarta (Indonesia), Cape Town (South Africa), and Milan (Italy).

Coming to Spain
Today I am happy to announce that the AWS Europe (Spain) Region is in the works, and will open in late 2022 or early 2023 with three Availability Zones. This will be our seventh region in Europe, joining existing regions in Dublin, Frankfurt, London, Paris, Stockholm, and the upcoming Milan region that will open in early 2020 (check out the AWS Global Infrastructure page to learn more).

AWS customers are already making use of 69 Availability Zones across 22 regions worldwide. Today’s announcement brings the total number of global regions (operational and in the works) up to 26.

I was in Spain just last month, and was able to meet with developers in Madrid and Barcelona. Their applications were impressive and varied: retail management, entertainment, analytics for online advertising, investment recommendations, social scoring, and more.

Several of the companies were born-in-the-cloud startups; all made heavy use of the entire line of AWS database services (Amazon Redshift was mentioned frequently), along with AWS Lambda and AWS CloudFormation. Some were building for the domestic market and others for the global market, but I am confident that they will all be able to benefit from this new region.

We launched AWS Activate in Spain in 2013, giving startups access to guidance and one-on-one time with AWS experts, along with web-based training, self-paced labs, customer support, offers from third-parties, and up to $100,000 in AWS service credits. We also work with the VC community (Caixa Risk Capital and KFund), and several startup accelerators (Seedrocket and Wayra).

AWS in Spain
This upcoming region is the latest in a long series of investments that we have made in the Iberian Peninsula. We opened an edge location in Madrid in 2012, and an office in the same city in 2014. We added our first Direct Connect location in 2016, and another one in 2017, all to support the rapid growth of AWS in the area. We now have two edge locations in Madrid, and an office in Barcelona as well.

In addition to our support for startups through AWS Activate, we provide training via AWS Academy and AWS Educate. Both of these programs are designed to build knowledge and skills in cloud computing, and are available in Spanish. Today, hundreds of universities and business schools in Spain are making great use of these programs.

The AWS office in Madrid (which I visited on my recent trip) is fully staffed with account managers, business development managers, customer service representatives, partner managers, professional services consultants, solutions architects, and technical account managers. I had the opportunity to participate in an internal fireside chat with the team, and I can tell you that (like every Amazonian) they are 100% customer-obsessed, and ready to help you to succeed in any possible way.

Jeff;

PS – If you would like to join our team in Spain, check out our open positions in Madrid and Barcelona.

200 Amazon CloudFront Points of Presence + Price Reduction

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/200-amazon-cloudfront-points-of-presence-price-reduction/

Less than two years ago I announced the 100th Point of Presence for Amazon CloudFront.

The overall Point of Presence footprint is now growing at 50% per year. Since we launched the 100th PoP in 2017, we have expanded to 77 cities in 34 countries including China, Israel, Denmark, Norway, South Africa, UAE, Bahrain, Portugal, and Belgium.

CloudFront has been used to deliver many high-visibility live-streaming events including Super Bowl LIII, Thursday Night Football (via Prime Video), the Royal Wedding, the Winter Olympics, the Commonwealth Games, a multitude of soccer games (including the 2019 FIFA World Cup), and much more.

Whether used alone or in conjunction with other AWS services, CloudFront is a great way to deliver content, with plenty of options that also help to secure the content and to protect the underlying source. For example:

DDoS Protection – Amazon CloudFront customers were automatically protected against 84,289 Distributed Denial of Service (DDoS) attacks in 2018, including a 1.4 Tbps memcached reflection attack.

Attack Mitigation – CloudFront customers used AWS Shield Advanced and AWS WAF to mitigate application-layer attacks, including a flood of over 20 million requests per second.

Certificate Management – We announced CloudFront Integration with AWS Certificate Manager in 2016, and use of custom certificates has grown by 600%.

New Locations in South America
Today I am happy to announce that our global network continues to grow, and now includes 200 Points of Presence, including new locations in Argentina (198), Chile (199), and Colombia (200):

AWS customer NED is based in Chile. They are using CloudFront to deliver server-side ad injection and low-latency content distribution to their clients, and are also using Lambda@Edge to implement robust anti-piracy protection.

Price Reduction
We are also reducing the pricing for on-demand data transfer from CloudFront by 56% for all Points of Presence in South America, effective November 1, 2019. Check out the CloudFront Pricing page to learn more.

CloudFront Resources
Here are some resources to help you to learn how to make great use of CloudFront in your organization:

Jeff;

 

New – Amazon CloudWatch Anomaly Detection

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-amazon-cloudwatch-anomaly-detection/

Amazon CloudWatch launched in early 2009 as part of our desire to (as I said at the time) “make it even easier for you to build sophisticated, scalable, and robust web applications using AWS.” We have continued to expand CloudWatch over the years, and our customers now use it to monitor their infrastructure, systems, applications, and even business metrics. They build custom dashboards, set alarms, and count on CloudWatch to alert them to issues that affect the performance or reliability of their applications.

If you have used CloudWatch Alarms, you know that there’s a bit of an art to setting your alarm thresholds. You want to make sure to catch trouble early, but you don’t want to trigger false alarms. You need to deal with growth and with scale, and you also need to make sure that you adjust and recalibrate your thresholds to deal with cyclic and seasonal behavior.

Anomaly Detection
Today we are enhancing CloudWatch with a new feature that will help you to make more effective use of CloudWatch Alarms. Powered by machine learning and building on over a decade of experience, CloudWatch Anomaly Detection has its roots in over 12,000 internal models. It will help you to avoid manual configuration and experimentation, and can be used in conjunction with any standard or custom CloudWatch metric that has a discernible trend or pattern.

Anomaly Detection analyzes the historical values for the chosen metric, and looks for predictable patterns that repeat hourly, daily, or weekly. It then creates a best-fit model that will help you to better predict the future, and to more cleanly differentiate normal and problematic behavior. You can adjust and fine-tune the model as desired, and you can even use multiple models for the same CloudWatch metric.

Using Anomaly Detection
I can create my own models in a matter of seconds! I have an EC2 instance that generates a spike in CPU Utilization every 24 hours:

I select the metric, and click the “wave” icon to enable anomaly detection for this metric and statistic:

This creates a model with default settings. If I select the model and zoom in to see one of the utilization spikes, I can see that the spike is reflected in the prediction bands:

I can use this model as-is to drive alarms on the metric, or I can select the model and click Edit model to customize it:

I can exclude specific time ranges (past or future) from the data that is used to train the model; this is a good idea if the data reflects a one-time event that will not happen again. I can also specify the timezone of the data; this lets me handle metrics that are sensitive to changes in daylight saving time:

After I have set this up, the anomaly detection model goes into effect and I can use it to create alarms as usual. I choose Anomaly detection as my Threshold type, and use the Anomaly detection threshold to control the thickness of the band. I can raise the alarm when the metric is outside of, greater than, or lower than the band:

The remaining steps are identical to the ones that you already use to create other types of alarms.

Things to Know
Here are a few interesting things to keep in mind when you are getting ready to use this new CloudWatch feature:

Suitable Metrics – Anomaly Detection works best when the metrics have a discernible pattern or trend, and when there is a minimal number of missing data points.

Updates – Once the model has been created, it will be updated every five minutes with any new metric data.

One-Time Events – The model cannot predict one-time events such as Black Friday or the holiday shopping season.

API / CLI / CloudFormation – You can create and manage anomaly models from the Console, the CloudWatch API (PutAnomalyDetector) and the CloudWatch CLI. You can also create AWS::CloudWatch::AnomalyDetector resources in your AWS CloudFormation templates.
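To give you a feel for the API path, here is a minimal boto3 sketch that creates an anomaly detection model and an alarm that uses the band as its threshold. The instance ID is a placeholder, and you should confirm the parameter details against the PutAnomalyDetector and PutMetricAlarm documentation:

```python
import boto3

cw = boto3.client('cloudwatch', region_name='us-east-1')
dimensions = [{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}]

# Create (or update) the anomaly detection model for the metric + statistic
cw.put_anomaly_detector(
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=dimensions,
    Stat='Average',
)

# Create an alarm that fires when the metric leaves the anomaly detection band;
# the second argument to ANOMALY_DETECTION_BAND controls the width of the band
cw.put_metric_alarm(
    AlarmName='cpu-outside-anomaly-band',
    ComparisonOperator='LessThanLowerOrGreaterThanUpperThreshold',
    EvaluationPeriods=3,
    ThresholdMetricId='band',
    Metrics=[
        {'Id': 'band', 'Expression': 'ANOMALY_DETECTION_BAND(cpu, 2)', 'ReturnData': True},
        {
            'Id': 'cpu',
            'ReturnData': True,
            'MetricStat': {
                'Metric': {
                    'Namespace': 'AWS/EC2',
                    'MetricName': 'CPUUtilization',
                    'Dimensions': dimensions,
                },
                'Period': 300,
                'Stat': 'Average',
            },
        },
    ],
)
```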

Now Available
You can start creating and using CloudWatch Anomaly Detection today in all commercial AWS regions. To learn more, read about CloudWatch Anomaly Detection in the CloudWatch Documentation.

Jeff;

 

Now Available – Amazon Relational Database Service (RDS) on VMware

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-amazon-relational-database-service-rds-on-vmware/

Last year I told you that we were working to give you Amazon RDS on VMware, with the goal of bringing many of the benefits of Amazon Relational Database Service (RDS) to your on-premises virtualized environments. These benefits include the ability to provision new on-premises databases in minutes, make backups, and restore to a point in time. You get automated management of your on-premises databases, without having to provision and manage the database engine.

Now Available
Today, I am happy to report that Amazon RDS on VMware is available for production use, and that you can start using it today. We are launching with support for Microsoft SQL Server, PostgreSQL, and MySQL.

Here are some important prerequisites:

Compatibility – RDS on VMware works with vSphere clusters that run version 6.5 or later.

Connectivity – Your vSphere cluster must have outbound connectivity to the Internet, and must be able to make HTTPS connections to the public AWS endpoints.

Permissions – You will need to have Administrative privileges (and the skills to match) on the cluster in order to set up RDS on VMware. You will also need to have (or create) a second set of credentials for use by RDS on VMware.

Hardware – The hardware that you use to host RDS on VMware must be listed in the relevant VMware Hardware Compatibility Guide.

Resources – Each cluster must have at least 24 vCPUs, 24 GiB of memory, and 180 GB of storage for the on-premises management components of RDS on VMware, along with additional resources to support the on-premises database instances that you launch.

Setting up Amazon RDS on VMware
Due to the nature of this service, the setup process is more involved than usual and I am not going to walk through it at my usual level of detail. Instead, I am going to outline the process and refer you to the Amazon RDS on VMware User Guide for more information. During the setup process, you will be asked to supply details of your vCenter/ESXi configuration. For best results, I advise a dry-run through the User Guide so that you can find and organize all of the necessary information.

Here are the principal steps, assuming that you already have a running vSphere data center:

Prepare Environment – Check vSphere version, confirm storage device & free space, provision resource pool.

Configure Cluster Control Network – Create a network for control traffic and monitoring. Must be a vSphere distributed port group with 128 to 1022 ports.

Configure Application Network – This is the network that applications, users, and DBAs will use to interact with the RDS on VMware DB instances. It must be a vSphere distributed port group with 128 to 1022 ports, and it must span all of the ESXi hosts that underlie the cluster. The network must have an IPv4 subnet large enough to accommodate all of the instances that you expect to launch. In many cases your cluster will already have an Application Network.

Configure Management Network – Configure your ESXi hosts to add a route to the Edge Router (part of RDS on VMware) in the Cluster Control Network.

Configure vCenter Credentials – Create a set of credentials for use during the onboarding process.

Configure Outbound Internet Access – Confirm that outbound connections can be made from the Edge Router in your virtual data center to AWS services.

With the preparatory work out of the way, the next step is to bring the cluster onboard by creating a custom (on-premises) Availability Zone and using the installer to install the product. I open the RDS Console, choose the US East (N. Virginia) Region, and click Custom availability zones:

I can see my existing custom AZs and their status. I click Create custom AZ to proceed:

I enter a name for my AZ and for the VPN tunnel between the selected AWS region and my vSphere data center, and then I enter the IP address of the VPN. Then I click Create custom AZ:

My new AZ is visible, in status Unregistered:

To register my vSphere cluster as a Custom AZ, I click Download Installer from the AWS Console to download the RDS on VMware installer. I deploy the installer in my cluster and follow through the guided wizard to fill in the network configurations, AWS credentials, and so forth, then start the installation. After the installation is complete, the status of my custom AZ will change to Active. Behind the scenes, the installer automatically deploys the on-premises components of RDS on VMware and connects the vSphere cluster to the AWS region.

Some of the database engines require me to bring my own media and an on-premises license. I can import the installation media that I have in my data center onto RDS and use it to launch the database engine. For example, here’s my media image for SQL Server Enterprise Edition:

The steps above must be done on a cluster-by-cluster basis. Once a cluster has been set up, multiple Database instances can be launched, based on available compute, storage, and network (IP address) resources.

Using Amazon RDS for VMware
With all of the setup work complete, I can use the same interfaces (RDS Console, RDS CLI, or the RDS APIs) to launch and manage Database instances in the cloud and on my on-premises network.

I’ll use the RDS Console, and click Create database to get started. I choose On-premises and pick my custom AZ, then choose a database engine:

I enter a name for my instance, another name for the master user, and enter (or let RDS assign) a password:

Then I pick the DB instance class (the v11 in the names refers to version 11 of the VMware virtual hardware definition) and click Create database:

Here’s a more detailed look at some of the database instance sizes. As is the case with cloud-based instance sizes, the “c” instances are compute-intensive, the “r” instances are memory-intensive, and the “m” instances are general-purpose:

The status of my new database instance starts out as Creating, and progresses through Backing-up and then to Available:

Once it is ready, the endpoint is available in the console:

On-premises applications can use this endpoint to connect to the database instance across the Application Network.
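Because the same RDS interfaces are in play, the launch that I just walked through can also be scripted. Here is a hedged sketch that assumes the custom AZ name is passed as AvailabilityZone and that you substitute a supported on-premises DB instance class; all identifiers are placeholders, so check the RDS on VMware User Guide for the exact parameters:

```python
import boto3

rds = boto3.client('rds', region_name='us-east-1')

rds.create_db_instance(
    DBInstanceIdentifier='onprem-mysql-1',
    Engine='mysql',
    DBInstanceClass='REPLACE_WITH_ONPREM_INSTANCE_CLASS',   # pick a class shown in the console
    AllocatedStorage=100,
    MasterUsername='admin',
    MasterUserPassword='REPLACE_WITH_PASSWORD',
    AvailabilityZone='REPLACE_WITH_CUSTOM_AZ_NAME',         # the custom (on-premises) AZ
)
```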

Before I wrap up, let’s take a look at a few other powerful features of RDS on VMware: Snapshot backups, point-in-time restores, and the power to change the DB instance class.

Snapshot backups are a useful companion to the automated backups taken daily by RDS on VMware. I simply select Take snapshot from the Action menu:

To learn more, read Creating a DB Snapshot.

Point in time recovery allows me to create a fresh on-premises DB instance based on the state of an existing one at an earlier point in time. To learn more, read Restoring a DB Instance to a Specified Time.

I can change the DB instance class in order to scale up or down in response to changing requirements. I select Modify from the Action menu, choose the new class, and click Submit:

The modification will be made during the maintenance window for the DB instance.
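Here is a minimal sketch of these three operations using the standard RDS APIs (which RDS on VMware shares); the identifiers, timestamp, and target instance class are placeholders:

```python
from datetime import datetime, timezone

import boto3

rds = boto3.client('rds', region_name='us-east-1')

# On-demand snapshot, a companion to the daily automated backups
rds.create_db_snapshot(
    DBInstanceIdentifier='onprem-mysql-1',
    DBSnapshotIdentifier='onprem-mysql-1-manual-snap',
)

# Create a fresh DB instance from the state of an existing one at an earlier time
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier='onprem-mysql-1',
    TargetDBInstanceIdentifier='onprem-mysql-1-restored',
    RestoreTime=datetime(2019, 10, 1, 12, 0, tzinfo=timezone.utc),
)

# Scale up or down; the change is applied during the instance's maintenance window
rds.modify_db_instance(
    DBInstanceIdentifier='onprem-mysql-1',
    DBInstanceClass='REPLACE_WITH_LARGER_INSTANCE_CLASS',
)
```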

A few other features that I did not have the space to cover include renaming an existing DB instance (very handy for disaster recovery), and rebooting a DB instance.

Available Now
Amazon RDS on VMware is available now and you can start using it today in the US East (N. Virginia) Region.

Jeff;

 

Migration Complete – Amazon’s Consumer Business Just Turned off its Final Oracle Database

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/migration-complete-amazons-consumer-business-just-turned-off-its-final-oracle-database/

Over my 17 years at Amazon, I have seen that my colleagues on the engineering team are never content to leave good-enough alone. They routinely re-evaluate every internal system to make sure that it is as scalable, efficient, performant, and secure as possible. When they find an avenue for improvement, they will use what they have learned to thoroughly modernize our architectures and implementations, often going so far as to rip apart existing systems and rebuild them from the ground up if necessary.

Today I would like to tell you about an internal database migration effort of this type that just wrapped up after several years of work. Over the years we realized that we were spending too much time managing and scaling thousands of legacy Oracle databases. Instead of focusing on high-value differentiated work, our database administrators (DBAs) spent a lot of time simply keeping the lights on while transaction rates climbed and the overall amount of stored data mounted. This included time spent dealing with complex & inefficient hardware provisioning, license management, and many other issues that are now best handled by modern, managed database services.

More than 100 teams in Amazon’s Consumer business participated in the migration effort. This includes well-known customer-facing brands and sites such as Alexa, Amazon Prime, Amazon Prime Video, Amazon Fresh, Kindle, Amazon Music, Audible, Shopbop, Twitch, and Zappos, as well as internal teams such as AdTech, Amazon Fulfillment Technology, Consumer Payments, Customer Returns, Catalog Systems, Delivery Experience, Digital Devices, External Payments, Finance, InfoSec, Marketplace, Ordering, and Retail Systems.

Migration Complete
I am happy to report that this database migration effort is now complete. Amazon’s Consumer business just turned off its final Oracle database (some third-party applications are tightly bound to Oracle and were not migrated).

We migrated 75 petabytes of internal data stored in nearly 7,500 Oracle databases to multiple AWS database services including Amazon DynamoDB, Amazon Aurora, Amazon Relational Database Service (RDS), and Amazon Redshift. The migrations were accomplished with little or no downtime, and covered 100% of our proprietary systems. This includes complex purchasing, catalog management, order fulfillment, accounting, and video streaming workloads. We kept careful track of the costs and the performance, and realized the following results:

  • Cost Reduction – We reduced our database costs by over 60% on top of the heavily discounted rate we negotiated based on our scale. Customers regularly report cost savings of 90% by switching from Oracle to AWS.
  • Performance Improvements – Latency of our consumer-facing applications was reduced by 40%.
  • Administrative Overhead – The switch to managed services reduced database admin overhead by 70%.

The migration gave each internal team the freedom to choose the purpose-built AWS database service that best fit their needs, and also gave them better control over their budget and their cost model. Low-latency services were migrated to DynamoDB and other highly scalable non-relational databases such as Amazon ElastiCache. Transactional relational workloads with high data consistency requirements were moved to Aurora and RDS; analytics workloads were migrated to Redshift, our cloud data warehouse.

We captured the shutdown of the final Oracle database, and had a quick celebration:

DBA Career Path
As I explained earlier, our DBAs once spent a lot of time managing and scaling our legacy Oracle databases. The migration freed up time that our DBAs now use to do an even better job of performance monitoring and query optimization, all with the goal of letting them deliver a better customer experience.

As part of the migration, we also worked to create a fresh career path for our Oracle DBAs, training them to become database migration specialists and advisors. This training includes education on AWS database technologies, cloud-based architecture, cloud security, and OpEx-style cost management. They now work with both internal and external customers in an advisory role, where they have an opportunity to share their first-hand experience with large-scale migration of mission-critical databases.

Migration Examples
Here are examples drawn from a few of the migrations:

Advertising – After the migration, this team was able to double their database fleet size (and their throughput) in minutes to accommodate peak traffic, courtesy of RDS. This scale-up effort would have taken months.

Buyer Fraud – This team moved 40 TB of data with just one hour of downtime, and realized the same or better performance at half the cost, powered by Amazon Aurora.

Financial Ledger – This team moved 120 TB of data, reduced latency by 40%, cut costs by 70%, and cut overhead by the same 70%, all powered by DynamoDB.

Wallet – This team migrated more than 10 billion records to DynamoDB, reducing latency by 50% and operational costs by 90% in the process. To learn more about this migration, read Amazon Wallet Scales Using Amazon DynamoDB.

My recent Prime Day 2019 post contains more examples of the extreme scale and performance that are possible with AWS.

Migration Resources
If you are ready to migrate from Oracle (or another hand-managed legacy database) to one or more AWS database services, here are some resources to get you started:

AWS Migration Partners – Our slate of AWS Migration Partners have the experience, expertise, and tools to help you to understand, plan, and execute a database migration.

Migration Case Studies – Read How Amazon is Achieving Database Freedom Using AWS to learn more about this effort; read the Prime Video, Advertising, Items & Offers, Amazon Fulfillment, and Analytics case studies to learn more about the examples that I mentioned above.

AWS Professional Services – My colleagues at AWS Professional Services are ready to work alongside you to make your migration a success.

AWS Migration Tools & Services – Check out our Cloud Migration page, read more about Migration Hub, and don’t forget about the Database Migration Service.

AWS Database Freedom – The AWS Database Freedom program is designed to help qualified customers migrate from traditional databases to cloud-native AWS databases.

AWS re:Invent Sessions – We are finalizing an extensive lineup of chalk talks and breakout sessions for AWS re:Invent that will focus on this migration effort, all led by the team members that planned and executed the migrations.

Jeff;

 

 

AWS Firewall Manager Update – Support for VPC Security Groups

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-firewall-manager-update-support-for-vpc-security-groups/

I introduced you to AWS Firewall Manager last year, and showed you how you can use it to centrally configure and manage your AWS WAF rules and AWS Shield Advanced protections. AWS Firewall Manager makes use of AWS Organizations, and lets you build policies and apply them across multiple AWS accounts in a consistent manner.

Security Group Support
Today we are making AWS Firewall Manager even more useful, giving you the power to define, manage, and audit organization-wide policies for the use of VPC Security Groups.

You can use the policies to apply security groups to specified accounts and resources, check and manage the rules that are used in security groups, and find and then clean up unused and redundant security groups. You get real-time notification when misconfigured rules are detected, and can take corrective action from within the Firewall Manager Console.

In order to make use of this feature, you need to have an AWS Organization and AWS Config must be enabled for all of the accounts in it. You must also designate an AWS account as the Firewall Administrator. This account has permission to deploy AWS WAF rules, Shield Advanced protections, and security group rules across your organization.

Creating and Using Policies
After logging in to my organization’s root account, I open the Firewall Manager Console, and click Go to AWS Firewall Manager:

Then I click Security Policies in the AWS FMS section to get started. The console displays my existing policies (if any); I click Create policy to move ahead:

I select Security group as the Policy type and Common security groups as the Security group policy type, choose the target region, and click Next to proceed (I will examine the other policy types in a minute):

I give my policy a name (OrgDefault), choose a security group (SSH_Only), and opt to protect the group’s rules from changes, then click Next:

Now I define the scope of the policy. As you can see, I can choose the accounts, resource types, and even specifically tagged resources, before clicking Next:

I can also choose to exclude resources that are tagged in a particular way; this can be used to create an organization-wide policy that provides special privileges for a limited group of resources.

I review my policy, confirm that I will have to enable Config and pay the associated charges, and click Create policy:

The policy takes effect immediately, and begins to evaluate compliance within 3-5 minutes. The Firewall Manager Policies page shows an overview:

I can click the policy to learn more:

Policies also have an auto-remediation option. While this can be enabled when the policy is created, our advice is to wait until after the policy has taken effect so that you can see what will happen when you go ahead and enable auto-remediation:

Let’s take a look at the other two security group policy types:

Auditing and enforcement of security group rules – This policy type centers around an audit security group that can be used in one of two ways:

You can use this policy type when you want to establish guardrails that establish limits on the rules that can be created. For example, I could create a policy rule that allows inbound access from a specific set of IP addresses (perhaps a /24 used by my organization), and use it to detect any resource that is more permissive.

Auditing and cleanup of unused and redundant security groups – This policy type looks for security groups that are not being used, or that are redundant:
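For reference, here is a heavily hedged sketch of what creating a common security group policy might look like with the Firewall Manager API; the security group ID is a placeholder, and the ManagedServiceData schema and ResourceType value are assumptions that you should verify against the AWS FMS API reference:

```python
import json

import boto3

# Run from the designated Firewall Administrator account
fms = boto3.client('fms', region_name='us-east-1')

fms.put_policy(
    Policy={
        'PolicyName': 'OrgDefault',
        'SecurityServicePolicyData': {
            'Type': 'SECURITY_GROUPS_COMMON',
            # The ManagedServiceData schema below is an assumption; verify it in the FMS docs
            'ManagedServiceData': json.dumps({
                'type': 'SECURITY_GROUPS_COMMON',
                'revertManualSecurityGroupChanges': True,             # protect the group's rules
                'securityGroups': [{'id': 'sg-0123456789abcdef0'}],   # e.g. SSH_Only
            }),
        },
        'ResourceType': 'AWS::EC2::Instance',   # assumed resource type for a common SG policy
        'ExcludeResourceTags': False,
        'RemediationEnabled': False,            # enable auto-remediation only after reviewing the effect
    },
)
```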

Available Now
You can start to use this feature today in the US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Seoul) Regions. You will be charged $100 per policy per month.

Jeff;

EC2 High Memory Update – New 18 TB and 24 TB Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-high-memory-update-new-18-tb-and-24-tb-instances/

Last year we launched EC2 High Memory Instances with 6, 9, and 12 TiB of memory. Our customers use these instances to run large-scale SAP HANA installations, while also taking advantage of AWS services such as Amazon Elastic Block Store (EBS), Amazon Simple Storage Service (S3), AWS Identity and Access Management (IAM), Amazon CloudWatch, and AWS Config. Customers appreciate that these instances use the same AMIs and management tools as their other EC2 instances, and use them to build production systems that provide enterprise-grade data protection and business continuity.

These are bare metal instances that can be run in a Virtual Private Cloud (VPC), and are EBS-Optimized by default.

Today we are launching instances with 18 TiB and 24 TiB of memory. These are 8-socket instances powered by 2nd generation Intel® Xeon® Scalable (Cascade Lake) processors running at 2.7 GHz, and are available today in the US East (N. Virginia) Region, with more to come. Just like the existing 6, 9, and 12 TiB bare metal instances, the 18 and 24 TiB instances are available in Dedicated Host form with a Three Year Reservation. You also have the option to upgrade a reservation for a smaller size to one of the new sizes.

Here are the specs:

Instance Name     Memory   Logical Processors   Dedicated EBS Bandwidth   Network Bandwidth   SAP Workload Certifications
u-6tb1.metal      6 TiB    448                  14 Gbps                   25 Gbps             OLAP, OLTP
u-9tb1.metal      9 TiB    448                  14 Gbps                   25 Gbps             OLAP, OLTP
u-12tb1.metal     12 TiB   448                  14 Gbps                   25 Gbps             OLAP, OLTP
u-18tb1.metal     18 TiB   448                  28 Gbps                   100 Gbps            OLAP, OLTP
u-24tb1.metal     24 TiB   448                  28 Gbps                   100 Gbps            OLTP

SAP OLAP workloads include SAP BW/4HANA, BW on HANA (BWoH), and Datamart. SAP OLTP workloads include S/4HANA and Suite on HANA (SoH). Consult the SAP Hardware Directory for more information on the workload certifications.

With 28 Gbps of dedicated EBS bandwidth, the u-18tb1.metal and u-24tb1.metal instances can load data into memory at very high speed. For example, my colleagues loaded 9 TB of data in just 45 minutes, an effective rate of 3.4 gigabytes per second (GBps):

Here’s an overview of the scale-up and scale-out options that are possible when using these new instances to run SAP HANA:

New Instances in Action
My colleagues were kind enough to supply me with some screen shots from 18 TiB and 24 TiB High Memory instances. Here’s the output from the lscpu and free commands on an 18 TiB instance:

Here’s top on the same instance:

And here is HANA Studio on a 24 TiB instance:

Available Now
As I mentioned earlier, the new instance sizes are available today.

Jeff;

PS – Be sure to check out the AWS Quick Start for SAP HANA and the AWS Quick Start for S/4HANA.

AWS IQ – Get Help from AWS Certified Third Party Experts on Demand

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-iq-get-help-from-aws-certified-third-party-experts-on-demand/

We want to make sure that you are able to capture the value of cloud computing by thinking big and building fast! As you embark on your journey to the cloud, we also want to make sure that you have access to the resources that you will need in order to succeed. For example:

AWS Training and Certification – This program helps you and your team to build and validate your cloud skills.

AWS Support – This program gives you access to tools, technology, and people, all designed to help you to optimize performance, lower costs, and innovate faster.

AWS Professional Services – Our global team of experts is ready to work with you (and your chosen APN partner) to help you to achieve your enterprise cloud computing goals.

APN Consulting Partners – This global team of professional service providers is able to help you design, architect, build, migrate, and manage your applications and workloads.

AWS Managed Services (AMS) – This service operates AWS on behalf of our enterprise-scale customers.

Today I would like to tell you about AWS IQ, a new service that will help you to engage with AWS Certified third party experts for project work. While organizations of any size can use and benefit from AWS IQ, I believe that small and medium-sized businesses will find it particularly useful. Regardless of the size of your organization, AWS IQ will let you quickly & securely find, engage, and pay AWS Certified experts for hands-on help. All of the experts have active AWS Associate, Professional, or Specialty Certifications, and are ready & willing to help you.

AWS IQ is integrated with your AWS account and your AWS bill. If you are making use of the services of an expert, AWS IQ lets you grant, monitor, and control access to your AWS Account. You can also pay the expert at the conclusion of each project milestone.

AWS IQ for Customers
I can create a new request in minutes. I visit the AWS IQ Console and click New request to get started:

One important note: The IAMFullAccess and AWSIQFullAccess managed policies must be in force if I am logged in as an IAM user.

Then I describe my request and click Submit Request:

My request is shared with the experts and they are encouraged to reply with proposals. I can monitor their responses from within the console, and I can also indicate that I am no longer accepting new responses:

After one or more responses arrive, I can evaluate the proposals, chat with the experts via text or video, and ultimately decide to Accept the proposal that best meets my needs:

A contract is created between me and the expert, and we are ready to move forward!

The expert then requests permission to access my AWS account, making use of one of nine IAM policies. I review and approve their request, and the expert is supplied with a URL that will allow them to log in to the AWS Management Console using this role:

When the agreed-upon milestones are complete, the expert creates payment requests. I approve them, and work continues until the project is complete.

After the project is complete, I enter public and private feedback for the expert. The public feedback becomes part of the expert’s profile; the private feedback is reviewed in confidence by the AWS IQ team.

AWS IQ for Experts
I can register as an expert by visiting AWS IQ for Experts. I must have one or more active AWS Certifications, I must reside in the United States, and I must have US banking and tax information. After I complete the registration process and have been approved as an expert, I can start to look for relevant requests and reply with questions or an initial expression of interest:

I can click Create to create a proposal:

When a customer accepts a proposal, the status switches to ACCEPTED. Then I click Request Permission to gain IAM-controlled access to their AWS account:

Then I ask for permission to access their AWS account:

After the customer reviews and accepts the request, I click Console access instructions to log in to the customer’s AWS account, with my access governed by the IAM policy that I asked for:

I do the work, and then request payment for a job well done:

I can request full or partial payment. Requesting full payment also concludes the proposal, and immediately disallows further console access to the customer’s AWS account and resources:

Things to Know
Here are a couple of things that you should know about AWS IQ:

Customers – Customers can reside anywhere in the world except China.

Experts – Applications from several hundred would-be experts have already been reviewed and accepted; we’ll continue to add more as quickly as possible. As I noted earlier, experts must reside in the United States.

Project Value – The project value must be $1 or more.

Payment – The customer’s payment is charged to their AWS account at their request, and disbursed monthly to the expert’s account. Customers will be able to see their payments on their AWS bill.

In the Works – We have a long roadmap for this cool new service, but we are eager to get your feedback and will use it to drive our prioritization process. Please take a look at AWS IQ and let us know what you think!

Jeff;

AWS DataSync News – S3 Storage Class Support and Much More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-datasync-news-s3-storage-class-support-and-much-more/

AWS DataSync helps you to move large amounts of data into or out of the AWS Cloud (read my post, New – AWS DataSync – Automated and Accelerated Data Transfer, to learn more). As I explained in that post, DataSync is a great fit for your Migration, Upload & Process, and Backup / DR use cases. DataSync is a managed service, and can be used to do one-time or periodic transfers of any size.

Newest Features
We launched DataSync at AWS re:Invent 2018 and have been adding features to it ever since. Today I would like to give you a brief recap of some of the newest features, and also introduce a few new ones:

  • S3 Storage Class Support
  • SMB Support
  • Additional Regions
  • VPC Endpoint Support
  • FIPS for US Endpoints
  • File and Folder Filtering
  • Embedded CloudWatch Metrics

Let’s take a look at each one…

S3 Storage Class Support
If you are transferring data to an Amazon S3 bucket, you now have control over the storage class that is used for the objects. You simply choose the class when you create a new location for use with DataSync:

You can choose from any of the S3 storage classes:

Objects stored in certain storage classes can incur additional charges for overwriting, deleting, or retrieving. To learn more, read Considerations When Working with S3 Storage Classes in DataSync.
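If you drive DataSync through the API instead of the console, you can set the storage class when you create the S3 location. Here's a minimal boto3 sketch; the bucket name and IAM role ARN are placeholders:

import boto3

datasync = boto3.client('datasync', region_name='us-east-1')

# Create an S3 location that writes objects using the STANDARD_IA storage class
location = datasync.create_location_s3(
    S3BucketArn='arn:aws:s3:::my-datasync-destination',   # placeholder bucket
    Subdirectory='/incoming',
    S3StorageClass='STANDARD_IA',
    S3Config={
        'BucketAccessRoleArn': 'arn:aws:iam::123456789012:role/DataSyncS3Role'   # placeholder role
    },
)
print(location['LocationArn'])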

SMB Support
Late last month we announced that AWS DataSync Can Now Transfer Data to and from SMB File Shares. SMB (Server Message Block) protocol is common in Windows-centric environments, and is also the preferred protocol for many file servers and network attached storage (NAS) devices. You can use filter patterns to control the files that are included in or excluded from the transfer, and you can use SMB file shares as the data transfer source or destination (Amazon S3 and Amazon EFS can also be used). You simply create a DataSync location that references your SMB server and share:

To learn more, read Creating a Location for SMB.
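The equivalent API call for an SMB share looks something like this (the server name, share path, credentials, and agent ARN are all placeholders):

import boto3

datasync = boto3.client('datasync', region_name='us-east-1')

# Create an SMB location that references an on-premises file share
smb_location = datasync.create_location_smb(
    ServerHostname='fileserver.example.com',   # placeholder SMB server
    Subdirectory='/projects',                  # share path to transfer
    User='datasync-svc',                       # placeholder credentials
    Password='REPLACE_ME',
    AgentArns=['arn:aws:datasync:us-east-1:123456789012:agent/agent-0123456789abcdef0'],
)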

Additional Regions
AWS DataSync is now available in more locations. Earlier this year it became available in the AWS GovCloud (US-West) and Middle East (Bahrain) Regions.

VPC Endpoint Support
You can deploy AWS DataSync in a Virtual Private Cloud (VPC). If you do this, data transferred between the DataSync agent and the DataSync service will not traverse the public internet:

The VPC endpoints for DataSync are powered by AWS PrivateLink; to learn more read AWS DataSync Now Supports Amazon VPC Endpoints and Using AWS DataSync in a Virtual Private Cloud.
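Here's a hedged sketch of activating an agent against a VPC endpoint with boto3; the activation key, endpoint ID, subnet ARN, and security group ARN are placeholders:

import boto3

datasync = boto3.client('datasync', region_name='us-east-1')

# Activate a DataSync agent that communicates through a PrivateLink interface endpoint
agent = datasync.create_agent(
    ActivationKey='ABCDE-12345-FGHIJ-67890-KLMNO',   # placeholder activation key
    AgentName='on-prem-agent-vpc',
    VpcEndpointId='vpce-0123456789abcdef0',          # the DataSync interface endpoint
    SubnetArns=['arn:aws:ec2:us-east-1:123456789012:subnet/subnet-0123456789abcdef0'],
    SecurityGroupArns=['arn:aws:ec2:us-east-1:123456789012:security-group/sg-0123456789abcdef0'],
)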

FIPS for US Endpoints
In addition to support for VPC endpoints, we announced that AWS DataSync supports FIPS 140-2 Validated Endpoints in US Regions. The endpoints in these regions use a FIPS 140-2 validated cryptographic security module, making it easier for you to use DataSync for regulated workloads. You can use these endpoints by selecting them when you create your DataSync agent:

File and Folder Filtering
Earlier this year we added the ability to use file path and object key filters to exercise additional control over the data copied in a data transfer. To learn more, read about Excluding and including specific data in transfer tasks using AWS DataSync filters.
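Filters can be supplied when you create a task or when you start an execution. Here's a minimal sketch; the location ARNs are placeholders and I'm only showing the filter-related parameters:

import boto3

datasync = boto3.client('datasync', region_name='us-east-1')

# Exclude temporary files when the task is created (location ARNs are placeholders)...
task = datasync.create_task(
    SourceLocationArn='arn:aws:datasync:us-east-1:123456789012:location/loc-source0123456789',
    DestinationLocationArn='arn:aws:datasync:us-east-1:123456789012:location/loc-destination012',
    Excludes=[{'FilterType': 'SIMPLE_PATTERN', 'Value': '*.tmp|*/temp'}],
)

# ...and include only one folder tree for this particular run
datasync.start_task_execution(
    TaskArn=task['TaskArn'],
    Includes=[{'FilterType': 'SIMPLE_PATTERN', 'Value': '/projects/2019*'}],
)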

Embedded CloudWatch Metrics
Data transfer metrics are available in the Task Execution Details page so that you can track the progress of your transfer:

Other AWS DataSync Resources
Here are some resources to help you to learn more about AWS DataSync:

Jeff;

Cloud-Powered, Next-Generation Banking

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/cloud-powered-next-generation-banking/

Traditional banks make extensive use of labor-intensive, human-centric control structures such as Production Support groups, Security Response teams, and Contingency Planning organizations. These control structures were deemed necessary in order to segment responsibilities and to maintain a security posture that is risk averse. Unfortunately, this traditional model tends to keep the subject matter experts in these organizations at a distance from the development teams, reducing efficiency and getting in the way of innovation.

Banks and other financial technology (fintech) companies have realized that they need to move faster in order to meet the needs of the newest generation of customers. These customers, some in markets that have not been well-served by the traditional banks, expect a rich, mobile-first experience, top-notch customer service, and access to a broad array of services and products. They prefer devices to retail outlets, and want to patronize a bank that is responsive to their needs.

AWS-Powered Banking
Today I would like to tell you about a couple of AWS-powered banks that are addressing these needs. Both of these banks are born-in-the-cloud endeavors, and take advantage of the scale, power, and flexibility of AWS in new and interesting ways. For example, they make extensive use of microservices, deploy fresh code dozens or hundreds of times per day, and use analytics & big data to better understand their customers. They also apply automation to their compliance and control tasks, scanning code for vulnerabilities as it is committed, and also creating systems that systemically grant and enforce use of least-privilege IAM roles.

NuBank – Headquartered in Brazil and serving over 10 million customers, NuBank has been recognized by Fast Company as one of the most innovative companies in the world. They were founded in 2013 and reached unicorn status (a valuation of one billion dollars), just four years later. After their most recent round of funding, their valuation has jumped to ten billion dollars. Here are some resources to help you learn more about how they use AWS:

Starling – Headquartered in London and founded in 2014, Starling is backed by over $300M in funding. Their mobile apps provide instant notification of transactions, support freezing and unfreezing of cards, and provide in-app chat with customer service representatives. Here are some resources to help you learn more about how they use AWS:

Both banks are strong supporters of open banking, with support for APIs that allow third-party developers to build applications and services (read more about the NuBank API and the Starling API).

I found two of the videos (How the Cloud… and Automated Privilege Management…) particularly interesting. The two videos detail how NuBank and Starling have implemented Compliance as Code, with an eye toward simplifying permissions management and increasing the overall security profile of their respective banks.

I hope that you have enjoyed this quick look at how two next-generation banks are making use of AWS. The videos that I linked above contain tons of great technical information that you should also find of interest!

Jeff;

Now Available – EC2 Instances (G4) with NVIDIA T4 Tensor Core GPUs

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-ec2-instances-g4-with-nvidia-t4-tensor-core-gpus/

The NVIDIA-powered G4 instances that I promised you earlier this year are available now, and you can start using them today in eight AWS regions, in six sizes! You can use them for machine learning training & inferencing, video transcoding, game streaming, and remote graphics workstation applications.

The instances are equipped with up to four NVIDIA T4 Tensor Core GPUs, each with 320 Turing Tensor cores, 2,560 CUDA cores, and 16 GB of memory. The T4 GPUs are ideal for machine learning inferencing, computer vision, video processing, and real-time speech & natural language processing. The T4 GPUs also offer RT cores for efficient, hardware-powered ray tracing. The NVIDIA Quadro Virtual Workstation (Quadro vWS) is available in AWS Marketplace. It supports real-time ray-traced rendering and can speed creative workflows often found in media & entertainment, architecture, and oil & gas applications.

G4 instances are powered by AWS-custom Second Generation Intel® Xeon® Scalable (Cascade Lake) processors with up to 64 vCPUs, and are built on the AWS Nitro system. Nitro’s local NVMe storage building block provides direct access to up to 1.8 TB of fast, local NVMe storage. Nitro’s network building block delivers high-speed ENA networking. The Intel AVX512-Deep Learning Boost feature extends AVX-512 with a new set of Vector Neural Network Instructions (VNNI for short). These instructions accelerate the low-precision multiply & add operations that reside in the inner loop of many inferencing algorithms.

Here are the instance sizes:

Instance Name    NVIDIA T4 Tensor Core GPUs    vCPUs    RAM        Local Storage    EBS Bandwidth     Network Bandwidth
g4dn.xlarge      1                             4        16 GiB     1 x 125 GB       Up to 3.5 Gbps    Up to 25 Gbps
g4dn.2xlarge     1                             8        32 GiB     1 x 225 GB       Up to 3.5 Gbps    Up to 25 Gbps
g4dn.4xlarge     1                             16       64 GiB     1 x 225 GB       Up to 3.5 Gbps    Up to 25 Gbps
g4dn.8xlarge     1                             32       128 GiB    1 x 900 GB       7 Gbps            50 Gbps
g4dn.12xlarge    4                             48       192 GiB    1 x 900 GB       7 Gbps            50 Gbps
g4dn.16xlarge    1                             64       256 GiB    1 x 900 GB       7 Gbps            50 Gbps

We are also working on a bare metal instance that will be available in the coming months:

Instance Name    NVIDIA T4 Tensor Core GPUs    vCPUs    RAM        Local Storage    EBS Bandwidth    Network Bandwidth
g4dn.metal       8                             96       384 GiB    2 x 900 GB       14 Gbps          100 Gbps

If you want to run graphics workloads on G4 instances, be sure to use the latest version of the NVIDIA AMIs (available in AWS Marketplace) so that you have access to the requisite GRID and Graphics drivers, along with an NVIDIA Quadro Workstation image that contains the latest optimizations and patches. Here’s where you can find them:

  • NVIDIA Gaming – Windows Server 2016
  • NVIDIA Gaming – Windows Server 2019
  • NVIDIA Gaming – Ubuntu 18.04

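Launching a G4 instance works the same way as launching any other EC2 instance. Here's a minimal boto3 sketch; the AMI ID is a placeholder for one of the AMIs listed above, and the key pair name is hypothetical:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Launch a single g4dn.xlarge instance (AMI ID and key pair are placeholders)
response = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',
    InstanceType='g4dn.xlarge',
    MinCount=1,
    MaxCount=1,
    KeyName='my-key-pair',
)
print(response['Instances'][0]['InstanceId'])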
The newest AWS Deep Learning AMIs include support for G4 instances. The team that produces the AMIs benchmarked a g3.16xlarge instance against a g4dn.12xlarge instance and shared the results with me. Here are some highlights:

  • MXNet Inference (resnet50v2, forward pass without MMS) – 2.03 times faster.
  • MXNet Inference (with MMS) – 1.45 times faster.
  • MXNet Training (resnet50_v1b, 1 GPU) – 2.19 times faster.
  • TensorFlow Inference (resnet50v1.5, forward pass) – 2.00 times faster.
  • TensorFlow Inference with TensorFlow Service (resnet50v2) – 1.72 times faster.
  • TensorFlow Training (resnet50_v1.5) – 2.00 times faster.

The benchmarks used FP32 numeric precision; you can expect an even larger boost if you use mixed precision (FP16) or low precision (INT8).

You can launch G4 instances today in the US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Seoul), and Asia Pacific (Tokyo) Regions. We are also working to make them accessible in Amazon SageMaker and in Amazon EKS clusters.

Jeff;

Now Available – Amazon Quantum Ledger Database (QLDB)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-amazon-quantum-ledger-database-qldb/

Given the wide range of data types, query models, indexing options, scaling expectations, and performance requirements, databases are definitely not one-size-fits-all products. That's why there are many different AWS database offerings, each one purpose-built to meet the needs of a different type of application.

Introducing QLDB
Today I would like to tell you about Amazon QLDB, the newest member of the AWS database family. First announced at AWS re:Invent 2018 and made available in preview form, it is now available in production form in five AWS regions.

As a ledger database, QLDB is designed to provide an authoritative data source (often known as a system of record) for stored data. It maintains a complete, immutable history of all committed changes to the data that cannot be updated, altered, or deleted. QLDB supports PartiQL SQL queries to the historical data, and also provides an API that allows you to cryptographically verify that the history is accurate and legitimate. These features make QLDB a great fit for banking & finance, ecommerce, transportation & logistics, HR & payroll, manufacturing, and government applications and many other use cases that need to maintain the integrity and history of stored data.

Important QLDB Concepts
Let’s review the most important QLDB concepts before diving in:

Ledger – A QLDB ledger consists of a set of QLDB tables and a journal that maintains the complete, immutable history of changes to the tables. Ledgers are named and can be tagged.

Journal – A journal consists of a sequence of blocks, each cryptographically chained to the previous block so that changes can be verified. Blocks, in turn, contain the actual changes that were made to the tables, indexed for efficient retrieval. This append-only model ensures that previous data cannot be edited or deleted, and makes the ledgers immutable. QLDB allows you to export all or part of a journal to S3.

Table – Tables exist within a ledger, and contain a collection of document revisions. Tables support optional indexes on document fields; the indexes can improve performance for queries that make use of the equality (=) predicate.

Documents – Documents exist within tables, and must be in Amazon Ion form. Ion is a superset of JSON that adds additional data types, type annotations, and comments. QLDB supports documents that contain nested JSON elements, and gives you the ability to write queries that reference and include these elements. Documents need not conform to any particular schema, giving you the flexibility to build applications that can easily adapt to changes.

PartiQL – PartiQL is a new open standard query language that supports SQL-compatible access to relational, semi-structured, and nested data while remaining independent of any particular data source. To learn more, read Announcing PartiQL: One Query Language for All Your Data.

Serverless – You don’t have to worry about provisioning capacity or configuring read & write throughput. You create a ledger, define your tables, and QLDB will automatically scale to meet the needs of your application.

Using QLDB
You can create QLDB ledgers and tables from the AWS Management Console, AWS Command Line Interface (CLI), a CloudFormation template, or by making calls to the QLDB API. I’ll use the QLDB Console and I will follow the steps in Getting Started with Amazon QLDB. I open the console and click Start tutorial to get started:

The Getting Started page outlines the first three steps; I click Create ledger to proceed (this opens in a fresh browser tab):

I enter a name for my ledger (vehicle-registration), tag it, and (again) click Create ledger to proceed:

My ledger starts out in Creating status, and transitions to Active within a minute or two:
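The same steps can be scripted. Here's a hedged boto3 sketch that creates the ledger and waits for it to become active (ALLOW_ALL was the permissions mode available at launch):

import time
import boto3

qldb = boto3.client('qldb', region_name='us-east-1')

# Create the ledger used in this walk-through
qldb.create_ledger(
    Name='vehicle-registration',
    PermissionsMode='ALLOW_ALL',
    DeletionProtection=True,
)

# Poll until the ledger transitions from CREATING to ACTIVE
while qldb.describe_ledger(Name='vehicle-registration')['State'] != 'ACTIVE':
    time.sleep(10)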

I return to the Getting Started page, refresh the list of ledgers, choose my new ledger, and click Load sample data:

This takes a second or so, and creates four tables & six indexes:

I could also use PartiQL statements such as CREATE TABLE, CREATE INDEX, and INSERT INTO to accomplish the same task.

With my tables, indexes, and sample data loaded, I click on Editor and run my first query (a single-table SELECT):

This returns a single row, and also benefits from the index on the VIN field. I can also run a more complex query that joins two tables:

I can obtain the ID of a document (using a query from here), and then update the document:

I can query the modification history of a table or a specific document in a table, with the ability to find modifications within a certain range and on a particular document (read Querying Revision History to learn more). Here’s a simple query that returns the history of modifications to all of the documents in the VehicleRegistration table that were made on the day that I wrote this post:

As you can see, each row is a structured JSON object. I can select any desired rows and click View JSON for further inspection:

Earlier, I mentioned that PartiQL can deal with nested data. The VehicleRegistration table contains ownership information that looks like this:

{
   "Owners":{
      "PrimaryOwner":{
         "PersonId":"6bs0SQs1QFx7qN1gL2SE5G"
      },
      "SecondaryOwners":[

      ]
   }
}
PartiQL lets me reference the nested data using “.” notation:

I can also verify the integrity of a document that is stored within my ledger’s journal. This is fully described in Verify a Document in a Ledger, and is a great example of the power (and value) of cryptographic verification. Each QLDB ledger has an associated digest. The digest is a 256-bit hash value that uniquely represents the ledger’s entire history of document revisions as of a point in time. To access the digest, I select a ledger and click Get digest:

When I click Save, the console provides me with a short file that contains all of the information needed to verify the ledger. I save this file in a safe place, for use when I want to verify a document in the ledger. When that time comes, I get the file, click on Verification in the left-navigation, and enter the values needed to perform the verification. This includes the block address of a document revision, and the ID of the document. I also choose the digest that I saved earlier, and click Verify:

QLDB recomputes the hashes to ensure that the document has not been surreptitiously changed, and displays the verification:

In a production environment, you would use the QLDB APIs to periodically download digests and to verify the integrity of your documents.
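Here's a hedged sketch of what that periodic verification might look like with boto3. The block address and document ID shown below are placeholders that you would capture when you query the document's metadata:

import boto3

qldb = boto3.client('qldb', region_name='us-east-1')

# Fetch and store the current digest for the ledger
digest = qldb.get_digest(Name='vehicle-registration')
saved_digest = digest['Digest']             # 256-bit hash (bytes)
saved_tip = digest['DigestTipAddress']      # block address the digest covers

# Later, verify a specific document revision against the saved digest.
# The block address and document ID below are placeholders.
proof = qldb.get_revision(
    Name='vehicle-registration',
    BlockAddress={'IonText': '{strandId: "ABCDEFxyz", sequenceNo: 5}'},
    DocumentId='6bs0SQs1QFx7qN1gL2SE5G',
    DigestTipAddress=saved_tip,
)
# proof['Proof'] contains the hashes needed to recompute and compare against saved_digest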

Building Applications with QLDB
You can use the Amazon QLDB Driver for Java to write code that accesses and manipulates your ledger database. This is a Java driver that allows you to create sessions, execute PartiQL commands within the scope of a transaction, and retrieve results. Drivers for other languages are in the works; stay tuned for more information.

Available Now
Amazon QLDB is available now in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions. Pricing is based on the following factors, and is detailed on the Amazon QLDB Pricing page, including some real-world examples:

  • Write operations
  • Read operations
  • Journal storage
  • Indexed storage
  • Data transfer

Jeff;

New – Client IP Address Preservation for AWS Global Accelerator

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-client-ip-address-preservation-for-aws-global-accelerator/

AWS Global Accelerator is a network service that routes incoming network traffic to multiple AWS regions in order to improve performance and availability for your global applications. It makes use of our collection of edge locations and our congestion-free global network to direct traffic based on application health, network health, and the geographic locations of your users, and provides a set of static Anycast IP addresses that are announced from multiple AWS locations (read New – AWS Global Accelerator for Availability and Performance to learn a lot more). The incoming TCP or UDP traffic can be routed to an Application Load Balancer, Network Load Balancer, or to an Elastic IP Address.

Client IP Address Preservation
Today we are announcing an important new feature for AWS Global Accelerator. If you are routing traffic to an Application Load Balancer, the IP address of the user’s client is now available to code running on the endpoint. This allows you to apply logic that is specific to a particular IP address. For example, you can use security groups that filter based on IP address, and you can serve custom content to users based on their IP address or geographic location. You can also use the IP addresses to collect more accurate statistics on the geographical distribution of your user base.

Using Client IP Address Preservation
If you are already using AWS Global Accelerator, we recommend that you phase in your use of Client IP Address Preservation by using weights on the endpoints. This will allow you to verify that any rules or systems that make use of IP addresses continue to function as expected.

In order to test this new feature, I launched some EC2 instances, set up an Application Load Balancer, put the instances into a target group, and created an accelerator in front of my ALB:

I checked the IP address of my browser:

I installed a simple Python program (courtesy of the Global Accelerator team), sent an HTTP request to one of the Global Accelerator’s IP addresses, and captured the output:

The Source (99.82.172.36) is an internal address used by my accelerator. With my baseline established and everything working as expected, I am now ready to enable Client IP Address Preservation!

I open the AWS Global Accelerator Console, locate my accelerator, and review the current configuration, as shown above. I click the listener for port 80, and click the existing endpoint group:

From there I click Add endpoint, add a new endpoint to the group, use a Weight of 255, and select Preserve client IP address:

My endpoint group now has two endpoints (one with client IP preserved, and one without), both of which point to the same ALB:
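The same change can be made through the Global Accelerator API. Here's a minimal boto3 sketch, assuming that you already have the endpoint group ARN and the ALB ARN (both are placeholders below):

import boto3

# Global Accelerator is a global service; its API is served from us-west-2
ga = boto3.client('globalaccelerator', region_name='us-west-2')

# Point the endpoint group at the same ALB twice: once with client IP
# preservation enabled (high weight) and once without (low weight).
ga.update_endpoint_group(
    EndpointGroupArn='arn:aws:globalaccelerator::123456789012:accelerator/EXAMPLE/listener/EXAMPLE/endpoint-group/EXAMPLE',   # placeholder
    EndpointConfigurations=[
        {
            'EndpointId': 'arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/EXAMPLE',   # placeholder ALB
            'Weight': 255,
            'ClientIPPreservationEnabled': True,
        },
        {
            'EndpointId': 'arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/EXAMPLE',
            'Weight': 16,
            'ClientIPPreservationEnabled': False,
        },
    ],
)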

In a production environment I would start with a low weight and test to make sure that any security groups or other logic that was dependent on IP addresses continue to work as expected (I can also use the weights to manage traffic during blue/green deployments and software updates). Since I’m simply testing, I can throw caution to the wind and delete the old (non-IP-preserving) endpoint. Either way, the endpoint change becomes effective within a couple of minutes, and I can refresh my test window:

Now I can see that my code has access to the IP address of the browser (via the X-Forwarded-For header) and I can use it as desired. I can also use this IP address in security group rules.
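On the instance side, the client address arrives in the X-Forwarded-For header just as it would behind a plain ALB. Here's a minimal Python sketch (standard library only, similar in spirit to the test program I used above) of a handler that echoes it back:

from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoClientIP(BaseHTTPRequestHandler):
    def do_GET(self):
        # Behind the accelerator + ALB, the original client IP is the first
        # entry in X-Forwarded-For; self.client_address is the load balancer.
        forwarded = self.headers.get('X-Forwarded-For', 'unknown')
        client_ip = forwarded.split(',')[0].strip()
        body = f"Client IP: {client_ip}\n".encode()
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    # Assumes the ALB target group forwards to port 80 on the instance
    HTTPServer(('', 80), EchoClientIP).serve_forever()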

To learn more about best practices for switching over, read Transitioning Your ALB Endpoints to Use Client IP Address Preservation.

Things to Know
Here are a couple of important things to know about client IP preservation:

Elastic Network Interface (ENI) Usage – The Global Accelerator creates one ENI for each subnet that contains IP-preserving endpoints, and will delete them when they are no longer required. Don’t edit or delete them.

Security Groups – The Global Accelerator creates and manages a security group named GlobalAccelerator. Again, you should not edit or delete it.

Available Now
You can enable this new feature for Application Load Balancers in the US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Mumbai), and Asia Pacific (Sydney) Regions.

Jeff;

Amazon Prime Day 2019 – Powered by AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-prime-day-2019-powered-by-aws/

What did you buy for Prime Day? I bought a 34″ Alienware Gaming Monitor and used it to replace a pair of 25″ monitors that had served me well for the past six years:

 

As I have done in years past, I would like to share a few of the many ways that AWS helped to make Prime Day a reality for our customers. You can read How AWS Powered Amazon’s Biggest Day Ever and Prime Day 2017 – Powered by AWS to learn more about how we evaluate the results of each Prime Day and use what we learn to drive improvements to our systems and processes.

This year I would like to focus on three ways that AWS helped to support record-breaking amounts of traffic and sales on Prime Day: Amazon Prime Video Infrastructure, AWS Database Infrastructure, and Amazon Compute Infrastructure. Let’s take a closer look at each one…

Amazon Prime Video Infrastructure
Amazon Prime members were able to enjoy the second Prime Day Concert (presented by Amazon Music) on July 10, 2019. Headlined by 10-time Grammy winner Taylor Swift, this live-streamed event also included performances from Dua Lipa, SZA, and Becky G.

Live-streaming an event of this magnitude and complexity to an audience in over 200 countries required a considerable amount of planning and infrastructure. Our colleagues at Amazon Prime Video used multiple AWS Media Services including AWS Elemental MediaPackage and AWS Elemental live encoders to encode and package the video stream.

The streaming setup made use of two AWS Regions, with a redundant pair of processing pipelines in each region. The pipelines delivered 1080p video at 30 fps to multiple content distribution networks (including Amazon CloudFront), and worked smoothly.

AWS Database Infrastructure
A combination of NoSQL and relational databases were used to deliver high availability and consistent performance at extreme scale during Prime Day:

Amazon DynamoDB supports multiple high-traffic sites and systems including Alexa, the Amazon.com sites, and all 442 Amazon fulfillment centers. Across the 48 hours of Prime Day, these sources made 7.11 trillion calls to the DynamoDB API, peaking at 45.4 million requests per second.

Amazon Aurora also supports the network of Amazon fulfillment centers. On Prime Day, 1,900 database instances processed 148 billion transactions, stored 609 terabytes of data, and transferred 306 terabytes of data.

Amazon Compute Infrastructure
Prime Day 2019 also relied on a massive, diverse collection of EC2 instances. The internal scaling metric for these instances is known as a server equivalent; Prime Day started off with 372K server equivalents and scaled up to 426K at peak.

Those EC2 instances made great use of a massive fleet of Elastic Block Store (EBS) volumes. The team added an additional 63 petabytes of storage ahead of Prime Day; the resulting fleet handled 2.1 trillion requests per day and transferred 185 petabytes of data per day.

And That’s a Wrap
These are some impressive numbers, and show you the kind of scale that you can achieve with AWS. As you can see, scaling up for one-time (or periodic) events and then scaling back down afterward is easy and straightforward, even at world scale!

If you want to run your own world-scale event, I’d advise you to check out the blog posts that I linked above, and also be sure to read about AWS Infrastructure Event Management. My colleagues are ready (and eager) to help you to plan for your large-scale product or application launch, infrastructure migration, or marketing event. Here’s an overview of their process:

 

Jeff;

AWS CloudFormation Update – Public Coverage Roadmap & CDK Goodies

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-cloudformation-update-public-coverage-roadmap-cdk-goodies/

I launched AWS CloudFormation in early 2011 with a pair of posts: AWS CloudFormation – Create Your AWS Stack From a Recipe and AWS CloudFormation in the AWS Management Console. Since that launch, we have added support for many AWS resource types, launched many new features, and worked behind the scenes to ensure that CloudFormation is efficient, scalable, and highly available.

Public Coverage Roadmap
CloudFormation use is growing even faster than AWS itself, and the team has prioritized scalability over complete resource coverage. While our goal of providing 100% coverage remains, the reality is that it will take us some time to get there. In order to be more transparent about our priorities and to give you an opportunity to manage them, I am pleased to announce the much-anticipated CloudFormation Coverage Roadmap:

Styled after the popular AWS Containers Roadmap, the CloudFormation Coverage Roadmap contains four columns:

Shipped – Available for use in production form in all public AWS regions.

Coming Soon – Generally a few months out.

We’re Working On It – Work in progress, but further out.

Researching – We’re thinking about the right way to implement the coverage.

Please feel free to create your own issues, and to give a thumbs-up to those that you need to have in order to make better use of CloudFormation:

Before I close out, I would like to address one common comment – that AWS is part of a big company, and that we should simply throw more resources at it. While the team is growing, implementing robust, secure coverage is still resource-intensive. Please consider the following quote, courtesy of the must-read Mythical Man-Month:

Good cooking takes time. If you are made to wait, it is to serve you better, and to please you.

Cloud Development Kit Goodies
The Cloud Development Kit (CDK) lets you model and provision your AWS resources using a programming language that you are already familiar with. You use a set of CDK Constructs (VPCs, subnets, and so forth) to define your application, and then use the CDK CLI to synthesize a CloudFormation template, deploy it to AWS, and create a stack.
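To give you a flavor of what that looks like, here's a minimal CDK sketch in Python; the stack and construct names are arbitrary, and I'm assuming the CDK v1 Python bindings:

from aws_cdk import core
from aws_cdk import aws_ec2 as ec2

class NetworkStack(core.Stack):
    def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # One construct expands into a full CloudFormation template:
        # VPC, subnets across two AZs, route tables, NAT gateways, and so on.
        ec2.Vpc(self, "AppVpc", max_azs=2)

app = core.App()
NetworkStack(app, "network-stack")
app.synth()

Running cdk synth shows you the generated CloudFormation template, and cdk deploy turns it into a stack.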

Here are some resources to help you to get started with the CDK:

Stay Tuned
The CloudFormation Coverage Roadmap is an important waypoint on a journey toward open source that started out with cfn-lint, with some more stops in the works. Stay tuned and I’ll tell you more just as soon as I can!

Jeff;

AWS DeepLens – Now Orderable in Seven Additional Countries

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-deeplens-now-orderable-in-seven-additional-countries/

The new (2019) edition of the AWS DeepLens can now be purchased in the US, UK, Germany, France, Spain, Italy, and Canada, and preordered in Japan. The 2019 edition is easier to set up, and (thanks to Amazon SageMaker Neo) runs machine learning models up to twice as fast as the earlier edition.

New Tutorials
We are also launching a pair of new tutorials to help you to get started:

aws-deeplens-coffee-leaderboard – This tutorial focuses on a demo that uses face detection to track the number of people that drink coffee. It watches a scene, and triggers a Lambda function when a face is detected. Amazon Rekognition is used to detect the presence of a coffee mug, and the face is added to a DynamoDB database that is maintained by (and private to) the demo. The demo also includes a leaderboard that tracks the number of coffees over time. Here’s the architecture:

And here’s the leaderboard:

To learn more, read Track the number of coffees consumed using AWS DeepLens.

aws-deeplens-worker-safety-project – This tutorial focuses on a demo that identifies workers that are not wearing safety helmets. The DeepLens detects faces, and uploads the images to S3 for further processing. The results are analyzed using AWS IoT and Amazon CloudWatch, and are displayed on a web dashboard. Here’s the architecture:

To learn more, register for and then take the free 30-minute course: Worker Safety Project with AWS DeepLens.

Detecting Cats, and Cats with Rats
Finally, I would like to share a really cool video featuring my colleague Ben Hamm. After growing tired of cleaning up the remains of rats and other creatures that his cat Metric had killed, Ben decided to put his DeepLens to work. Using a hand-labeled training set, Ben created a model that could tell when Metric was carrying an unsavory item in his mouth, and then lock him out. Ben presented his project at Ignite Seattle and the video has been very popular. Take a look for yourself:

Order Your DeepLens Today
If you are in one of the countries that I listed above, you can order your DeepLens today and get started with Machine Learning in no time flat! Visit the DeepLens home page to learn more.

Jeff;

AWS Named as a Leader in Gartner’s Infrastructure as a Service (IaaS) Magic Quadrant for the 9th Consecutive Year

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-named-as-a-leader-in-gartners-infrastructure-as-a-service-iaas-magic-quadrant-for-the-9th-consecutiveyear/

My colleagues on the AWS service teams work to deliver what customers want today, and also do their best to anticipate what they will need tomorrow. This Customer Obsession, along with our commitment to Hire and Develop the Best (two of the fourteen Amazon Leadership Principles), helps us to figure out, and then to deliver on, our vision. It is always good to see that our hard work continues to delight customers, and to be recognized by Gartner and other leading analysts.

For the ninth consecutive year, AWS has secured the top-right corner of the Leaders quadrant in Gartner’s Magic Quadrant for Cloud Infrastructure as a Service (IaaS), earning the highest placement for Ability to Execute and the furthest for Completeness of Vision:

The full report contains a lot of detail and is a great summary of the features and factors that our customers examine when choosing a cloud provider.

Jeff;

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.