Tag Archives: AWS re:Invent

AWS Config Update – New Managed Rules to Secure S3 Buckets

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-config-update-new-managed-rules-to-secure-s3-buckets/

AWS Config captures the state of your AWS resources and the relationships between them. Among other features, it allows you to select a resource and then view a timeline of configuration changes that affect the resource (read Track AWS Resource Relationships With AWS Config to learn more).

AWS Config rules extend Config with a powerful rule system, with support for a “managed” collection of AWS rules as well as custom rules that you write yourself (my blog post, AWS Config Rules – Dynamic Compliance Checking for Cloud Resources, contains more info). The rules (AWS Lambda functions) represent the ideal (properly configured and compliant) state of your AWS resources. The appropriate functions are invoked when a configuration change is detected and check to ensure compliance.

You already have access to about three dozen managed rules. For example, here are some of the rules that check your EC2 instances and related resources:

Two New Rules
Today we are adding two new managed rules that will help you to secure your S3 buckets. You can enable these rules with a single click. The new rules are:

s3-bucket-public-write-prohibited – Automatically identifies buckets that allow global write access. There’s rarely a reason to create this configuration intentionally since it allows unauthorized users to add malicious content to buckets and to delete (by overwriting) existing content. The rule checks all of the buckets in the account.

s3-bucket-public-read-prohibited – Automatically identifies buckets that allow global read access. This will flag content that is publicly available, including web sites and documentation. This rule also checks all buckets in the account.
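
If you would rather enable these rules programmatically than with a single console click, here is a minimal sketch using the AWS SDK for JavaScript. The managed-rule source identifier below is an assumption based on the rule name, so verify it against the AWS Config developer guide (the read rule works the same way with its own identifier):

var AWS = require('aws-sdk');
var config = new AWS.ConfigService({region: 'us-east-1'});

config.putConfigRule({
    ConfigRule: {
        ConfigRuleName: 's3-bucket-public-write-prohibited',
        Source: {
            Owner: 'AWS', // a managed rule, maintained by AWS
            SourceIdentifier: 'S3_BUCKET_PUBLIC_WRITE_PROHIBITED' // assumed identifier
        },
        Scope: {
            ComplianceResourceTypes: ['AWS::S3::Bucket']
        }
    }
}, function(err, data) {
    if (err) console.log('Failed to enable rule:', err);
    else     console.log('Rule enabled');
});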

Like the existing rules, the new rules can be run on a schedule or in response to changes detected by Config. You can see the compliance status of all of your rules at a glance:

Each evaluation runs in a matter of milliseconds; scanning an account with 100 buckets will take less than a minute. Behind the scenes, the rules are evaluated by a reasoning engine that uses some leading-edge constraint solving techniques that can, in many cases, address NP-complete problems in polynomial time (we did not resolve P versus NP; that would be far bigger news). This work is part of a larger effort within AWS, some of which is described in an AWS re:Invent presentation: Automated Formal Reasoning About AWS Systems:

Now Available
The new rules are available now and you can start using them today. Like the other rules, they are priced at $2 per rule per month.

Jeff;

Get Ready for AWS re:Invent 2017

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/get-ready-for-aws-reinvent-2017/

With just 110 days left until November 27, 2017, my colleagues and I are working hard to get ready for re:Invent 2017. I have not yet started on my blog posts or on any new LEGO creations, but I have taken a look at a very preliminary list of launches and am already gearing up for a very busy month or two!

We’ve got more venues, a bigger expo hall, more content (over 1,000 sessions), more hackathons, more bootcamps, more workshops, and more certification opportunities than ever before. In addition to perennial favorites like the Tatonka Challenge and the re:PLAY party, we’ve added broomball (a long-time Amazon tradition) and some all-star fitness activities.

Every year I get last-minute texts, calls, and emails from long-lost acquaintances begging for tickets and have to turn them all down (I’m still waiting for the one that starts with “I am pretty sure we were in first grade together…” but you get the idea). Even though we increase capacity every year, we are expecting a sell-out crowd once again and I’d like to encourage you to register today in order to avoid being left out.

See you in Vegas!

Jeff;


Now Available: Three New AWS Specialty Training Courses

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-three-new-aws-specialty-training-courses/

AWS Training allows you to learn from the experts so you can advance your knowledge with practical skills and get more out of the AWS Cloud. Today I am happy to announce that three of our most popular training bootcamps (a staple at AWS re:Invent and AWS Global Summits) are becoming part of our permanent instructor-led training portfolio:

These one-day courses are intended for individuals who would like to dive deeper into a specialized topic with an expert trainer.

You can explore our complete course catalog, and you can search for a public class near you within the AWS Training and Certification Portal. You can also request a private onsite training session for your team by contacting us.

Jeff;


Announcing the AWS Chatbot Challenge – Create Conversational, Intelligent Chatbots using Amazon Lex and AWS Lambda

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/announcing-the-aws-chatbot-challenge-create-conversational-intelligent-chatbots-using-amazon-lex-and-aws-lambda/

If you have been checking out the launches and announcements from the AWS 2017 San Francisco Summit, you may be aware that the Amazon Lex service is now Generally Available, and you can use the service today. Amazon Lex is a fully managed AI service that enables developers to build conversational interfaces into any application using voice and text. Lex uses the same deep learning technologies that power Amazon Alexa devices like the Amazon Echo. With the release of Amazon Lex, developers can build highly engaging, lifelike user experiences and natural language interactions within their own applications. Amazon Lex supports Slack, Facebook Messenger, and Twilio SMS, enabling you to easily publish your voice or text chatbots using these popular chat services. There is no better time to try out the Amazon Lex service to add the gift of gab to your applications, and now you have a great reason to get started.

May I have a drumroll, please?

I am thrilled to announce the AWS Chatbot Challenge! The AWS Chatbot Challenge is your opportunity to build a unique chatbot that helps solve a problem or adds value for prospective users. The AWS Chatbot Challenge is brought to you by Amazon Web Services in partnership with Slack.


The Challenge

Your mission, if you choose to accept it, is to build a conversational, natural language chatbot using Amazon Lex, leveraging Lex’s integration with AWS Lambda to execute logic or data processing on the backend. Your submission can be a new or existing bot; however, if your bot is an existing one, it must have been updated to use Amazon Lex and AWS Lambda within the challenge submission period.
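
To make the Lex-plus-Lambda wiring concrete, here is a minimal sketch of a fulfillment Lambda function in Node.js. The intent name and slot below are hypothetical examples rather than anything prescribed by the challenge, and the event and response shapes follow the Amazon Lex Lambda input/output format, which you should double-check against the Lex documentation:

'use strict';

// A sketch of a Lex fulfillment handler; the Size slot is a made-up example
exports.handler = (event, context, callback) => {
    var intentName = event.currentIntent.name;
    var slots = event.currentIntent.slots;

    // Run whatever backend logic the bot needs, then hand a reply back to Lex
    var reply = 'Thanks! Your ' + (slots.Size || 'regular') +
                ' ' + intentName + ' request is being processed.';

    callback(null, {
        dialogAction: {
            type: 'Close',                 // end this conversation turn
            fulfillmentState: 'Fulfilled', // tell Lex the intent succeeded
            message: {
                contentType: 'PlainText',
                content: reply
            }
        }
    });
};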


You are limited only by your own imagination when building your solution. That said, I will share some recommendations to help get your creative juices flowing when creating or deploying your bot. Some suggestions that can help you make your chatbot more distinctive are:

  • Deploy your bot to Slack, Facebook Messenger, or Twilio SMS
  • Take advantage of other AWS services when building your bot solution
  • Incorporate text-to-speech capabilities using a service like Amazon Polly
  • Utilize other third-party APIs, SDKs, and services
  • Leverage Amazon Lex pre-built enterprise connectors and add services like Salesforce, HubSpot, Marketo, Microsoft Dynamics, Zendesk, and QuickBooks as data sources

There are cost-effective ways to build your bot using AWS Lambda. Lambda includes a free tier of one million requests and 400,000 GB-seconds of compute time per month. This free monthly usage is available to all customers and does not expire at the end of the 12-month Free Tier term. Furthermore, new Amazon Lex customers can process up to 10,000 text requests and 5,000 speech requests per month for free during the first year. You can find details here.

Remember, the AWS Free Tier includes services with a free tier available for 12 months following your AWS sign-up date, as well as additional service offers that do not automatically expire at the end of your 12-month term. You can review the details about the AWS Free Tier and related services by going to the AWS Free Tier Details page.


Can We Talk – How It Works

The AWS Chatbot Challenge is open to individuals, and teams of individuals, who have reached the age of majority in their eligible area of residence at the time of competition entry. Organizations that employ 50 or fewer people are also eligible to compete as long as, at the time of entry, they are duly organized or incorporated and validly exist in an eligible area. Larger organizations (employing more than 50 people) in eligible areas can participate but will only be eligible for a non-cash recognition prize.

Chatbot Submissions are judged using the following criteria:

  • Customer Value: The problem or pain point the bot solves and the extent to which it adds value for users
  • Bot Quality: The unique way the bot solves users’ problems, and the originality, creativity, and differentiation of the bot solution
  • Bot Implementation: Determination of how well the bot was built and executed by the developer. Also, consideration of bot functionality, such as whether the bot functions as intended and recognizes and responds to the most common phrases asked of it

Prizes

The AWS Chatbot Challenge is awarding prizes for your hard work!

First Prize

  • $5,000 USD
  • $2,500 AWS Credits
  • Two (2) tickets to AWS re:Invent
  • 30-minute virtual meeting with the Amazon Lex team
  • Winning submission featured on the AWS AI blog
  • Cool swag

Second Prize

  • $3,000 USD
  • $1,500 AWS Credits
  • One (1) ticket to AWS re:Invent
  • 30-minute virtual meeting with the Amazon Lex team
  • Winning submission featured on the AWS AI blog
  • Cool swag

Third Prize

  • $2,000 USD
  • $1,000 AWS Credits
  • 30-minute virtual meeting with the Amazon Lex team
  • Winning submission featured on the AWS AI blog
  • Cool swag


Challenge Timeline

  • Submissions Start: April 19, 2017 at 12:00pm PDT
  • Submissions End: July 18, 2017 at 5:00pm PDT
  • Winners Announced: August 11, 2017 at 9:00am PDT


Up to the Challenge – Get Started

Are you ready to get started on your chatbot and dive into the challenge? Here is how:

Review the details on the challenge rules and eligibility, and then:

  1. Register for the AWS Chatbot Challenge.
  2. Join the AWS Chatbot Slack Channel.
  3. Create an account on AWS.
  4. Visit the Resources page for links to documentation and resources.
  5. Shoot a demo video that demonstrates your bot in action. Prepare a written summary of your bot and what it does.
  6. Provide a way to access your bot for judging and testing by including a link to your GitHub repo hosting the bot code, all deployment files, and the instructions needed for testing your bot.
  7. Submit your bot on AWSChatbot2017.Devpost.com before July 18, 2017 at 5 pm ET and share access to your bot, its GitHub repo, and its deployment files.

Summary

With Amazon Lex you can build conversation into web and mobile applications, as well as use it to build chatbots that control IoT devices, provide customer support, give transaction updates, or perform operations for DevOps workloads (ChatOps). Amazon Lex provides built-in integration with AWS Lambda, AWS Mobile Hub, and Amazon CloudWatch, and allows for easy integration with other AWS services, so you can use the AWS platform to build security, monitoring, user authentication, business logic, and storage into your chatbot or application. You can make additional enhancements to your voice or text chatbot by taking advantage of Amazon Lex’s support for chat services like Slack, Facebook Messenger, and Twilio SMS.

Dive into building chatbots and conversational interfaces with Amazon Lex and AWS Lambda through the AWS Chatbot Challenge for a chance to win some cool prizes. Here are some recent resources and online tech talks about creating bots with Amazon Lex and AWS Lambda that may help you in your bot-building journey:

If you have questions about the AWS Chatbot Challenge you can email [email protected] or post a question to the Discussion Board.


Good Luck and Happy Coding.

Tara

Amazon Lex – Now Generally Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-lex-now-generally-available/

During AWS re:Invent I showed you how you could use Amazon Lex to build conversational voice & text interfaces. At that time we launched Amazon Lex in preview form and invited developers to sign up for access. Powered by the same deep learning technologies that drive Amazon Alexa, Amazon Lex allows you to build web & mobile applications that support engaging, lifelike interactions.

Today I am happy to announce that we are making Amazon Lex generally available, and that you can start using it today! Here are some of the features that we added during the preview:

Slack Integration – You can now create Amazon Lex bots that respond to messages and events sent to a Slack channel. Click on the Channels tab of your bot, select Slack, and fill in the form to get a callback URL for use with Slack:

Follow the tutorial (Integrating an Amazon Lex Bot with Slack) to see how to do this yourself.

Twilio Integration – You can now create Amazon Lex bots that respond to SMS messages sent to a Twilio SMS number. Again, you simply click on Channels, select Twilio, and fill in the form:

To learn more, read Integrating an Amazon Lex Bot with Twilio SMS.

SDK Support – You can now use the AWS SDKs to build iOS, Android, Java, JavaScript, Python, .Net, Ruby, PHP, Go, and C++ bots that span mobile, web, desktop, and IoT platforms and interact using either text or speech. The SDKs also support the build process for bots; you can programmatically add sample utterances, create slots, add slot values, and so forth. You can also manage the entire build, test, and deployment process programmatically.
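
As a quick illustration of the runtime side of the SDK support, here is a minimal Node.js sketch that sends a line of text to a bot; the bot name, alias, and user ID are placeholders for your own values:

var AWS = require('aws-sdk');
var lexruntime = new AWS.LexRuntime({region: 'us-east-1'});

lexruntime.postText({
    botName: 'OrderFlowers',       // placeholder bot name
    botAlias: 'prod',              // placeholder alias
    userId: 'demo-user-1234',      // any ID that is stable per conversation
    inputText: 'I would like to order flowers'
}, function(err, data) {
    if (err) console.log(err, err.stack);
    else     console.log('Bot says:', data.message);
});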

Voice Input on Test Console – The Amazon Lex test console now supports voice input when used on the Chrome browser. Simply click on the microphone:

Utterance Monitoring – Amazon Lex now records utterances that were not recognized by your bot, otherwise known as missed utterances. You can review the list and add the relevant ones to your bot:

You can also watch the following CloudWatch metrics to get a better sense of how your users are interacting with your bot. Over time, as you add additional utterances and improve your bot in other ways, these metrics should decline.

  • Text Missed Utterances (PostText)
  • Text Missed Utterances (PostContent)
  • Speech Missed Utterances

Easy Association of Slots with Utterances – You can now highlight text in the sample utterances in order to identify slots and add values to slot types:

Improved IAM Support – Amazon Lex permissions are now configured automatically from the console; you can now create bots without having to create your own policies.

Preview Response Cards – You can now view a preview of the response cards in the console:

To learn more, read about Using a Response Card.

Go For It
Pricing is based on the number of text and voice responses processed by your application; see the Amazon Lex Pricing page for more info.

I am really looking forward to seeing some awesome bots in action! Build something cool and let me know what you come up with.

Jeff;


AWS X-Ray Update – General Availability, Including Lambda Integration

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-x-ray-update-general-availability-including-lambda-integration/

I first told you about AWS X-Ray at AWS re:Invent in my post, AWS X-Ray – See Inside Your Distributed Application. X-Ray allows you to trace requests made to your application as execution traverses Amazon EC2 instances, Amazon ECS containers, microservices, AWS database services, and AWS messaging services. It is designed for development and production use, and can handle simple three-tier applications as well as applications composed of thousands of microservices. As I showed you last year, X-Ray helps you to perform end-to-end tracing of requests, record a representative sample of the traces, see a map of the services and the trace data, and to analyze performance issues and errors. This helps you understand how your application and its underlying services are performing so you can identify and address the root cause of issues.

You can take a look at the full X-Ray walk-through in my earlier post to learn more.

We launched X-Ray in preview form at re:Invent and invited interested developers and architects to start using it. Today we are making the service generally available, with support in the US East (Northern Virginia), US West (Northern California), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), South America (São Paulo), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Sydney), and Asia Pacific (Mumbai) Regions.

New Lambda Integration (Preview)
During the preview period we fine-tuned the service and added AWS Lambda integration, which we are launching today in preview form. Now, Lambda developers can use X-Ray to gain visibility into their function executions and performance. Previously, Lambda customers who wanted to understand their application’s latency breakdown, diagnose slowdowns, or troubleshoot timeouts had to rely on custom logging and analysis.

In order to make use of this new integration, you simply ensure that the functions of interest have execution roles that give them permission to write to X-Ray, and then enable tracing on a function-by-function basis (when you create new functions using the console, the proper permissions are assigned automatically). Then you use the X-Ray service map to see how your requests flow through your Lambda functions, EC2 instances, ECS containers, and so forth. You can identify the services and resources of interest, zoom in, examine detailed timing information, and then remedy the issue.
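
Here is a rough sketch, using the AWS SDK for JavaScript, of enabling active tracing on an existing function (the function name is a placeholder), along with the one-line change that lets the X-Ray SDK record your function’s downstream AWS calls as subsegments:

var AWS = require('aws-sdk');
var lambda = new AWS.Lambda({region: 'us-east-1'});

// Turn on active tracing for the function
lambda.updateFunctionConfiguration({
    FunctionName: 'my-function',        // placeholder name
    TracingConfig: { Mode: 'Active' }
}, function(err, data) {
    if (err) console.log(err, err.stack);
});

// Inside the function itself, wrap the AWS SDK with the X-Ray SDK so that
// calls to DynamoDB, S3, and so forth show up as subsegments in the trace:
//   var AWSXRay = require('aws-xray-sdk');
//   var AWS = AWSXRay.captureAWS(require('aws-sdk'));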

Each call to a Lambda function generates two or more nodes in the X-Ray map:

Lambda Service – This node represents the time spent within Lambda itself.

User Function – This node represents the execution time of the Lambda function.

Downstream Service Calls – These nodes represent any calls that the Lambda function makes to other services.

To learn more, read Using X-Ray with Lambda.

Now Available
We will begin to charge for the usage of X-Ray on May 1, 2017.

Pricing is based on the number of traces that you record, and the number that you analyze (each trace represents a request made to your application). You can record 100,000 traces and retrieve or scan 1,000,000 traces every month at no charge. Beyond that, you pay $5 for every million traces that you record and $0.50 for every million traces that you retrieve for analysis, with more info available on the AWS X-Ray Pricing page. You can visit the AWS Billing Console to see how many traces you have recorded or accessed (data collection began on March 1, 2017).

Check out AWS X-Ray and the new Lambda integration today and let me know what you think!

Jeff;


EC2 F1 Instances with FPGAs – Now Generally Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-f1-instances-with-fpgas-now-generally-available/

We launched the Developer Preview of the FPGA-equipped F1 instances at AWS re:Invent. The response to the announcement was quick and overwhelming! We received over 2,000 requests for entry and were able to provide over 200 developers with access to the Hardware Development Kit (HDK) and the actual F1 instances.

In the post that I wrote for re:Invent, I told you that:

This highly parallelized model is ideal for building custom accelerators to process compute-intensive problems. Properly programmed, an FPGA has the potential to provide a 30x speedup to many types of genomics, seismic analysis, financial risk analysis, big data search, and encryption algorithms and applications.

During the preview, partners and developers have been working on all sorts of exciting tools, services, and applications. I’ll tell you more about them in just a moment.

Now Generally Available
Today we are making the F1 instances generally available in the US East (Northern Virginia) Region, with plans to bring them to other regions before too long.

We continued to add features and functions during the preview, while also making the development tools more efficient and easier to use. Here’s a summary:

Developer Community – We launched the AWS FPGA Development Forum to provide a place for FPGA developers to hang out and to communicate with us and with each other.

HDK and SDK – We published the EC2 FPGA Hardware (HDK) and Software Development Kit to GitHub, and made many improvements in response to feedback that we received during the preview.

The improvements include support for VHDL (in addition to Verilog), an improved virtual lab environment (Virtual JTAG, Virtual LED, and Virtual DipSwitch), AWS libraries for FPGA management and the FPGA runtime, and support for OpenCL including the AWS OpenCL runtime library.

FPGA Developer AMI – This Marketplace AMI contains a full set of FPGA development tools including an RTL compiler and simulator, along with Xilinx SDAccel for OpenCL development, all tuned for use on C4, M4, and R4 instances.

FPGAs At Work
Here’s a sampling of the impressive work that our partners have been doing with F1 instances:

Edico Genome is deploying their DRAGEN Bio-IT Platform on F1 instances, with the expectation that it will provide whole-genome sequencing that runs in real time.

Ryft offers the Ryft Cloud, an accelerator for data analytics and machine learning that extends Elastic Stack. It sources data from Amazon Kinesis, Amazon Simple Storage Service (S3), Amazon Elastic Block Store (EBS), and local instance storage and uses massive bitwise parallelism to drive performance. The product supports high-level JDBC, ODBC, and REST interfaces along with low-level C, C++, Java, and Python APIs (see the Ryft API page for more information).

Reconfigure.io launched a cloud-based service that allows you to program FPGAs using the Go programming language. You can build, test, and deploy your code from within their cloud-based environment while taking advantage of concurrency-oriented language features such as goroutines (lightweight threads), channels, and selects.

NGCodec ported their RealityCodec video encoder to the F1 and used it to produce broadcast-quality video at 80 frames per second. Their solution can encode up to 32 independent video streams on a single F1 instance (read their new post, You Deserve Better than Grainy Giraffes, to learn more).

FPGAs In School & Research
Research groups and graduate classes at top-tier universities contacted us via AWS Educate and were eager to gain access to F1 instances.

UCLA‘s CS133 class (Parallel and Distributed Computing) is setting up an F1-based FPGA lab that will be operational within 3 or 4 weeks. According to UCLA Chancellor’s Professor Jason Cong, they are expanding multiple research projects to cover F1 including FPGA performance debugging, machine learning acceleration, Spark to FPGA compilation, and systolic array compilation.

Last month we announced that we are collaborating with the National Science Foundation (NSF) to foster innovation in big data research (read AWS Collaborates With the National Science Foundation to Foster Innovation to learn more and to find out how to apply for a grant).

FPGAs in the AWS Marketplace
As I shared in my original post, we have built a complete beginning-to-end solution that lets developers build FPGA-powered applications and services and list them in the AWS Marketplace. I can’t wait to see what kinds of cool things show up there!

Jeff;

Now Available – I3 Instances for Demanding, I/O Intensive Applications

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/

On the first day of AWS re:Invent I published an EC2 Instance Update and promised to share additional information with you as soon as I had it.

Today I am happy to be able to let you know that we are making six sizes of our new I3 instances available in fifteen AWS regions! Designed for I/O intensive workloads and equipped with super-efficient NVMe SSD storage, these instances can deliver up to 3.3 million IOPS at a 4 KB block size and up to 16 GB/second of sequential disk throughput. This makes them a great fit for any workload that requires high throughput and low latency, including relational databases, NoSQL databases, search engines, data warehouses, real-time analytics, and disk-based caches. When compared to the I2 instances, I3 instances deliver storage that is less expensive and more dense, with the ability to deliver substantially more IOPS and more network bandwidth per CPU core.

The Specs
Here are the instance sizes and the associated specs:

Instance Name | vCPU Count | Memory | Instance Storage (NVMe SSD) | Price/Hour
i3.large | 2 | 15.25 GiB | 0.475 TB | $0.15
i3.xlarge | 4 | 30.5 GiB | 0.950 TB | $0.31
i3.2xlarge | 8 | 61 GiB | 1.9 TB | $0.62
i3.4xlarge | 16 | 122 GiB | 3.8 TB (2 disks) | $1.25
i3.8xlarge | 32 | 244 GiB | 7.6 TB (4 disks) | $2.50
i3.16xlarge | 64 | 488 GiB | 15.2 TB (8 disks) | $4.99

The prices shown are for On-Demand instances in the US East (Northern Virginia) Region; see the EC2 pricing page for more information.

I3 instances are available in On-Demand, Reserved, and Spot form in the US East (Northern Virginia), US West (Oregon), US West (Northern California), US East (Ohio), Canada (Central), South America (São Paulo), EU (Ireland), EU (London), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Sydney), and AWS GovCloud (US) Regions. You can also use them as Dedicated Hosts and as Dedicated Instances.

These instances support hardware virtual machine (HVM) AMIs only, and must be run within a Virtual Private Cloud. In order to benefit from the performance made possible by the NVMe storage, you must run one of the following operating systems:

  • Amazon Linux AMI
  • RHEL – 6.5 or better
  • CentOS – 7.0 or better
  • Ubuntu – 16.04 or 16.10
  • SUSE 12
  • SUSE 11 with SP3
  • Windows Server 2008 R2, 2012 R2, and 2016

The I3 instances offer up to 8 NVMe SSDs. In order to achieve the best possible throughput and to get as many IOPS as possible, you can stripe multiple volumes together, or spread the I/O workload across them in another way.

Each vCPU (Virtual CPU) is a hardware hyperthread on an Intel E5-2686 v4 (Broadwell) processor running at 2.3 GHz. The processor supports the AVX2 instructions, along with Turbo Boost and NUMA.

Go For Launch
The I3 instances are available today in fifteen AWS regions and you can start to use them right now.

Jeff;


AWS Direct Connect Update – Link Aggregation Groups, Bundles, and re:Invent Recap

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-direct-connect-update-link-aggregation-groups-bundles-and-reinvent-recap/

AWS Direct Connect helps our large-scale customers to create private, dedicated network connections to their office, data center, or colocation facility. Our customers create 1 Gbps and 10 Gbps connections in order to reduce their network costs, increase data transfer throughput, and to get a more consistent network experience than is possible with an Internet-based connection.

Today I would like to tell you about a new Link Aggregation feature for Direct Connect. I’d also like to tell you about our new Direct Connect Bundles and to tell you more about how we used Direct Connect to provide a first-class customer experience at AWS re:Invent 2016.

Link Aggregation Groups
Some of our customers would like to set up multiple connections (generally known as ports) between their location and one of the 46 Direct Connect locations. Some of them would like to create a highly available link that is resilient in the face of network issues outside of AWS; others simply need more data transfer throughput.

In order to support this important customer use case, you can now purchase up to 4 ports and treat them as a single managed connection, which we call a Link Aggregation Group or LAG. After you have set this up, traffic is load-balanced across the ports at the level of individual packet flows. All of the ports are active simultaneously, and are represented by a single BGP session. Traffic across the group is managed via Dynamic LACP (Link Aggregation Control Protocol – or ISO/IEC/IEEE 8802-1AX:2016). When you create your group, you also specify the minimum number of ports that must be active in order for the connection to be activated.

You can order a new group with multiple ports and you can aggregate existing ports into a new group. Either way, all of the ports must have the same speed (1 Gbps or 10 Gbps).

All of the ports in a group will connect to the same device on the AWS side. You can add additional ports to an existing group as long as there’s room on the device (this information is now available in the Direct Connect Console). If you need to expand an existing group and the device has no open ports, you can simply order a new group and migrate your connections.
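
If you manage connections through the API rather than the console, creating a group is a single call. Here is a minimal sketch with the AWS SDK for JavaScript; the location code and name are placeholders, and the exact CreateLag parameters should be verified against the Direct Connect API reference:

var AWS = require('aws-sdk');
var dx = new AWS.DirectConnect({region: 'us-east-1'});

dx.createLag({
    lagName: 'my-lag',               // placeholder name
    location: 'EqDC2',               // a Direct Connect location code (placeholder)
    connectionsBandwidth: '1Gbps',   // all ports in a LAG share one speed
    numberOfConnections: 2
}, function(err, data) {
    if (err) console.log(err, err.stack);
    else     console.log('LAG state:', data.lagState);
});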

Here’s how you can make use of link aggregation from the Console. First, creating a new LAG from scratch:

And second, creating a LAG from existing connections:


Link Aggregation Groups are now available in the US East (Northern Virginia), US West (Northern California), US East (Ohio), US West (Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Mumbai), and Asia Pacific (Seoul) Regions and you can create them today. We expect to make them available in the remaining regions by the end of this month.

Direct Connect Bundles
We announced some powerful new Direct Connect Bundles at re:Invent 2016. Each bundle is an advanced, hybrid reference architecture designed to reduce complexity and to increase performance. Here are the new bundles:

Level 3 Communications Powers Amazon WorkSpaces – Connects enterprise applications, data, user workspaces, and end-point devices to offer reliable performance and a better end-user experience:

SaaS Architecture enhanced by AT&T NetBond – Enhances quality and user experience for applications migrated to the AWS Cloud:

Aviatrix User Access Integrated with Megaport DX – Supports encrypted connectivity between AWS Cloud Regions, between enterprise data centers and AWS, and on VPN access to AWS:

Riverbed Hybrid SDN/NFV Architecture over Verizon Secure Cloud Interconnect – Allows enterprise customers to provide secure, optimized access to AWS services in a hybrid network environment:

Direct Connect at re:Invent 2016
In order to provide a top-notch experience for attendees and partners at re:Invent, we worked with Level 3 to set up a highly available and fully redundant set of connections. This network was used to support breakout sessions, certification exams, the hands-on labs, the keynotes (including the live stream to over 25,000 viewers in 122 countries), the hackathon, bootcamps, and workshops. The re:Invent network used four 10 Gbps connections, two each to US West (Oregon) and US East (Northern Virginia):

It supported all of the re:Invent venues:

Here are some video resources that will help you to learn more about how we did this, and how you can do it yourself:

Jeff;

Excited about MXNet joining Apache!

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/excited-about-mxnet-joining-apache/

Post by Dr. Matt Wood

From Alexa to Amazon Go, we use deep learning extensively across all areas of Amazon, and we’ve tried a lot of deep learning engines along the way. One has emerged as the most scalable, efficient way to perform deep learning, and for these reasons, we have selected MXNet as our engine of choice at Amazon.

MXNet is an open source, state-of-the-art deep learning engine, which allows developers to build sophisticated, custom artificial intelligence systems. Training these systems is significantly faster in MXNet, due to its scale and performance. For example, for the popular image recognition network ResNet, MXNet has 2X the throughput compared to other engines, letting you train equivalent models in half the time. MXNet also shows close to linear scaling across hundreds of GPUs, while the performance of other engines shows diminishing returns at scale.

We have a significant team at Amazon working with the MXNet community to continue to evolve it. The team proposed MXNet joining the Apache Incubator to take advantage of the Apache Software Foundation’s process, stewardship, outreach, and community events. We’re excited to announce that it has been accepted.

We’re at the start of what we’ll be investing in Apache MXNet, and we look forward to partnering with the community to keep extending its already significant utility.

If you’d like to get started with MXNet, take a look at the keynote presentation I gave at AWS re:Invent, and fire up an instance (or an entire cluster) of the AWS Deep Learning AMI, which includes MXNet with example code, pre-compiled and ready to rock. You should also watch Leo’s presentation and tutorial on recommendation modeling.

You should follow @apachemxnet on Twitter, or check out the new Apache MXNet page for updates from the open source project.


Introducing the AWS IoT Button Enterprise Program

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/introducing-the-aws-iot-button-enterprise-program/

The AWS IoT Button first made its appearance on the IoT scene in October of 2015 at AWS re:Invent with the introduction of the AWS IoT service. That year all re:Invent attendees received the AWS IoT Button, providing them the opportunity to get hands-on with AWS IoT. Since that time, the AWS IoT Button has been made broadly available to anyone interested in the clickable IoT device.

During this past AWS re:Invent 2016 conference, the AWS IoT Button was launched into the enterprise with the AWS IoT Button Enterprise Program. This program is intended to help businesses offer new services or improve existing products at the click of a physical button. With the AWS IoT Button Enterprise Program, enterprises can use a programmable AWS IoT Button to increase customer engagement, expand applications, and offer new innovations to customers by simplifying the user experience. By harnessing the power of IoT, businesses can respond to customer demand for their products and services in real time while providing a direct line of communication for customers, all via a simple device.


AWS IoT Button Enterprise Program

Let’s discuss how the new AWS IoT Button Enterprise Program works. Businesses start by placing a bulk order of the AWS IoT buttons and providing a custom label for the branding of the buttons. Amazon manufactures the buttons and pre-provisions the IoT button devices by giving each a certificate and unique private key to grant access to AWS IoT and ensure secure communication with the AWS cloud. This allows for easier configuration and helps customers more easily get started with the programming of the IoT button device.

Businesses then design and build their IoT solution with the button devices and create companion applications for the devices. The AWS IoT Button Enterprise Program provides businesses some complimentary assistance directly from AWS to ensure a successful deployment. The deployed devices then only need to be configured with Wi-Fi at user locations in order to function.


For enterprises, there are several use cases that would benefit from the implementation of an IoT button solution. Here are some ideas:

  • Reordering services or custom products such as pizza or medical supplies
  • Requesting a callback from a customer service agent
  • Retail operations, such as a call-for-assistance button in stores or restaurants
  • Inventory systems for capturing product counts for inventory
  • Healthcare applications, such as alert or notification systems for the disabled or elderly
  • Interfaces with smart home systems to turn devices on and off, such as turning off outside lights or opening the garage door
  • Guest check-in/check-out systems


AWS IoT Button

At the heart of the AWS IoT Button Enterprise Program is the AWS IoT Button. The AWS IoT Button is a 2.4 GHz Wi-Fi device with WPA2-PSK support that has three click types: single click, double click, and long press. Note that a long press click type is sent if the button is pressed for 1.5 seconds or longer. The IoT button has a small LED light with color patterns that indicate the status of the IoT button. A blinking white light signifies that the IoT button is connecting to Wi-Fi and getting an IP address, while a blinking blue light signifies that the button is in wireless access point (AP) mode. The data payload that is sent from the device when pressed contains the device serial number, the battery voltage, and the click type.
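
For reference, the JSON payload published by the button looks something like the following sketch; the serial number and voltage values here are made up:

{
  "serialNumber": "G030JF051234ABCD",
  "batteryVoltage": "2326mV",
  "clickType": "SINGLE"
}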

Currently, there are three ways to get started building your AWS IoT button solution. The first option is to use the AWS IoT Button companion mobile app. The mobile app will create the required AWS IoT resources, including the TLS 1.2 certificates, and create an AWS IoT rule tied to AWS Lambda. Additionally, it will enable the IoT button device, via AWS IoT, to act as an event source that invokes a new AWS Lambda function of your choosing from the Lambda blueprints. You can download the aforementioned mobile apps for Android and iOS below.


The second option is to use the AWS Lambda Blueprint Wizard as an easy way to start using your AWS IoT Button. Like the mobile app, the wizard will create the required AWS IoT resources for you and add an event source to your button that invokes a new Lambda function.

The third option is to follow the step-by-step tutorial in the AWS IoT getting started guide and leverage the AWS IoT console to create these resources manually.

Once you have configured your IoT button successfully and created a simple one-click solution using one of the aforementioned getting started guides, you should be ready to start building your own custom IoT button solution. With the click of a button, your business will be able to build new services for customers, offer new features for existing services, and automate business processes to operate more efficiently.

The basic technical flow of an AWS IoT button solution is as follows:

  • A button is clicked and a secure connection is established with AWS IoT using TLS 1.2
  • The button data payload is sent to the AWS IoT Device Gateway
  • The rules engine evaluates received messages (JSON) published into AWS IoT and performs actions or triggers AWS services based on defined business rules
  • The triggered AWS service executes or the defined action is performed
  • The device state can be read, stored, and set with Device Shadows
  • Mobile and web apps can receive and update data based upon the action

Now that you have general knowledge about the AWS IoT button, we should jump into a technical walk-through of building an AWS IoT button solution.

 

AWS IoT Button Solution Walkthrough

We will dive more deeply into building an AWS IoT Button solution with a quick example of a use case for providing one-click customer service options for a business.

To get started, I will go to the AWS IoT console, register my IoT button as a Thing, and create a Thing type. In the console, I select the Registry and then Things options in the console menu.

The name of my IoT thing in this example will be TEW-AWSIoTButton. If you desire to categorize the IoT things, you can create a Thing type and assign a type to similar IoT ‘things’. I will categorize my IoT thing, TEW-AWSIoTButton, as an IoTButton thing type with a One-click-device attribute key and select the Create thing button.

After my AWS IoT button device, TEW-AWSIoTButton, is registered in the Thing Registry, the next step is to acquire the required X.509 certificate and keys. I will have AWS IoT generate the certificate for this device, but the service also allows you to use your own certificates. Authenticating the connection with the X.509 certificates helps to protect the data exchange between your device and the AWS IoT service.

When the certificates are generated with AWS IoT, it is important that you download and save all of the files created, since the public and private keys will not be available after you leave the download page. Additionally, do not forget to download the root CA for AWS IoT from the link provided on the page with your generated certificates.

The newly created certificate will be inactive; therefore, it is vital that you activate the certificate prior to use. AWS IoT authenticates certificates using the TLS protocol’s client authentication mode. The certificates enable asymmetric keys to be used with devices, and the AWS IoT service will request and validate the certificate’s status and the AWS account against a registry of certificates. The service will challenge for proof of ownership of the private key corresponding to the public key contained in the certificate. The final step in securing the AWS IoT connection to my IoT button is to create and/or attach an IAM policy for authorization.
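
As an aside, the same registration and certificate steps can be scripted. Here is a rough sketch using the AWS SDK for JavaScript; treat the exact parameter names as assumptions to verify against the AWS IoT API reference:

var AWS = require('aws-sdk');
var iot = new AWS.Iot({region: 'us-east-1'});

// Register the thing, categorized with the IoTButton thing type
iot.createThing({
    thingName: 'TEW-AWSIoTButton',
    thingTypeName: 'IoTButton',
    attributePayload: { attributes: { 'One-click-device': 'true' } }
}, function(err) {
    if (err) return console.log(err, err.stack);

    // Generate and activate an X.509 certificate and key pair; save the
    // returned keys immediately, since they cannot be retrieved later
    iot.createKeysAndCertificate({ setAsActive: true }, function(err, cert) {
        if (err) return console.log(err, err.stack);

        // Associate the certificate with the thing; the IoTButtonPolicy
        // described below is attached to the certificate separately
        iot.attachThingPrincipal({
            thingName: 'TEW-AWSIoTButton',
            principal: cert.certificateArn
        }, function(err) {
            if (err) console.log(err, err.stack);
        });
    });
});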

I will choose the Attach a policy button and then select the Create a Policy option in order to build a specific policy for my IoT button. In the Name field of the new IoT policy, I will enter IoTButtonPolicy as the name of this new policy. Since the AWS IoT Button device only supports button presses, our AWS IoT button policy only needs to add publish permissions. For this reason, this policy will only allow the iot:Publish action.


For the Resource ARN of the IoT policy, AWS IoT buttons typically follow the format pattern arn:aws:iot:TheRegion:AWSAccountNumber:topic/iotbutton/ButtonSerialNumber. This means that the Resource ARN for this IoT button policy will be:

I should note that if you are creating an IAM policy for an IoT device that is not an AWS IoT button, the Resource ARN format pattern would be as follows: arn:aws:iot:TheRegion:AWSAccountNumber:topic/YourTopic/OptionalSubTopic/

The created policy for our AWS IoT Button, IoTButtonPolicy, looks as follows:
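
Based on the description above, the policy document looks something like the following sketch; the region, account number, and serial number are placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Publish",
      "Resource": "arn:aws:iot:us-east-1:123456789012:topic/iotbutton/G030JF051234ABCD"
    }
  ]
}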

The next step is to return to the AWS IoT console dashboard, select Security and then Certificates menu options.  I will choose the certificate created in the aforementioned steps.

Then, on the selected certificate page, I will select the Actions dropdown in the top right corner. In order to add the IoTButtonPolicy IAM policy to the certificate, I will click the Attach policy option.


I will repeat all of the steps mentioned above, but this time I will add the TEW-AWSIoTButton thing by selecting the Attach thing option.

All that is left is to add the certificate and private key to the physical AWS IoT button and connect the AWS IoT Button to Wi-Fi in order for the IoT button to be fully functional.

Important to note: for businesses that have signed up to participate in the AWS IoT Button Enterprise Program, all of the aforementioned steps (button logo branding, AWS IoT thing creation, certificate and key creation, and adding certificates to buttons) are completed for them by Amazon and AWS. Again, this is to help make it easier for enterprises to hit the ground running in the development of their desired AWS IoT button solution.

Now, going back to the AWS IoT button used in our example, I will connect the button to Wi-Fi by holding the button until the LED blinks blue; this means that the device has gone into wireless access point (AP) mode.

In order to provide internet connectivity to the IoT button and start configuring the device’s connection to AWS IoT, I will connect to the button’s Wi-Fi network, whose name should start with Button ConfigureMe. The first time the connection is made to the button’s Wi-Fi, a password will be required; enter the last 8 characters of the device serial number shown on the back of the physical AWS IoT button device.

The AWS IoT button is now configured, and we are ready to build a system around it. The next step will be to add the actions that will be performed when the IoT button is pressed. This brings us to the AWS IoT Rules engine, which is used to analyze the IoT device data payload coming from the MQTT topic stream and/or Device Shadow and to trigger AWS service actions. We will set up rules to perform varying actions when different types of button presses are detected.

Our AWS IoT button solution will be a simple one: we will set up two AWS IoT rules to respond to the IoT button being clicked and the button’s payload being sent to AWS IoT. In our scenario, a single button click will represent that a request is being sent by a customer to a fictional organization’s customer service agent. A double click, however, will represent that a text will be sent containing a customer’s fictional current account status.

The first AWS IoT rule created will receive the IoT button payload and connect directly to Amazon SNS to send an email, but only if the rule condition that the button click type is SINGLE is fulfilled. The second AWS IoT rule created will invoke a Lambda function that will send a text message containing the customer account status, but only if the rule condition that the button click type is DOUBLE is fulfilled.

In order to create the AWS IoT rule that will send an email to subscribers of an SNS topic when a customer requests a customer service agent’s help, we will go to Amazon SNS and create an SNS topic.

I will create an email subscription to the topic with the fictional subscribed customer service email, which in this case is just my email address. Of course, there could be several customer service representatives subscribed to the topic in order to receive emails for customer assistance requests.

Now returning to the AWS IoT console, I will select the Rules menu and choose the Create rule option. I first provide a name and description for the rule.

Next, I select the SQL version to be used for the AWS IoT rules engine. I select the latest SQL version; however, if I did not choose to set a version, the default version of 2015-10-08 would be used. The rules engine uses a SQL-like syntax with statements containing SELECT, FROM, and WHERE clauses. I want to return a literal string for the message, which is not a part of the IoT button data payload. I also want to return the button serial number as the accountnum, which is also not a part of the payload. Since the latest version, 2016-03-23, supports literal objects, I will be able to send a custom payload to Amazon SNS.
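
Putting that together, the rule query statement looks something like the following sketch; the literal message text is my own, and the iotbutton/+ topic filter assumes the default button topic pattern described earlier:

SELECT 'Customer assistance requested!' AS message,
       serialNumber AS accountnum
FROM 'iotbutton/+'
WHERE clickType = 'SINGLE'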

I have created the rule; all that is left is to add a rule action to perform when the rule is triggered. As I mentioned above, an email should be sent to customer service representatives when this rule is triggered by a single IoT button press. Therefore, my rule action will be Send a message as an SNS push notification, targeting the SNS topic that I created to send an email to our fictional customer service reps (aka me). Remember that an IAM role is required to provide access to SNS resources; if you are using the console, you have the option to create a new role or update an existing role to provide the correct permissions. Also, since I am sending a custom message and pushing to SNS, I select the Message format type to be RAW.

Our rule has been created; now all that is left is to test that an email is successfully sent when the AWS IoT button is pressed once, and the data payload therefore has a click type of SINGLE.

With a single press of our AWS IoT Button, the custom message is published to the SNS topic, and the email shown below was sent to the subscribed customer service agents’ email addresses; in this example, to my email address.


Next we will create the AWS IoT rule that will send a text, via Lambda and an SNS topic, for the scenario in which customers request their account status by pressing the IoT Button twice. We will start by creating an AWS IoT rule with an AWS Lambda action. To create this IoT rule, we first need to create a Lambda function and the SNS topic with a text-based SNS subscription.

First, I will go to the Amazon SNS console and create an SNS topic. After the topic is created, I will create an SNS text subscription for our SNS topic and add a number that will receive the text messages. I will then copy the SNS topic ARN for use in my Lambda function. Please note that I am creating this SNS topic in a different region than the previously created SNS topic, in order to use a region with support for sending SMS via SNS. In the Lambda function, I will need to ensure the correct region for the SNS topic is used by including the region as a parameter of the constructor of the SNS object. The created SNS topic, aws-iot-button-topic-text, is shown below.


We will now go to the AWS Lambda console and create a Lambda function with an AWS IoT trigger, an IoT Type of IoT Button, and the requested Device Serial Number set to the serial number on the back of our AWS IoT Button. There is no need to generate the certificate and keys in this step because the AWS IoT button is already configured with certificates and keys for secure communication with AWS IoT.

The next step is to create the Lambda function, IoTNotifyByText, with the following code, which will receive the IoT button data payload and create a message to publish to Amazon SNS.

'use strict';

console.log('Loading function');
var AWS = require("aws-sdk");
var sns = new AWS.SNS({region: 'us-east-1'});

exports.handler = (event, context, callback) => {
    // Serialize the incoming IoT button event for logging
    var iotPayload = JSON.stringify(event, null, 2);

    // Create a text message from the IoT payload (accountnum is the button
    // serial number returned by the IoT rule's SELECT statement)
    var snsMessage = "Attention: Customer Info for Account #: " + event.accountnum +
        " Account Status: In Good Standing Balance is: 1234.56";
    
    // Log payload and SNS message string to the console and for CloudWatch Logs 
    console.log("Received AWS IoT payload:", iotPayload);
    console.log("Message to send: " + snsMessage);
    
    // Create params for SNS publish using SNS Topic created for AWS IoT button
    // Populate the parameters for the publish operation using required JSON format
    // - Message : message text 
    // - TopicArn : the ARN of the Amazon SNS topic  
    var params = {
        Message: snsMessage,
        TopicArn: "arn:aws:sns:us-east-1:xxxxxxxxxxxx:aws-iot-button-topic-text"
     };
     
     sns.publish(params, context.done);
};

All that is left for us to do is to alter the AWS IoT rule that was automatically created when we made a Lambda function with an AWS IoT trigger. Therefore, we will go to the AWS IoT console and select the Rules menu option. We will find and select the IoT button rule created by Lambda, which usually has a name with a suffix that is equal to the IoT button device serial number.


Once the rule is selected, we will choose the Edit option beside the Rule query statement section.

We change the Select statement to return the serial number as the accountnum and click the Update button to save changes to the AWS IoT rule.
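
The updated rule query statement would be along these lines (a sketch, again assuming the default iotbutton topic pattern):

SELECT serialNumber AS accountnum
FROM 'iotbutton/+'
WHERE clickType = 'DOUBLE'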

Time to test. I click the IoT button twice and wait for the green LED light to appear, confirming a successful connection was made and a message was published to AWS IoT. After a few seconds, a text message is received on my phone with the fictitious customer account information.


This was a simple example of how a business could leverage the AWS IoT Button in order to build business solutions for their customers. With the new AWS IoT Button Enterprise Program, which helps businesses obtain the quantities of AWS IoT Buttons needed and provides AWS IoT service pre-provisioning and deployment support, businesses can now easily get started in building their own customized IoT solutions.

Available Now

The original first-generation AWS IoT Button is currently available on Amazon.com, and the second-generation AWS IoT Button will be generally available in February. The main difference between the IoT buttons is the amount of battery life and/or clicks available for the button. Please note that right now, if you purchase the original AWS IoT button, you will receive $20 in AWS credits when you register.

Businesses can sign up today for the AWS IoT Button Enterprise Program, currently in Limited Preview. This program is designed to enable businesses to expand their existing applications or build new IoT capabilities with the cloud and a click of an IoT button device. You can read more about the AWS IoT button and learn more about building solutions with a programmable IoT button on the AWS IoT Button product page. You can also dive deeper into the AWS IoT service by visiting the AWS IoT developer guide, the AWS IoT Device SDK documentation, and/or the AWS Internet of Things Blog.


Tara

AWS Lambda – A Look Back at 2016

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/aws-lambda-a-look-back-at-2016/

2016 was an exciting year for AWS Lambda, Amazon API Gateway and serverless compute technology, to say the least. But just in case you have been hiding away and haven’t heard of serverless computing with AWS Lambda and Amazon API Gateway, let me introduce these great services to you.  AWS Lambda lets you run code without provisioning or managing servers, making it a serverless compute service that is event-driven and allows developers to bring their functions to the cloud easily for virtually any type of application or backend.  Amazon API Gateway helps you quickly build highly scalable, secure, and robust APIs at scale and provides the ability to maintain and monitor created APIs.

With the momentum of serverless in 2016, of course, the year had to end with a bang as the AWS team launched some powerful service features at re:Invent to make it even easier to build serverless solutions.  These features include:

Since Jeff has already introduced most of the aforementioned new service features for building distributed applications and microservices, like Step Functions, let’s walk through the last four new features not yet discussed, using a common serverless use case example: real-time stream processing. In our walk-through of the stream processing use case, we will implement a Dead Letter Queue for notifications of errors that may come from the Lambda function processing a stream of data, and we will take an existing Lambda function written in Node.js to process the stream and rewrite it using the C# language. We will then build an example of the monetization of a Lambda-backed API using API Gateway’s integration with AWS Marketplace. This will be exciting, so let’s get started.
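
As a quick taste of the Dead Letter Queue feature before we dive in, attaching a DLQ to a function is a one-call configuration change. The sketch below uses the AWS SDK for JavaScript with a placeholder SNS topic ARN (an SQS queue can be targeted the same way); note that dead letter queues capture failures from asynchronous invocations:

var AWS = require('aws-sdk');
var lambda = new AWS.Lambda({region: 'us-east-1'});

lambda.updateFunctionConfiguration({
    FunctionName: 'DevDayStreamProcessor',
    DeadLetterConfig: {
        // Placeholder ARN; failed asynchronous invocation events are sent here
        TargetArn: 'arn:aws:sns:us-east-1:123456789012:stream-processor-dlq'
    }
}, function(err, data) {
    if (err) console.log(err, err.stack);
    else     console.log('DLQ configured');
});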

During the AWS Developer Days in San Francisco and Austin, I presented an example of leveraging AWS Lambda for real-time stream processing by building a demo showcasing a streaming solution with the Twitter Streaming APIs. I will build upon this example to demonstrate the power of Dead Letter Queues (DLQ), C# support, the API Gateway monetization features, and the open source template for the API Gateway Developer Portal. In the demo, a console or web application streams tweets gathered from the Twitter Streaming API that contain the keywords ‘awscloud’ and/or ‘serverless’. Those tweets are sent in real time to Amazon Kinesis Streams, where Lambda detects the new records and processes the stream batch by writing the tweets to the NoSQL database Amazon DynamoDB.

Now that we understand the real-time streaming process demo’s workflow, let’s take a deeper look at the Lambda function that processes the batch records from Kinesis.  First, you will notice below that the Lambda function, DevDayStreamProcessor, has an event source or trigger that is a Kinesis stream named DevDay2016Stream with a Batch size of 100.  Our Lambda function will poll the stream periodically for new records and automatically read and process batches of records, in this case, the tweets detected on the stream.

Now we will examine our Lambda function code, which is written in Node.js 4.3. The section of the Lambda function shown below loops through the batch of tweet records from our Kinesis stream, parses each record, and writes the desired tweet information into an array of JSON data. The array of JSON tweet items is passed to the function ddbItemsWrite, which is defined outside of our Lambda handler.

'use strict';
console.log('Loading function');

var AWS = require('aws-sdk');
var ddbTable = 'TwitterStream';
var dynamoDBClient = new AWS.DynamoDB.DocumentClient();
var itemNum = 0;

exports.handler = (event, context, callback) => {
    // Collect the items for this invocation only
    var dataItemsBatch = [];

    event.Records.forEach((record) => {
        // Kinesis data is base64 encoded so decode here
        const payload = new Buffer(record.kinesis.data, 'base64').toString('ascii');
        console.log('Decoded payload:', payload);

        // Replace control characters that would otherwise break JSON.parse
        var data = payload.replace(/[\u0000-\u0019]+/g, ' ');
        var tweetData;
        try {
            tweetData = JSON.parse(data);
        } catch (err) {
            // Log and skip malformed records instead of continuing with stale data
            console.log('Skipping malformed record:', err);
            return;
        }

        var timestamp = '' + new Date().getTime();
        itemNum = itemNum + 1;

        var ddbItem = {
            PutRequest: {
                Item: {
                    TwitterID: tweetData.id.toString(),
                    TwitterUser: tweetData.username.toString(),
                    TwitterUserPic: tweetData.pic,
                    TwitterTime: new Date(tweetData.time.replace(/( \+)/, ' UTC$1')).toLocaleString(),
                    Tweet: tweetData.text,
                    TweetTopic: tweetData.topic,
                    Tags: (tweetData.hashtags) ? tweetData.hashtags : ' ',
                    Location: (tweetData.loc) ? tweetData.loc : ' ',
                    Country: (tweetData.country) ? tweetData.country : ' ',
                    TimeStamp: timestamp,
                    RecordNum: itemNum
                }
            }
        };

        dataItemsBatch.push(ddbItem);
    });

    // Map the items to the target table and hand them to the batch writer
    var twitterItems = {};
    twitterItems[ddbTable] = dataItemsBatch;
    ddbItemsWrite(twitterItems, 0, context, callback);
};

The ddbItemsWrite function shown below will take the array of JSON tweet records processed from the Kinesis stream, and write the records multiple items at a time to our DynamoDB table using batch operations. This function leverages the DynamoDB best practice of retrying unprocessed items by implementing an exponential backoff algorithm to prevent write request failures due to throttling on the individual tables.

function ddbItemsWrite(items, retries, ddbContext, ddbCallback) {
    dynamoDBClient.batchWrite({ RequestItems: items }, function(err, data) {
        if (err) {
            console.log('DDB call failed: ' + err, err.stack);
            ddbCallback(err, err.stack);
        } else if (Object.keys(data.UnprocessedItems).length) {
            // Retry throttled items with exponential backoff, capping the delay
            // so that the retry still fits within the remaining execution time
            console.log('Unprocessed items remain, retrying.');
            var delay = Math.min(Math.pow(2, retries) * 100, ddbContext.getRemainingTimeInMillis() - 200);
            setTimeout(function() { ddbItemsWrite(data.UnprocessedItems, retries + 1, ddbContext, ddbCallback); }, delay);
        } else {
            console.log('Completed successfully');
            ddbCallback(null, 'Success');
        }
    });
}

Currently, this Lambda function works as expected and will successfully process tweets captured in Kinesis from the Twitter Streaming API. However, this function has a flaw that will cause an error when processing batch write requests to our DynamoDB table: the current code does not account for the fact that a single call to the DynamoDB batchWrite function can include at most 25 write (put) requests, up to 16 MB of data.  Unless the code is changed so that the ddbItemsWrite function handles batches of 25, or the handler function groups items into batches of 25 requests before calling ddbItemsWrite, a validation exception will be thrown whenever more than 25 tweet items are sent in a single batch.  This is a great example of a bug that is not easily detected in small-scale testing scenarios yet will cause failures under production load.
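If we wanted to correct the flaw directly, a minimal sketch of the second approach might group the accumulated items into chunks of 25 before calling ddbItemsWrite (shown here for reference only; we will intentionally leave the bug in place so that we have a failure to demonstrate Dead Letter Queues below):

// Hypothetical fix, shown for reference only: DynamoDB batchWrite accepts at
// most 25 put requests per call, so split the accumulated items into chunks
// of 25 and only report success once every chunk has been written.
var BATCH_LIMIT = 25;
var pending = 0;

if (dataItemsBatch.length === 0) return callback(null, 'No items');

for (var i = 0; i < dataItemsBatch.length; i += BATCH_LIMIT) {
    var twitterItems = {};
    twitterItems[ddbTable] = dataItemsBatch.slice(i, i + BATCH_LIMIT);
    pending++;
    ddbItemsWrite(twitterItems, 0, context, function(err) {
        if (err) return callback(err);
        if (--pending === 0) callback(null, 'Success');
    });
}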

 

Dead Letter Queues

Now that we are aware of a condition that will cause our Lambda function to throw an exception while processing records, we have a first-rate scenario for leveraging Dead Letter Queues (DLQ).

Since AWS Lambda DLQ functionality is only available for asynchronous event sources like Amazon S3, Amazon SNS, AWS IoT, or direct asynchronous invocations, and not for streaming event sources such as Amazon Kinesis or Amazon DynamoDB streams, our first step is to break this Lambda function into two functions.  The first Lambda function will handle the processing of the Kinesis stream, and the second Lambda function will take the data processed by the first function and write the tweet information to DynamoDB.  We will then set up our DLQ on the second Lambda function for the error that will occur when writing the batch of tweets to DynamoDB, as noted above.

We have two options when setting up a target for our DLQ: an Amazon SNS topic or an Amazon SQS queue.  In this walk-through, we will opt for an Amazon SQS queue.  Therefore, my first step in using DLQ is to create an SQS Standard queue.  A Standard queue offers high throughput; each message is delivered at least once (another copy of the message may occasionally be delivered as well), and messages may arrive in a different order than the one in which they were sent.  You can learn more about creating SQS queues and queue types in the Amazon SQS documentation.
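For those who prefer to script this step, here is a minimal sketch (region assumed) that creates the queue with the AWS SDK for JavaScript and looks up its ARN:

// A sketch: create the Standard queue that will serve as the DLQ target,
// then look up its ARN (needed later for the Lambda DLQ configuration and
// the execution role policy).
var AWS = require('aws-sdk');
var sqs = new AWS.SQS({region: 'us-east-1'});

sqs.createQueue({QueueName: 'StreamDemoDLQ'}, function(err, created) {
    if (err) return console.log(err);
    sqs.getQueueAttributes({
        QueueUrl: created.QueueUrl,
        AttributeNames: ['QueueArn']
    }, function(err, attrs) {
        if (err) return console.log(err);
        console.log('DLQ ARN:', attrs.Attributes.QueueArn);
    });
});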

Once my queue, StreamDemoDLQ, is created, I will grab the ARN from the Details tab of this selected queue. If I am not using the console to designate the DLQ resource for this function, I will need the ARN so that my Lambda function can identify this SQS queue as the DLQ target for error and event failure notifications. Additionally, I will use the ARN to add permissions to my Lambda execution role policy in order to access this SQS queue.

I will now return to my Lambda function and select the Configuration tab and expand the Advanced settings section. I will select SQS in the DLQ Resource field and select my StreamDemoDLQ queue in the SQS Queue field dropdown.
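Outside of the console, a sketch of the equivalent configuration using the UpdateFunctionConfiguration API might look like this (the function name below is a hypothetical stand-in for our second, DynamoDB-writing function, and the queue ARN is a placeholder):

// A sketch (function name and queue ARN assumed): designate the SQS queue
// as the function's DLQ target without using the console.
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda({region: 'us-east-1'});

lambda.updateFunctionConfiguration({
    FunctionName: 'DDBStreamWriter',   // hypothetical second function
    DeadLetterConfig: {
        TargetArn: 'arn:aws:sqs:us-east-1:123456789012:StreamDemoDLQ'
    }
}, function(err, data) {
    if (err) console.log(err); else console.log(data.FunctionArn);
});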

Remember, the execution role for the Lambda function must explicitly provide sqs:SendMessage permissions in order to successfully send messages to your SQS DLQ.  Therefore, I ensured that my Lambda role, lambda_kinesis_role, has the following IAM policy for SQS permissions.
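As a sketch (account ID and policy name assumed), the permission could be attached to the role like this:

// A sketch: attach an inline policy granting sqs:SendMessage on the DLQ
// to the Lambda execution role.
var AWS = require('aws-sdk');
var iam = new AWS.IAM();

var dlqPolicy = {
    Version: '2012-10-17',
    Statement: [{
        Effect: 'Allow',
        Action: 'sqs:SendMessage',
        Resource: 'arn:aws:sqs:us-east-1:123456789012:StreamDemoDLQ'
    }]
};

iam.putRolePolicy({
    RoleName: 'lambda_kinesis_role',
    PolicyName: 'StreamDemoDLQSendMessage',   // hypothetical policy name
    PolicyDocument: JSON.stringify(dlqPolicy)
}, function(err) {
    if (err) console.log(err); else console.log('Policy attached.');
});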

 

We have now successfully configured a Dead Letter Queue for our Lambda function using Amazon SQS. To learn more about Dead Letter Queues in Lambda, read the Troubleshooting and Monitoring section of the AWS Lambda Developer Guide and check out the AWS Compute Blog post on Dead Letter Queues.

C# Support

As I mentioned earlier, another very exciting feature added to Lambda during AWS re:Invent was support for the C# language via the open source .NET Core 1.0 platform.  Since the Lambda console does not yet offer editing for compiled languages, in order to author a C# Lambda function you can use tooling in Visual Studio with the AWS Toolkit, Yeoman, and/or the .NET CLI.  To deploy Lambda functions written in C#, you can use the Lambda plugin in the AWS Toolkit for Visual Studio or create a deployment package with the .NET Core command line.

A C# Lambda function handler should be defined as an instance or static method in a class. There are two handler function parameters: the first is the input type, which is the event data, and the second is the Lambda context object of type ILambdaContext. The event data input object types for AWS services include the following:

  • Amazon.Lambda.APIGatewayEvents
  • Amazon.Lambda.CognitoEvents
  • Amazon.Lambda.ConfigEvents
  • Amazon.Lambda.DynamoDBEvents
  • Amazon.Lambda.KinesisEvents
  • Amazon.Lambda.S3Events
  • Amazon.Lambda.SNSEvents

Now that we have discussed more detail around C# support in Lambda, let’s rewrite our DevDayStreamProcessor Lambda function in the C# language. For this example, I will use the Visual Studio IDE to write the Lambda function, and additionally take advantage of the AWS Lambda Visual Studio plugin to deploy the function. Remember that in order to use the AWS Toolkit for Visual Studio with Lambda, you will need Visual Studio 2015 Update 3 and the .NET Core tools. You can read more about installing Visual Studio 2015 Update 3 and .NET Core here.

To create the C# function using Visual Studio, I start a New Project, select AWS Lambda Project (.NET Core) and name it ServerlessStreamProcessor.

What’s really cool about taking advantage of the AWS Toolkit for Visual Studio to author this function is that inside of Visual Studio I can use Lambda blueprints to get started, much as I would in the Lambda console.  Therefore, in order to replicate the DevDayStreamProcessor in C#, I will select the Simple Kinesis Function blueprint.

It should be noted that when writing Lambda functions in C#, there is no need to mark the class declaration or the target handler function as a Lambda function. Additionally, when writing CloudWatch logs you can use the standard C# Console class WriteLine function or the LogLine function found on the ILambdaContext interface. With the template for accessing the Kinesis stream in place, I finish writing the C# Lambda function, ServerlessStreamProcessor, utilizing the same variable names as in the Node.js code in DevDayStreamProcessor. Please note the C# Lambda handler function below.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.KinesisEvents;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.DataModel;
using Newtonsoft.Json.Linq;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializerAttribute(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace ServerlessStreamProcessor
{
    public class LambdaTwitterStream
    {
        string twitterID, timeStamp;
        int itemNum = 0;

        private static AmazonDynamoDBClient dynamoDBClient = new AmazonDynamoDBClient();
        List<TwitterItem> dataItemsBatch = new List<TwitterItem>();

        public async Task FunctionHandler(KinesisEvent kinesisEvent, ILambdaContext context)
        {
            DynamoDBContext dbContext = new DynamoDBContext(dynamoDBClient);
            context.Logger.LogLine($"Beginning to process {kinesisEvent.Records.Count} records...");

            // Clear the batch so that items do not accumulate across warm invocations
            dataItemsBatch.Clear();

            foreach (var record in kinesisEvent.Records)
            {
                context.Logger.LogLine($"Event ID: {record.EventId}");
                context.Logger.LogLine($"Event Name: {record.EventName}");

                // Kinesis data is base64 encoded so decode here
                string tweetData = GetRecordContents(record.Kinesis);
                context.Logger.LogLine($"Decoded Payload: {tweetData}");
                JObject twitterObj = JObject.Parse(tweetData);

                twitterID = twitterObj["id"].ToString();
                // Epoch milliseconds, matching the timestamp used in the Node.js version
                timeStamp = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds().ToString();
                itemNum++;
                context.Logger.LogLine($"Twitter ID is: {twitterID}");
                context.Logger.LogLine($"Record number: {itemNum}");

                TwitterItem ddbItem = new TwitterItem()
                {
                    TwitterID = twitterID,
                    TwitterUser = twitterObj["username"].ToString(),
                    TwitterUserPic = twitterObj["pic"].ToString(),
                    TwitterTime = DateTime.Parse(twitterObj["time"].ToString()).ToUniversalTime().ToString(),
                    Tweet = twitterObj["text"].ToString(),
                    TweetTopic = twitterObj["topic"].ToString(),
                    Tags = twitterObj["hashtags"] != null ? twitterObj["hashtags"].ToString() : String.Empty,
                    Location = twitterObj["loc"] != null ? twitterObj["loc"].ToString() : String.Empty,
                    Country = twitterObj["country"] != null ? twitterObj["country"].ToString() : String.Empty,
                    TimeStamp = timeStamp,
                    RecordNum = itemNum
                };

                dataItemsBatch.Add(ddbItem);
            }

            context.Logger.LogLine(JArray.FromObject(dataItemsBatch).ToString());
            // Await the batch write so the function does not return before it completes
            await ddbItemsWrite(dataItemsBatch, 0, dbContext, context);
            context.Logger.LogLine("Stream processing complete.");
        }

There are only a few differences to note between our Kinesis stream processor written in C# and our original Node.js code.  Since the input parameter type supported by default in C# Lambda functions is the System.IO.Stream type, the Kinesis base64 string is decoded by using a StreamReader with ASCII encoding in a blueprint-provided function, GetRecordContents.

 

private string GetRecordContents(KinesisEvent.Record streamRecord)
{
    using (var reader = new StreamReader(streamRecord.Data, Encoding.ASCII))
    {
        return reader.ReadToEnd();
    }
}

The other thing to note is that in order to write the tweet data to the DynamoDB table, I added the AWS .NET SDK NuGet package for DynamoDB, AWSSDK.DynamoDBv2, to the Lambda function project via the NuGet package manager within Visual Studio.  I also created a .NET data object, TwitterItem, to map to the data being stored in the DynamoDB table. Using the object persistence model, the AWS .NET SDK’s higher-level programming interface for DynamoDB, I created a collection of TwitterItem objects to be written via the BatchWrite object class in our ddbItemsWrite C# function.

private async Task ddbItemsWrite(List<TwitterItem> items, int retries, DynamoDBContext ddbContext, ILambdaContext context)
{
    // Build a batch write request from the collection of TwitterItem objects;
    // the object persistence model handles the underlying BatchWriteItem calls.
    BatchWrite<TwitterItem> twitterStreamBatchWrite = ddbContext.CreateBatchWrite<TwitterItem>();

    try
    {
        twitterStreamBatchWrite.AddPutItems(items);
        await twitterStreamBatchWrite.ExecuteAsync();
    }
    catch (Exception ex)
    {
        context.Logger.LogLine($"DDB call failed: {ex.Source} ");
        context.Logger.LogLine($"Exception: {ex.Message}");
        context.Logger.LogLine($"Exception Stacktrace: {ex.StackTrace}");
    }
}

Another benefit of using AWS Toolkit for Visual Studio to author my C# Lambda function is that I can deploy my Lambda function directly to AWS with a single click.  Selecting my project name in the Solution Explorer and performing a right-click, I get a menu option, Publish to AWS Lambda, which brings up a menu for information to include about my Lambda function for deployment to AWS.

It is important to note that the handler function signature follows the format Assembly :: Namespace.ClassName :: Method; therefore, the signature of our C# Lambda function shown here is: ServerlessStreamProcessor :: ServerlessStreamProcessor.LambdaTwitterStream :: FunctionHandler.  We provide this information to the Upload to AWS Lambda dialog box and select Next to assign a role for the function.

Upon completion, you can test in the Lambda console or in Visual Studio with the AWS Toolkit-provided plugin (shown below), using the sample data of the triggering event source for an iterative approach to developing the Lambda function.

You can learn more about authoring AWS Lambda functions using the C# Language in the AWS Lambda developer guide or by reading the post announcing C# Support on the Compute Blog.

API Gateway Monetization and Developer Portal

If you have been following the microservices momentum, you may be aware of an architectural pattern that calls for using smart endpoints and/or an API gateway via REST APIs to manage access and exposure of the individual services that make up a microservices solution.  Amazon API Gateway enables creation and management of RESTful APIs to expose AWS Lambda functions, external HTTP endpoints, and other AWS services.  In addition, Amazon API Gateway allows clients and external developers to access deployed APIs via the HTTP protocol or a platform- or language-targeted SDK.

With the introduction of SaaS Subscriptions on AWS Marketplace and the API Gateway integration with the AWS Marketplace, you can now monetize your APIs by allowing customers to directly consume the APIs you create with API Gateway in the AWS Marketplace.  AWS customers can subscribe to, and be billed for, the APIs published on the marketplace with their existing AWS account.  With this integration, it is easy to get started.

To get started, you must ensure that you have enabled the Usage Plan feature in Amazon API Gateway.

Once enabled, the next step is to create a usage plan, enable throttling (if desired) with targeted rate and burst request thresholds, and finally enable quotas (if you choose) by providing a targeted request quota per set timeframe.

Next, we would choose the APIs and related stage(s) that we wish to associate with the usage plan. Please note that this is an optional step, as you can opt not to associate a specific API with your usage plan.

All that is left to do is add or create an API key for the usage plan.  Again, it should be noted that this is also an optional step in creating your usage plan.
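As a sketch of these same steps in code (the API ID, stage, key ID, and limits below are assumed placeholders, not values from this demo):

// A sketch: create the usage plan with throttling and a quota, associate an
// API stage, and attach an existing API key.
var AWS = require('aws-sdk');
var apigateway = new AWS.APIGateway({region: 'us-east-1'});

apigateway.createUsagePlan({
    name: 'StreamingPlan',
    throttle: {rateLimit: 100, burstLimit: 200},       // requests per second / burst
    quota: {limit: 100000, period: 'MONTH'},           // request quota per timeframe
    apiStages: [{apiId: 'a1b2c3d4e5', stage: 'prod'}]  // optional API/stage association
}, function(err, plan) {
    if (err) return console.log(err);
    apigateway.createUsagePlanKey({
        usagePlanId: plan.id,
        keyId: 'myapikeyid123',   // an existing API key (optional step)
        keyType: 'API_KEY'
    }, function(err) {
        if (err) console.log(err); else console.log('Usage plan ready:', plan.id);
    });
});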

Now that we have our usage plan, StreamingPlan, we are ready for the next step in preparation for selling our API on the marketplace. You have the option to create multiple usage plans with varying APIs and limits, and sell these plans as differentiated API products on AWS Marketplace.

In order to enable customers to buy our new API product, however, the AWS Marketplace requires that each API product has an external developer portal to handle subscription requests, provide API information and details, and allow for the management of usage.

This customer need for an external developer portal for the marketplace gave rise to the new open source API Gateway developer portal serverless web application implementation.  The goal of the API Gateway developer portal project is to allow customers to follow a few easy steps to create a serverless web application that lists a catalog of your APIs built with API Gateway while allowing for developer signups.

The API Gateway developer portal was built upon AWS Serverless Express, an open source library published by AWS that helps you use AWS Lambda and Amazon API Gateway to build web applications/services with the Node.js Express framework.  Additionally, the API Gateway developer portal application uses an AWS SAM (Serverless Application Model) template to deploy its serverless resources.  AWS SAM is a simplified CloudFormation template format and specification that allows easier management and deployment of serverless applications on AWS.

To build your developer portal using the API Gateway portal, you would start by cloning the aws-api-gateway-developer-portal project from GitHub.

Assuming you have the latest version of the AWS CLI and Node.js installed, you would set up the developer portal by running “npm run setup” on the command line for Mac and Linux OS users. Windows users would run “npm run win-setup” on the command line to set up the developer portal.

The result is a functional sample developer portal website running on S3 that you can customize in order to create your own developer portal for your APIs.

The frontend of the sample developer portal website is built with the React JavaScript library, and the backend is an AWS Lambda function running using the aws-serverless-express library. Additionally, a Lambda function with an SNS event source was created as a listener for notifications when customers subscribe or unsubscribe to your API via the AWS Marketplace console.  You can learn more about the steps to build, customize, and deploy your API Gateway developer portal web application with this reference project by visiting the AWS Compute blog post, which discusses the architecture and implementation in more detail.

 

The next key step in monetizing our API is establishing an account on the AWS Marketplace.  If an account is not already established, registering simply requires verifying that you meet the prerequisites provided in the AWS Marketplace Seller Guide and completing a seller registration form on the AWS Marketplace Management Portal.  You can see a snapshot of the start of the seller registration form below.

To list the API, you would fill out a product load form describing the API, establish the pricing for the API, and provide the IDs of the AWS accounts that will test the API subscription process.  Completing this form also requires you to submit the URL for your API developer portal.

When your seller registration is complete, you will be supplied with an AWS Marketplace product code.  You will need to associate your marketplace product code with your API usage plan.  In order to complete this step, you simply log into the API Gateway console, go to your API usage plan, go to the Marketplace tab, and enter your product code. This tells API Gateway to send measurement data to AWS Marketplace when your API is used.
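A sketch of the same association done programmatically (the usage plan ID and product code below are placeholders):

// A sketch: associate the AWS Marketplace product code with the usage plan
// so that API Gateway reports measurement data to the Marketplace.
var AWS = require('aws-sdk');
var apigateway = new AWS.APIGateway({region: 'us-east-1'});

apigateway.updateUsagePlan({
    usagePlanId: 'abc123',
    patchOperations: [{
        op: 'replace',
        path: '/productCode',
        value: 'example1product2code3'
    }]
}, function(err, data) {
    if (err) console.log(err); else console.log('Product code set on', data.name);
});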

With your Amazon API Gateway managed API packaged into a usage plan, the accompanying API developer portal created, seller account registration completed, and product code associated with the API usage plan, we are now ready to monetize our API on the AWS Marketplace.

Learn more about monetizing your APIs created with API Gateway by checking out the related blog post and reviewing the API Gateway developer guide documentation.

Summary

As you can see, the AWS teams were busy in 2016 working to make the experience of creating and deploying serverless architectures easier for customers, as well as providing mechanisms for customers to monetize their API Gateway managed APIs.

Visit the product documentation for AWS Lambda and Amazon API Gateway to learn more about these services and all the newly released features.

Tara

New – AWS OpsWorks for Chef Automate

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-opsworks-for-chef-automate/

AWS OpsWorks helps you to configure and run applications using Chef. You use a Domain Specific Language (DSL) to write cookbooks that define your application’s architecture and the configuration of each component. The Chef server is an essential part of the configuration process. It stores all of the cookbooks and tracks state information for each of the instances (nodes in Chef terminology).

Because the Chef server is in the critical path when newly launched instances are configured, it must be reliable. Many OpsWorks and Chef users install and maintain this important architectural component themselves. In production-scale environments, this leaves them to handle backups, restores, version upgrades, and so forth.

New AWS OpsWorks for Chef Automate
Early this month we launched AWS OpsWorks for Chef Automate from the AWS re:Invent stage. You can launch the Chef Automate server with just 3 clicks and start using it within minutes. You can use community cookbooks from Chef Supermarket and community tools such as Test Kitchen and Knife.

You can use Chef Automate to manage your infrastructure throughout the life-cycle of your application’s infrastructure. For example, newly launched EC2 instances can automatically connect to the Chef server and run a specified recipe by using an unattended association script (read Adding Nodes Automatically in AWS OpsWorks for Chef Automate to learn more). The registration script can be used to register EC2 instances created dynamically through an Auto Scaling Group and to register on-premises servers.

Take a Look
Let’s launch a Chef Automate server from the OpsWorks Console. Click on Go to OpsWorks for Chef Automate to get started.

Click on Create Chef Automate server, give your server a name, choose a region, and select a suitable EC2 instance type:

Choose one of your SSH key pairs, or opt out of SSH:

Finally, configure your network (VPC), IAM, maintenance window, and backup settings:

Click on Next, review your settings, and then click on Launch! The launch process takes less than 20 minutes. During that time you can download the sign-in credentials for your Chef Automate dashboard along with a Starter Kit:

You can see all of your Chef Automate servers at a glance:

Click on the server name (BorkBorkBork here), and then on Open Chef Automate dashboard, then enter your credentials to log in:

And here’s the dashboard:

You can see and manage your nodes:

Manage your workflows:

And much more!

Behind the scenes, the launch process invokes an AWS CloudFormation template. The template creates an EC2 instance, an Elastic IP address, and a Security Group.

Available Now
You can launch AWS OpsWorks for Chef Automate today in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) Regions. Pricing is based on the number of nodes and the number of hours that they are connected to the server; see the Chef Automate Pricing page for more info. As part of the AWS Free Tier, you can use up to 10 nodes at no charge for 12 months.

Jeff;

SAML Identity Federation: Follow-Up Questions, Materials, Guides, and Templates from an AWS re:Invent 2016 Workshop (SEC306)

Post Syndicated from Quint Van Deman original https://aws.amazon.com/blogs/security/saml-identity-federation-follow-up-questions-materials-guides-and-templates-from-an-aws-reinvent-2016-workshop-sec306/

As part of the re:Source Mini Con for Security Services at AWS re:Invent 2016, we conducted a workshop focused on Security Assertion Markup Language (SAML) identity federation: Choose Your Own SAML Adventure: A Self-Directed Journey to AWS Identity Federation Mastery. As part of this workshop, attendees were able to submit their own federation-focused questions to a panel of AWS experts. In this post, I share the questions and answers from that workshop because this information can benefit any AWS customer interested in identity federation.

I have also made available the full set of workshop materials, lab guides, and AWS CloudFormation templates. I encourage you to use these materials to enrich your exploration of SAML for use with AWS.

Q: SAML assertions are limited to 50,000 characters. We often hit this limit by being in too many groups. What can AWS do to resolve this size-limit problem?

A: Because the SAML assertion is ultimately part of an API call, an upper bound must be in place for the assertion size.

On the AWS side, your AWS solution architect can log a feature request on your behalf to increase the maximum size of the assertion in a future release. The AWS service teams use these feature requests, in conjunction with other avenues of customer feedback, to plan and prioritize the features they deliver. To facilitate this process you need two things: the proposed higher value to which you’d like to see the maximum size raised, and a short written description that would help us understand what this increased limit would enable you to do.

On the AWS customer’s side, we often find that these cases are most relevant to centralized cloud teams that have broad, persistent access across many roles and accounts. This access is often necessary to support troubleshooting or simply as part of an individual’s job function. However, in many cases, exchanging persistent access for just-in-time access enables the same level of access but with better levels of visibility, reduced blast radius, and better adherence to the principle of least privilege. For example, you might implement a fast, efficient, and monitored workflow that allows you to provision a user into the necessary backend directory group for a short duration when needed in lieu of that user maintaining all of those group memberships on a persistent basis. This approach could effectively resolve the limit issue you are facing.

Q: Can we use OpenID Connect (OIDC) for federated authentication and authorization into the AWS Management Console? If so, does it have a similar size limit?

A: Currently, AWS support for OIDC is oriented around providing access to AWS resources from mobile or web applications, not access to the AWS Management Console. This is possible to do, but it requires the construction of a custom identity broker. In this solution, this broker would consume the OIDC identity, use its own logic to authenticate and authorize the user (thus being subject only to any size limits you enforce on the OIDC side), and use the sts:GetFederatedToken call to vend the user an AWS Security Token Service (STS) token for either AWS Management Console use or API/CLI use. During this sts:GetFederatedToken call, you attach a scoping policy with a limit of 2,048 characters. See Creating a URL that Enables Federated Users to Access the AWS Management Console (Custom Federation Broker) for additional details about custom identity brokers.
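As a sketch, the token-vending step of such a custom broker might look like the following (the scoping policy shown is a placeholder; the broker's own authentication and authorization logic runs before this call):

// A sketch: after the broker has authenticated and authorized the user
// itself, it calls sts:GetFederatedToken with a scoping policy.
var AWS = require('aws-sdk');
var sts = new AWS.STS();

var scopingPolicy = {
    Version: '2012-10-17',
    Statement: [{
        Effect: 'Allow',
        Action: 's3:ListBucket',
        Resource: 'arn:aws:s3:::example-bucket'
    }]
};

sts.getFederatedToken({
    Name: 'oidc-user',                     // display name for the federated session
    Policy: JSON.stringify(scopingPolicy), // limited to 2,048 characters
    DurationSeconds: 3600
}, function(err, data) {
    if (err) console.log(err);
    else console.log(data.Credentials.AccessKeyId); // plus SecretAccessKey, SessionToken
});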

Q: We want to eliminate permanent AWS Identity and Access Management (IAM) access keys, but we cannot do so because of third-party tools. We are contemplating using HashiCorp Vault to vend permanent keys. Vault lets us tie keys to LDAP identities. Have you seen this work elsewhere? Do you think it will work for us?

A: For third-party tools that can run within Amazon EC2, you should use EC2 instance profiles to eliminate long-term credentials and their associated management (distribution, rotation, etc.). For third-party tools that cannot run within EC2, most customers opt to leverage their existing secrets-management tools and processes for the long-lived keys. These tools are often enhanced to make use of AWS APIs such as iam:GetCredentialReport (rotation information) and iam:{Create,Update,Delete}AccessKey (rotation operations). HashiCorp Vault is a popular tool with an available AWS Quick Start Reference Deployment, but any secrets management platform that is able to efficiently fingerprint the authorized resources and is extensible to work with the previously mentioned APIs will fill this need for you nicely.

Q: Currently, we use an Object Graph Navigation Library (OGNL) script in our identity provider (IdP) to build role Amazon Resource Names (ARNs) for the role attribute in the SAML assertion. The script consumes a list of distribution list display names from our identity management platform of which the user is a member. There is a 60-character limit on display names, which leaves no room for IAM pathing (which has a 512-character limit). We are contemplating a change. The proposed solution would make AWS API calls to get role ARNs from the AWS APIs. Have you seen this before? Do you think it will work? Does the AWS SAML integration support full-length role ARNs that would include up to 512 characters for the IAM path?

A: In most cases, we recommend that you use regular expression-based transformations within your IdP to translate a list of group names to a list of role ARNs for inclusion in the SAML assertion. Without pathing, you need to know the 12-digit AWS account number and the role name in order to be able to do so, which is accomplished using our recommended group-naming convention (AWS-Acct#-RoleName). With pathing (because “/” is not a valid character for group names within most directories), you need an additional source from which to pull this third data element. This could be as simple as an extra dimension in the group name (such as AWS-Acct#-Path-RoleName); however, that would multiply the number of groups required to support the solution. Instead, you would most likely derive the path element from a user attribute, dynamic group information, or even an external information store. It should work as long as you can reliably determine all three data elements for the user.

We do not recommend drawing the path information from AWS APIs, because the logic within the IdP is authoritative for the user’s authorization. In other words, the IdP should know for which full path role ARNs the user is authorized without asking AWS. You might consider using the AWS APIs to validate that the role actually exists, but that should really be an edge case. This is because you should integrate any automation that builds and provisions the roles with the frontend authorization layer. This way, there would never be a case in which the IdP authorizes a user for a role that doesn’t exist. AWS SAML integration supports full-length role ARNs.

Q: Why do AWS STS tokens contain a session token? This makes them incompatible with third-party tools that only support permanent keys. Is there a way to get rid of the session token to make temporary keys that contain only the access_key and the secret_key components?

A: The session token contains information that AWS uses to confirm that the AWS STS token is valid. There is no way to create a token that does not contain this third component. Instead, the preferred AWS mechanisms for distributing AWS credentials to third-party tools are EC2 instance profiles (in EC2), IAM cross-account trust (SaaS), or IAM access keys with secrets management (on-premises). Using IAM access keys, you can rotate the access keys as often as the third-party tool and your secrets management platform allow.

Q: How can we use SAML to authenticate and authorize code instead of having humans do the work? One proposed solution is to use our identity management (IDM) platform to generate X.509 certificates for identities, and then present these certificates to our IdP in order to get valid SAML assertions. This could then be included in an sts:AssumeRoleWithSAML call. Have you seen this working before? Do you think it will work for us?

A: Yes, when you receive a SAML assertion from your IdP by using your desired credential form (user name/password, X.509, etc.), you can use the sts:AssumeRoleWithSAML call to retrieve an AWS STS token. See How to Implement a General Solution for Federated API/CLI Access Using SAML 2.0 for a reference implementation.
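As a sketch (the role and provider ARNs are placeholders), the exchange looks like this:

// A sketch: exchange the base64-encoded SAML assertion from the IdP for
// temporary credentials. This STS call is unsigned, so no IAM credentials
// are needed to make it.
var AWS = require('aws-sdk');
var sts = new AWS.STS();

sts.assumeRoleWithSAML({
    RoleArn: 'arn:aws:iam::123456789012:role/SAML-PowerUser',
    PrincipalArn: 'arn:aws:iam::123456789012:saml-provider/MyIdP',
    SAMLAssertion: base64EncodedAssertion   // as returned by the IdP (assumed variable)
}, function(err, data) {
    if (err) console.log(err);
    else console.log(data.Credentials.AccessKeyId);
});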

Q: As a follow-up to the previous question, how can we get code using multi-factor authentication (MFA)? There is a gauth project that uses NodeJS to generate virtual MFAs. Code could theoretically get MFA codes from the NodeJS gauth server.

A: The answer depends on your choice of IdP and MFA provider. Generically speaking, you need to authenticate (either web-based or code-based) to the IdP using all of your desired factors before the SAML assertion is generated. This assertion can then include details of the authentication mechanism used as an additional attribute in role-assumption conditions within the trust policy in AWS. This lab guide from the workshop provides further details and a how-to guide for MFA-for-SAML.

This blog post clarifies some re:Invent 2016 attendees’ questions about SAML-based federation with AWS. For more information presented in the workshop, see the full set of workshop materials, lab guides, and CloudFormation templates. If you have follow-up questions, start a new thread in the IAM forum.

– Quint

EC2 Systems Manager – Configure & Manage EC2 and On-Premises Systems

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-systems-manager-configure-manage-ec2-and-on-premises-systems/

Last year I introduced you to the EC2 Run Command and showed you how to use it to do remote instance management at scale, first for EC2 instances and then in hybrid and cross-cloud environments. Along the way we added support for Linux instances, making EC2 Run Command a widely applicable and incredibly useful administration tool.

Welcome to the Family
Werner announced the EC2 Systems Manager at AWS re:Invent and I’m finally getting around to telling you about it!

This is a new management service that includes an enhanced version of EC2 Run Command along with eight other equally useful functions. Like EC2 Run Command, it supports hybrid and cross-cloud environments composed of instances and services running Windows and Linux. You simply open up the AWS Management Console, select the instances that you want to manage, and define the tasks that you want to perform (API and CLI access is also available).

Here’s an overview of the improvements and new features:

Run Command – Now allows you to control the velocity of command executions, and to stop issuing commands if the error rate grows too high.

State Manager – Maintains a defined system configuration via policies that are applied at regular intervals.

Parameter Store – Provides centralized (and optionally encrypted) storage for license keys, passwords, user lists, and other values.

Maintenance Window – Specify a time window for installation of updates and other system maintenance.

Software Inventory – Gathers a detailed software and configuration inventory (with user-defined additions) from each instance.

AWS Config Integration – In conjunction with the new software inventory feature, AWS Config can record software inventory changes to your instances.

Patch Management – Simplify and automate the patching process for your instances.

Automation – Simplify AMI building and other recurring AMI-related tasks.

Let’s take a look at each one…

Run Command Improvements
You can now control the number of concurrent command executions. This can be useful in situations where the command references a shared, limited resource such as an internal update or patch server and you want to avoid overloading it with too many requests.

This feature is currently accessible from the CLI and from the API. Here’s a CLI example that limits the number of concurrent executions to 2:

$ aws ssm send-command \
  --instance-ids "i-023c301591e6651ea" "i-03cf0fc05ec82a30b" "i-09e4ed09e540caca0" "i-0f6d1fe27dc064099" \
  --document-name "AWS-RunShellScript" \
  --comment "Run a shell script or specify the commands to run." \
  --parameters commands="date" \
  --timeout-seconds 600 --output-s3-bucket-name "jbarr-data" \
  --region us-east-1 --max-concurrency 2

Here’s a more interesting variant that is driven by tags and tag values by specifying --targets instead of --instance-ids:

$ aws ssm send-command \
  --targets "Key=tag:Mode,Values=Production" ... 

You can also stop issuing commands if they are returning errors, with the option to specify either a maximum number of errors or a failure rate:

$ aws ssm send-command --max-errors 5 ... 
$ aws ssm send-command --max-errors 5% ...

State Manager
State Manager helps to keep your instances in a defined state, as specified by a document. You create the document, associate it with a set of target instances, and then create an association to specify when and how often the document should be applied. Here’s a document that updates the message of the day file:
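A sketch of what such a command document could contain (assuming the 1.2 schema for command documents), registered here via the CreateDocument API:

// A sketch (document name and motd text assumed): register a command
// document that rewrites /etc/motd on the target instances.
var AWS = require('aws-sdk');
var ssm = new AWS.SSM({region: 'us-east-1'});

var motdDocument = {
    schemaVersion: '1.2',
    description: 'Update the message of the day file.',
    runtimeConfig: {
        'aws:runShellScript': {
            properties: [{
                id: '0.aws:runShellScript',
                runCommand: ['echo "Welcome - managed by State Manager" > /etc/motd']
            }]
        }
    }
};

ssm.createDocument({
    Name: 'UpdateMotd',
    Content: JSON.stringify(motdDocument)
}, function(err, data) {
    if (err) console.log(err); else console.log(data.DocumentDescription.Name);
});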

And here’s the association (this one uses tags so that it applies to current instances and to others that are launched later and are tagged in the same way):

Specifying targets using tags makes the association future-proof, and allows it to work as expected in dynamic, auto-scaled environments. I can see all of my associations, and I can run the new one by selecting it and clicking on Apply Association Now:

Parameter Store
This feature simplifies storage and management for license keys, passwords, and other data that you want to distribute to your instances. Each parameter has a type (string, string list, or secure string), and can be stored in encrypted form. Here’s how I create a parameter:

And here’s how I reference the parameter in a command:
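A sketch of both steps using the SDK (the parameter name and value are placeholders):

// A sketch: store a license key as an encrypted SecureString, then reference
// it from a Run Command invocation using the {{ssm:parameter-name}} syntax.
var AWS = require('aws-sdk');
var ssm = new AWS.SSM({region: 'us-east-1'});

ssm.putParameter({
    Name: 'licenseKey',
    Type: 'SecureString',        // encrypted with the account's default KMS key
    Value: 'XXXX-XXXX-XXXX-XXXX'
}, function(err) {
    if (err) return console.log(err);

    ssm.sendCommand({
        DocumentName: 'AWS-RunShellScript',
        Targets: [{Key: 'tag:Mode', Values: ['Production']}],
        Parameters: {commands: ['echo {{ssm:licenseKey}}']}
    }, function(err, data) {
        if (err) console.log(err); else console.log(data.Command.CommandId);
    });
});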

Maintenance Window
This feature lets you specify a time window for installation of updates and other system maintenance. Here’s how I create a weekly time window that opens for four hours every Saturday:
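In API form, a sketch of this window (the name and cron expression are assumptions):

// A sketch: create a four-hour window that opens every Saturday at 04:00 UTC.
// Duration is in hours; Cutoff stops new tasks from starting in the final hour.
var AWS = require('aws-sdk');
var ssm = new AWS.SSM({region: 'us-east-1'});

ssm.createMaintenanceWindow({
    Name: 'SaturdayMaintenance',
    Schedule: 'cron(0 4 ? * SAT *)',
    Duration: 4,
    Cutoff: 1,
    AllowUnassociatedTargets: false
}, function(err, data) {
    if (err) console.log(err); else console.log(data.WindowId);
});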

After I create the window I need to assign a set of instances to it. I can do this by instance Id or by tag:

And then I need to register a task to perform during the maintenance window. For example, I can run a Linux shell script:

Software Inventory
This feature collects information about software and settings for a set of instances. To access it, I click on Managed Instances and Setup Inventory:

Setting up the inventory creates an association between an AWS-owned document and a set of instances. I simply choose the targets, set the schedule, and identify the types of items to be inventoried, then click on Setup Inventory:

After the inventory runs, I can select an instance and then click on the Inventory tab in order to inspect the results:

The results can be filtered for further analysis. For example, I can narrow down the list of AWS Components to show only development tools and libraries:

I can also run inventory-powered queries across all of the managed instances. Here’s how I can find Windows Server 2012 R2 instances that are running a version of .NET older than 4.6:

AWS Config Integration
The results of the inventory can be routed to AWS Config, allowing you to track changes to the applications, AWS components, instance information, network configuration, and Windows Updates over time. To access this information, I click on Managed instance information above the Config timeline for the instance:

The three lines at the bottom lead to the inventory information. Here’s the network configuration:

Patch Management
This feature helps you to keep the operating system on your Windows instances up to date. Patches are applied during maintenance windows that you define, and are done with respect to a baseline. The baseline specifies rules for automatic approval of patches based on classification and severity, along with an explicit list of patches to approve or reject.

Here’s my baseline:

Each baseline can apply to one or more patch groups. Instances within a patch group have a Patch Group tag. I named my group Win2016:

Then I associated the value with the baseline:

The next step is to arrange to apply the patches during a maintenance window using the AWS-ApplyPatchBaseline document:

I can return to the list of Managed Instances and use a pair of filters to find out which instances are in need of patches:

Automation
Last but definitely not least, the Automation feature simplifies common AMI-building and updating tasks. For example, you can build a fresh Amazon Linux AMI each month using the AWS-UpdateLinuxAmi document:
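A sketch of starting that automation from code (the parameter names are assumptions based on the public AWS-UpdateLinuxAmi document, and the AMI and role values are placeholders):

// A sketch: kick off an automation run that patches a source AMI and
// produces a fresh one.
var AWS = require('aws-sdk');
var ssm = new AWS.SSM({region: 'us-east-1'});

ssm.startAutomationExecution({
    DocumentName: 'AWS-UpdateLinuxAmi',
    Parameters: {
        SourceAmiId: ['ami-0b33d91d'],
        InstanceIamRole: ['AutomationInstanceRole'],
        AutomationAssumeRole: ['arn:aws:iam::123456789012:role/AutomationServiceRole']
    }
}, function(err, data) {
    if (err) console.log(err); else console.log(data.AutomationExecutionId);
});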

Here’s what happens when this automation is run:

Available Now
All of the EC2 Systems Manager features and functions that I described above are available now and you can start using them today at no charge. You pay only for the resources that you manage.

Jeff;

 

Amazon EFS Update – On-Premises Access via Direct Connect

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-efs-update-on-premises-access-via-direct-connect-vpc/

I introduced you to Amazon Elastic File System last year (Amazon Elastic File System – Shared File Storage for Amazon EC2) and announced production readiness earlier this year (Amazon Elastic File System – Production-Ready in Three Regions). Since the launch earlier this year, thousands of AWS customers have used it to set up, scale, and operate shared file storage in the cloud.

Today we are making EFS even more useful with the introduction of simple and reliable on-premises access via AWS Direct Connect. This has been a much-requested feature and I know that it will be useful for migration, cloudbursting, and backup. To use this feature for migration, you simply attach an EFS file system to your on-premises servers, copy your data to it, and then process it in the cloud as desired, leaving your data in AWS for the long term.  For cloudbursting, you would copy on-premises data to an EFS file system, analyze it at high speed using a fleet of Amazon Elastic Compute Cloud (EC2) instances, and then copy the results back on-premises or visualize them in Amazon QuickSight.

You’ll get the same file system access semantics including strong consistency and file locking, whether you access your EFS file systems from your on-premises servers or from your EC2 instances (of course, you can do both concurrently). You will also be able to enjoy the same multi-AZ availability and durability that is part-and-parcel of EFS.

In order to take advantage of this new feature, you will need to use Direct Connect to set up a dedicated network connection between your on-premises data center and an Amazon Virtual Private Cloud. Then you need to make sure that your filesystems have mount targets in subnets that are reachable via the Direct Connect connection:

You also need to add a rule to the mount target’s security group in order to allow inbound TCP and UDP traffic to port 2049 (NFS) from your on-premises servers:
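A sketch of adding that rule (the security group ID and on-premises CIDR are placeholders):

// A sketch: allow inbound NFS traffic (TCP and UDP port 2049) from the
// on-premises network to the mount target's security group.
var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({region: 'us-east-1'});

['tcp', 'udp'].forEach(function(protocol) {
    ec2.authorizeSecurityGroupIngress({
        GroupId: 'sg-12345678',
        IpProtocol: protocol,
        FromPort: 2049,
        ToPort: 2049,
        CidrIp: '10.0.0.0/16'
    }, function(err) {
        if (err) console.log(err);
    });
});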

After you create the file system, you can reference the mount targets by their IP addresses, NFS-mount them on-premises, and start copying files. The IP addresses are available from within the AWS Management Console:

The Management Console also provides you with access to step-by-step directions! Simply click on the On-premises mount instructions:

And follow along:

This feature is available today at no extra charge in the US East (Northern Virginia), US West (Oregon), EU (Ireland), and US East (Ohio) Regions.

Jeff;

 

AWS Webinars – January 2017 (Bonus: December Recap)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-webinars-january-2017-bonus-december-recap/

Have you had time to digest all of the announcements that we made at AWS re:Invent? Are you ready to debug with AWS X-Ray, analyze with Amazon QuickSight, or build conversational interfaces using Amazon Lex? Do you want to learn more about AWS Lambda, set up CI/CD with AWS CodeBuild, or use Polly to give your applications a voice?

January Webinars
In our continued quest to provide you with training and education resources, I am pleased to share the webinars that we have set up for January. These are free, but they do fill up and you should definitely register ahead of time. All times are PT and each webinar runs for one hour:

January 16:

January 17:

January 18:

January 19:

January 20:

December Webinar Recap
The December webinar series is already complete; here’s a quick recap with links to the recordings:

December 12:

December 13:

December 14:

December 15:

Jeff;

PS – If you want to get a jump start on your 2017 learning objectives, the re:Invent 2016 Presentations and re:Invent 2016 Videos are just a click or two away.