
Amazon Polly Update – Time-Driven Prosody and Asynchronous Synthesis

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-polly-update-time-driven-prosody-and-asynchronous-synthesis/

I hope that you are enjoying the Polly-powered audio that is available for the newest posts on this blog, including the DeepLens Challenge and the Storage Gateway Recap. As part of my blogging process, I now listen to the synthesized speech for my draft blog posts in order to get a better sense for how they flow.

Today we are launching two new features for Amazon Polly:

Time-Driven Prosody – You can now specify the desired duration for the synthesized speech that corresponds to part or all of the input text.

Asynchronous Synthesis – You can now process large blocks of text and store the synthesized speech in Amazon S3 with a single call.

Both of these features are available now and you can start using them today. Let’s take a closer look!

Time-Driven Prosody
Imagine that you are creating a multi-lingual version of a video or a self-running presentation. You write the script, record the video in one language, and then use Amazon Translate and Amazon Polly to create audio tracks in other languages. In order to keep each language in sync with the visual content, you need to exercise fine-grained control over the duration of each segment. That’s where this new feature comes in. You can now specify the maximum desired duration for any desired segments, counting on Polly to adjust the speech rate in order to limit the length of each segment.

The preceding paragraph generates 19 seconds of audio if I use Amazon Polly’s Joanna voice with no other options:

<speak>
  In order to keep each language in sync with the visual content, 
  you need to exercise fine-grained control over the duration of
  each segment. That's where this new feature comes in. You can 
  now specify the maximum desired duration for any desired segments, 
  counting on Polly to adjust the speech rate in order to limit 
  the length of each segment.
</speak>

I can use a <prosody> tag to limit the length to 15 seconds:

<speak>
  <prosody amazon:max-duration="15s">
    In order to keep each language in sync with the visual content, 
    you need to exercise fine-grained control over the duration of
    each segment. That's where this new feature comes in. You can 
    now specify the maximum desired duration for any desired segments, 
    counting on Polly to adjust the speech rate in order to limit 
    the length of each segment.
  </prosody>
</speak>

I can control the duration at a more fine-grained level by using multiple <prosody> tags:

  <prosody amazon:max-duration="10s">
    In order to keep each language in sync with the visual content, 
    you need to exercise fine-grained control over the duration of
    each segment. 
  </prosody>
  <prosody amazon:max-duration="7s">
    That's where this new feature comes in. You can now specify 
    the maximum desired duration for any desired segments, 
    counting on Polly to adjust the speech rate in order to limit 
    the length of each segment.
  </prosody>

The Spanish equivalent (courtesy of Amazon Translate) of my English text is somewhat longer and the speed-up is apparent:

<speak>
  <prosody amazon:max-duration="15s">
    Para mantener cada idioma sincronizado con el contenido
    visual, es necesario ejercer un control detallado sobre
    la duración de cada segmento. Ahí es donde entra esta 
    nueva característica. Ahora puede especificar la 
    duración máxima deseada para los segmentos deseados, 
    contando con que Polly ajuste la velocidad de voz para 
    limitar la longitud de cada segmento.
  </prosody>
</speak>

The text inside of each time-limited <prosody> tag is limited to 1500 characters and nesting is not allowed (the inner tag will be ignored). In order to ensure that the audio remains comprehensible, Polly will speed up the audio by a maximum of 5x.

Asynchronous Synthesis
This feature makes it easier for you to use Polly to generate speech for long-form content such as articles or book chapters by allowing you to process up to 100,000 characters of text at a time using asynchronous requests. The synthesized speech is delivered to the S3 bucket of your choice, with failure notifications optionally routed to the Amazon Simple Notification Service (SNS) topic of your choice. The generated audio can be up to 6 hours long, and is typically ready within minutes. In addition to 100,000 characters of text, each request can include an additional 100,000 characters of Speech Synthesis Markup Language (SSML) markup.

Each asynchronous request creates a new speech synthesis task. You can initiate and manage tasks from the Polly Console, CLI (start-speech-synthesis-task), or API (StartSpeechSynthesisTask).
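
If you would rather skip the console, here is a rough boto3 sketch of the same flow; the bucket name, key prefix, SNS topic ARN, and input text below are placeholders rather than values from my account:

import boto3

polly = boto3.client("polly")

# Kick off an asynchronous synthesis task; the audio file is written to S3
# and an optional SNS notification is sent when the task finishes.
response = polly.start_speech_synthesis_task(
    OutputFormat="mp3",
    OutputS3BucketName="my-polly-output-bucket",   # placeholder bucket
    OutputS3KeyPrefix="book-chapter-1",
    VoiceId="Joanna",
    TextType="ssml",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:polly-tasks",  # optional
    Text="<speak>Chapter one of my thoroughly obsolete AWS book.</speak>",
)

task_id = response["SynthesisTask"]["TaskId"]

# Check on the task; its status moves from scheduled to inProgress to completed.
task = polly.get_speech_synthesis_task(TaskId=task_id)["SynthesisTask"]
print(task["TaskStatus"], task.get("OutputUri"))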

To test this feature I created a plain-text version of my thoroughly obsolete AWS book and inserted some SSML tags, turning it into valid XML along the way. Then I open the Polly Console, click Text-to-Speech, paste the XML, and click Synthesize to S3:

I enter the name of my S3 bucket (which must be in the region where I plan to create the task), and click Synthesize to proceed:

My task is created:

And I can see it in the list of tasks:

I receive an email when the synthesis is complete:

And the file is in my bucket as expected:

I did not spend a lot of time on the markup, but the results are impressive:

Interestingly enough, most of that chapter is still relevant. The rest of the book has been overtaken by history, and is best left there! Perhaps I’ll write another one sometime.

Anyway, as you can see (and hear) the asynchronous speech synthesis is powerful and easy to use. Give it a shot, build something cool, and tell me about it.

Jeff;


Amazon EC2 Instance Update – Faster Processors and More Memory

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-ec2-instance-update-faster-processors-and-more-memory/

Last month I told you about the Nitro system and explained how it will allow us to broaden the selection of EC2 instances and to pick up the pace as we do so, with an ever-broadening selection of compute, storage, memory, and networking options. This will allow us to give you access to the latest technology very quickly, giving you the ability to choose the instance type that is the best match for your applications.

Today, I would like to tell you about three new instance types that are in the works and that will be available soon:

Z1d – Compute-intensive instances running at up to 4.0 GHz, powered by sustained all-core Turbo Boost. They are ideal for Electronic Design Automation (EDA) and relational database workloads, and are also a great fit for several kinds of HPC workloads.

R5 – Memory-optimized instances running at up to 3.1 GHz powered by sustained all-core Turbo Boost, with up to 50% more vCPUs and 60% more memory than R4 instances.

R5d – Memory-optimized instances equipped with local NVMe storage (up to 3.6 TB for the largest R5d instance), available in the same sizes and with the same specs as the R5 instances.

We are also planning to launch R5 Bare Metal, R5d Bare Metal, and Z1d Bare Metal instances. As is the case with the existing i3.metal instances, you will be able to access low-level hardware features and to run applications that are not licensed or supported in virtualized environments.

Z1d Instances
The Z1d instances are designed for applications that can benefit from extremely high per-core performance. These include:

Electronic Design Automation – As chips become smaller and denser, the amount of compute power needed to design and verify the chips increases non-linearly. Semiconductor customers deploy jobs that span thousands of cores; having access to faster cores reduces turnaround time for each job and can also lead to a measurable reduction in software licensing costs.

HPC – In the financial services world, jobs that run analyses or compute risks also benefit from faster cores. Manufacturing organizations can run their Finite Element Analysis (FEA) and simulation jobs to completion more quickly.

Relational Database – CPU-bound workloads that run on a database that “features” high per-core license fees will enjoy both cost and performance benefits.

Z1d instances use custom Intel® Xeon® Scalable Processors running at up to 4.0 GHz, powered by sustained all-core Turbo Boost. They will be available in 6 sizes, with up to 48 vCPUs, 384 GiB of memory, and 1.8 TB of local NVMe storage. On the network side, they feature ENA networking that will deliver up to 25 Gbps of bandwidth, and are EBS-Optimized by default for up to 14 Gbps of bandwidth. As usual, you can launch them in a Cluster Placement Group to increase throughput and reduce latency. Here are the sizes and specs:

Instance Name | vCPUs | Memory | Local Storage | EBS-Optimized Bandwidth | Network Bandwidth
z1d.large | 2 | 16 GiB | 1 x 75 GB NVMe SSD | Up to 2.333 Gbps | Up to 10 Gbps
z1d.xlarge | 4 | 32 GiB | 1 x 150 GB NVMe SSD | Up to 2.333 Gbps | Up to 10 Gbps
z1d.2xlarge | 8 | 64 GiB | 1 x 300 GB NVMe SSD | 2.333 Gbps | Up to 10 Gbps
z1d.3xlarge | 12 | 96 GiB | 1 x 450 GB NVMe SSD | 3.5 Gbps | Up to 10 Gbps
z1d.6xlarge | 24 | 192 GiB | 1 x 900 GB NVMe SSD | 7.0 Gbps | 10 Gbps
z1d.12xlarge | 48 | 384 GiB | 2 x 900 GB NVMe SSD | 14.0 Gbps | 25 Gbps

The instances are HVM and VPC-only, and you will need to use an AMI with the appropriate ENA and NVMe drivers. Any AMI that runs on C5 or M5 instances will also run on Z1d instances.

R5 Instances
Building on the earlier generations of memory-intensive instance types (M2, CR1, R3, and R4), the R5 instances are designed to support high-performance databases, distributed in-memory caches, in-memory analytics, and big data analytics. They use custom Intel® Xeon® Platinum 8000 Series (Skylake-SP) processors running at up to 3.1 GHz, again powered by sustained all-core Turbo Boost. The instances will be available in 6 sizes, with up to 96 vCPUs and 768 GiB of memory. Like the Z1d instances, they feature ENA networking and are EBS-Optimized by default, and can be launched in Placement Groups. Here are the sizes and specs:

Instance Name | vCPUs | Memory | EBS-Optimized Bandwidth | Network Bandwidth
r5.large | 2 | 16 GiB | Up to 3.5 Gbps | Up to 10 Gbps
r5.xlarge | 4 | 32 GiB | Up to 3.5 Gbps | Up to 10 Gbps
r5.2xlarge | 8 | 64 GiB | Up to 3.5 Gbps | Up to 10 Gbps
r5.4xlarge | 16 | 128 GiB | 3.5 Gbps | Up to 10 Gbps
r5.12xlarge | 48 | 384 GiB | 7.0 Gbps | 10 Gbps
r5.24xlarge | 96 | 768 GiB | 14.0 Gbps | 25 Gbps

Once again, the instances are HVM and VPC-only, and you will need to use an AMI with the appropriate ENA and NVMe drivers.

Learn More
The new EC2 instances announced today highlight our plan to continue innovating in order to better meet your needs! I’ll share additional information as soon as it is available.

Jeff;


New – EC2 Compute Instances for AWS Snowball Edge

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-ec2-compute-instances-for-aws-snowball-edge/

I love factories and never miss an opportunity to take a tour. Over the years, I have been lucky enough to watch as raw materials and sub-assemblies are turned into cars, locomotives, memory chips, articulated buses, and more. I’m always impressed by the speed, precision, repeatability, and the desire to automate every possible step. On one recent tour, the IT manager told me that he wanted to be able to set up and centrally manage the global collection of on-premises industrialized PCs that monitor their machinery as easily and as efficiently as he does their EC2 instances and other cloud resources.

Today we are making that manager’s dream a reality, with the introduction of EC2 instances that run on AWS Snowball Edge devices! These ruggedized devices, with 100 TB of local storage, can be used to collect and process data in hostile environments with limited or non-existent Internet connections before shipping the processed data back to AWS for storage, aggregation, and detailed analysis. Here are the instance specs:

Instance Name | vCPUs | Memory
sbe1.small | 1 | 1 GiB
sbe1.medium | 1 | 2 GiB
sbe1.large | 2 | 4 GiB
sbe1.xlarge | 4 | 8 GiB
sbe1.2xlarge | 8 | 16 GiB
sbe1.4xlarge | 16 | 32 GiB

Each Snowball Edge device is powered by an Intel® Xeon® D processor running at 1.8 GHz, and supports any combination of instances that consume up to 24 vCPUs and 32 GiB of memory. You can build and test AMIs (Amazon Machine Images) in the cloud and then preload them onto the device as part of the ordering process (I’ll show you how in just a minute). You can use the EC2-compatible endpoint exposed by each device to programmatically start, stop, resume, and terminate instances. This allows you to use the existing CLI commands and to build tools and scripts to manage fleets of devices. It also allows you to take advantage of your existing EC2 skills and knowledge, and to put them to good use in a new environment.
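
To give you a feel for what that looks like, here is a hypothetical boto3 snippet that points the regular EC2 client at a device. The endpoint address, port, credentials, and image ID are all placeholders; the real values come from your own device setup:

import boto3

# Point the standard EC2 client at the device's EC2-compatible endpoint.
ec2 = boto3.client(
    "ec2",
    endpoint_url="https://192.0.2.10:8243",   # placeholder device endpoint
    aws_access_key_id="LOCAL_ACCESS_KEY",     # local credentials from the device
    aws_secret_access_key="LOCAL_SECRET_KEY",
    region_name="us-west-2",                  # placeholder; the endpoint URL does the routing
)

# Launch an instance from an AMI that was preloaded onto the device,
# then list what is currently running.
ec2.run_instances(ImageId="ami-0123456789abcdef0",  # placeholder image ID
                  InstanceType="sbe1.medium",
                  MinCount=1, MaxCount=1)

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])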

There are three main setup steps:

  1. Creating a suitable AMI.
  2. Ordering a Snowball Edge Device.
  3. Connecting and Configuring the Device.

Let’s take an in-depth look at the first two steps. Time was tight and I was not able to get hands-on experience with an actual device, so the third step will have to wait for another time.

Creating a Suitable AMI
I have the ability to choose up to 10 AMIs that will be preloaded onto the device. The AMIs must be owned by my AWS account, and must be based on one of the following Marketplace AMIs:

These AMIs have been tested for use on Snowball Edge devices and can be used as a starting point for customization. We will be adding additional options over time, so let us know what you need.

I decided to start with the newest Ubuntu AMI, and launch it on an M5 instance, taking care to specify the SSH keypair that I will eventually use to connect to the instance from my terminal client:

After my instance launches, I connect to it, customize it as desired for use on my device, and then return to the EC2 Console to create an AMI. I select the running instance, choose Create Image from the Actions menu, specify the details, and click Create Image:

The size of the root volume will determine how much of the device’s SSD storage is allocated to the instance when it launches. A total of one TB of space is available for all running instances, so keep your local file storage needs in mind as you analyze your use case and set up your AMIs. Also, Snowball Edge devices cannot make use of additional EBS volumes, so don’t bother including them in your AMI. My AMI is ready within minutes (To learn more about how to create AMIs, read Creating an Amazon EBS-Backed Linux AMI):

Now I am ready to order my first device!

Ordering a Snowball Edge Device
The ordering procedure lets me designate a shipping address and specify how I would like my Snowball device to be configured. I open the AWS Snowball Console and click Create job:

I specify the job type (they all support EC2 compute instances):

Then I select my shipping address, entering a new one if necessary (come and visit me):

Next, I define my job. I give it a name (SJ1), select the 100 TB device, and pick the S3 bucket that will receive data when the device is returned to AWS:

Now comes the fun part! I click Enable compute with EC2 and select the AMIs to be loaded on the Snowball Edge:

I click Add an AMI and find the one that I created earlier:

I can add up to ten AMIs to my job, but will stop at one for this post:

Next, I set up my IAM role and configure encryption:

Then I configure the optional SNS notifications. I can choose to receive notification for a wide variety of job status values:

My job is almost ready! I review the settings and click Create job to create it:

Connecting and Configuring the Device
After I create the job, I wait until my Snowball Edge device arrives. I connect it to my network, power it on, and then unlock it using my manifest and device code, as detailed in Unlock the Snowball Edge. Then I configure my EC2 CLI to use the EC2 endpoint on the device and launch an instance. Since I configured my AMI for SSH access, I can connect to it as if it were an EC2 instance in the cloud.

Things to Know
Here are a couple of things to keep in mind:

Long-Term Usage – You can keep the Snowball Edge devices on your premises and hard at work for as long as you would like. You’ll be billed for a one-time setup fee for each job; after 10 days you will pay an additional, per-day fee for each device. If you want to keep a device for an extended period of time, you can also pay upfront as part of a one- or three-year commitment.

Dev/Test – You should be able to do much of your development and testing on an EC2 instance running in the cloud; some of our early users are working in this way as part of a “Digital Twin” strategy.

S3 Access – Each Snowball Edge device includes an S3-compatible endpoint that you can access from your on-device code. You can also make use of existing S3 tools and applications.

Now Available
You can start ordering devices today and make use of this exciting new AWS feature right away.

Jeff;


New – Lifecycle Management for Amazon EBS Snapshots

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-lifecycle-management-for-amazon-ebs-snapshots/

It is always interesting to zoom in on the history of a single AWS service or feature and watch how it has evolved over time in response to customer feedback. For example, Amazon Elastic Block Store (EBS) launched a decade ago and has been gaining more features and functionality ever since. Here are a few of the most significant announcements:

Several of the items that I chose to highlight above make EBS snapshots more useful and more flexible. As you may already know, it is easy to create snapshots. Each snapshot is a point-in-time copy of the blocks that have changed since the previous snapshot, with automatic management to ensure that only the data unique to a snapshot is removed when it is deleted. This incremental model reduces your costs and minimizes the time needed to create a snapshot.

Because snapshots are so easy to create and use, our customers create a lot of them, and make great use of tags to categorize, organize, and manage them. Going back to my list, you can see that we have added multiple tagging features over the years.

Lifecycle Management – The Amazon Data Lifecycle Manager
We want to make it even easier for you to create, use, and benefit from EBS snapshots! Today we are launching Amazon Data Lifecycle Manager to automate the creation, retention, and deletion of Amazon EBS volume snapshots. Instead of creating snapshots manually and deleting them in the same way (or building a tool to do it for you), you simply create a policy, indicating (via tags) which volumes are to be snapshotted, set a retention model, fill in a few other details, and let Data Lifecycle Manager do the rest. Data Lifecycle Manager is powered by tags, so you should start by setting up a clear and comprehensive tagging model for your organization (refer to the links above to learn more).

It turns out that many of our customers have invested in tools to automate the creation of snapshots, but have skimped on the retention and deletion. Sooner or later they receive a surprisingly large AWS bill and find that their scripts are not working as expected. The Data Lifecycle Manager should help them to save money and to be able to rest assured that their snapshots are being managed as expected.

Creating and Using a Lifecycle Policy
Data Lifecycle Manager uses lifecycle policies to figure out when to run, which volumes to snapshot, and how long to keep the snapshots around. You can create the policies in the AWS Management Console, from the AWS Command Line Interface (CLI) or via the Data Lifecycle Manager APIs; I’ll use the Console today. Here are my EBS volumes, all suitably tagged with a department:

I access the Lifecycle Manager from the Elastic Block Store section of the menu:

Then I click Create Snapshot Lifecycle Policy to proceed:

Then I create my first policy:

I use tags to specify the volumes that the policy applies to. If I specify multiple tags, then the policy applies to volumes that have any of the tags:

I can create snapshots at 12 or 24 hour intervals, and I can specify the desired snapshot time. Snapshot creation will start no more than an hour after this time, with completion based on the size of the volume and the degree of change since the last snapshot.

I can use the built-in default IAM role or I can create one of my own. If I use my own role, I need to enable the EC2 snapshot operations and all of the DLM (Data Lifecycle Manager) operations; read the docs to learn more.

Newly created snapshots will automatically be tagged with the aws:dlm:lifecycle-policy-id and aws:dlm:lifecycle-schedule-name tags; I can also specify up to 50 additional key/value pairs for each policy:

I can see all of my policies at a glance:

I took a short break and came back to find that the first set of snapshots had been created, as expected (I configured the console to show the two tags created on the snapshots):

Things to Know
Here are a couple of things to keep in mind when you start to use Data Lifecycle Manager to automate your snapshot management:

Data Consistency – Snapshots contain the data from all completed I/O operations, making them crash-consistent.

Pricing – You can create and use Data Lifecycle Manager policies at no charge; you pay the usual storage charges for the EBS snapshots that it creates.

Availability – Data Lifecycle Manager is available in the US East (N. Virginia), US West (Oregon), and EU (Ireland) Regions.

Tags and Policies – If a volume has more than one tag and the tags match multiple policies, each policy will create a separate snapshot and will govern the retention of the snapshots that it creates. No two policies can specify the same key/value pair for a tag.

Programmatic Access – You can create and manage policies programmatically! Take a look at the CreateLifecyclePolicy, GetLifecyclePolicies, and UpdateLifecyclePolicy functions to get started; there’s a short sketch after this list. You can also write an AWS Lambda function that runs in response to the createSnapshot event.

Error Handling – Data Lifecycle Manager generates a “DLM Policy State Change” event if a policy enters the error state.

In the Works – As you might have guessed from the name, we plan to add support for additional AWS data sources over time. We also plan to support policies that let you take weekly and monthly snapshots, and expect to give you additional scheduling flexibility.
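
Here is a rough boto3 sketch of the programmatic path mentioned above; the role ARN, tag values, and schedule details are placeholders rather than a definitive configuration:

import boto3

dlm = boto3.client("dlm")

# Create a policy that snapshots every volume tagged Department=Engineering
# once a day and keeps the seven most recent snapshots.
dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily snapshots for Engineering volumes",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Department", "Value": "Engineering"}],
        "Schedules": [
            {
                "Name": "DailySnapshots",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["09:00"]},
                "RetainRule": {"Count": 7},
                "CopyTags": True,
            }
        ],
    },
)

# List the policies that are currently in place.
print(dlm.get_lifecycle_policies()["Policies"])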

Jeff;

AWS Storage Gateway Recap – SMB Support, RefreshCache Event, and More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-storage-gateway-recap-smb-support-refreshcache-event-and-more/

To borrow my own words, the AWS Storage Gateway is a service that includes a multi-protocol storage appliance that fits in between your existing application and the AWS Cloud. Your applications see the gateway as a file system, a local disk volume, or a Virtual Tape Library, depending on how it was configured.

Today I would like to share a few recent updates to the File Gateway configuration of the Storage Gateway, and also show you how they come together to enable some new processing models. First, the most recent updates:

SMB Support – The File Gateway already supports access from clients that speak NFS (versions 3 and 4.1 are supported). Last month we added support for the Server Message Block (SMB) protocol. This allows Windows applications that communicate using v2 or v3 of SMB to store files as objects in S3 through the gateway, enabling hybrid cloud use cases such as backup, content distribution, and processing of machine learning and big data workloads. You can control access to the gateway using your existing on-premises Active Directory (AD) domain or a cloud-based domain hosted in AWS Directory Service, or you can use authenticated guest access. To learn more about this update, read AWS Storage Gateway Adds SMB Support to Store and Access Objects in Amazon S3 Buckets.

Cross-Account Permissions – Some of our customers run their gateways in one AWS account and configure them to upload to an S3 bucket owned by another account. This allows them to track departmental storage and retrieval costs using chargeback and showback models. In order to simplify this important use case, you can configure the gateway to provide the bucket owner with full permissions. This avoids a pain point which could arise if the bucket owner was unable to see the objects. To learn how to set this up, read Using a File Share for Cross-Account Access.

Requester Pays – Bucket owners are responsible for storage costs. Owners pay for data transfer costs by default, but also have the option to have the requester pay. To support this use case, the File Gateway now supports S3’s Requester Pays Buckets. Data collectors and aggregators can use this feature to share data with research organizations such as universities and labs without incurring the costs of access themselves. File Gateway provides file-based access to the S3 objects and caches recently accessed data locally, helping requesters reduce latency and costs. To learn more, read about Creating an NFS File Share and Creating an SMB File Share.

File Upload Notification – The gateway caches files locally, and uploads them to a designated S3 bucket in the background. Late last year we gave you the ability to request notification (in the form of a CloudWatch Event) when new files have been uploaded. You can use this to initiate cloud-based processing or to implement advanced logging. To learn more, read Getting File Upload Notification and study the NotifyWhenUploaded function.

Cache Refresh Event – You have long had the ability to use the RefreshCache function to make sure that the gateway is aware of objects that have been added, removed, or replaced in the bucket. The new Storage Gateway Cache Refresh Event lets you know that the cache is now in sync with S3, and can be used as a signal to initiate local processing. To learn more, read Getting Refresh Cache Notification.

Hybrid Processing Using File Gateway
You can use the File Upload Notification and Cache Refresh Event to automate some of your routine hybrid processing tasks!

Let’s say that you run a geographically distributed office or retail business, with locations all over the world. Raw data (metrics, cash register receipts, or time sheets) is collected at each location, and then uploaded to S3 using a File Gateway hosted at each location. As the data arrives, you use the File Upload Notifications to process each S3 object, perhaps using an AWS Lambda function that invokes Amazon Athena to run a stock set of queries against each one. The data arrives over the course of a couple of hours, and results accumulate in another bucket. At the end of the reporting period, the intermediate results are processed, custom reports are generated for each branch location, and then stored in another bucket (this bucket, as it turns out, is also associated with a gateway, and each gateway will have cached copies of the prior versions of the reports). After you generate your reports, you can refresh each of the gateway caches, wait for the corresponding notifications, and then send an email to the branch managers to tell them that their new report is available.
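
To make that concrete, here is a hypothetical sketch of the Lambda side of such a pipeline. The event fields, database, table, and bucket names are all placeholders; it simply logs the upload notification, runs a canned Athena query, and shows how the caches could be refreshed after the reports are written:

import boto3

athena = boto3.client("athena")
storage_gateway = boto3.client("storagegateway")

def lambda_handler(event, context):
    # Triggered by the Storage Gateway file upload CloudWatch Event.
    print("Upload notification:", event.get("detail", {}))

    # Run the stock per-location query; results accumulate in another bucket.
    athena.start_query_execution(
        QueryString="SELECT location, SUM(amount) FROM receipts GROUP BY location",
        QueryExecutionContext={"Database": "branch_data"},
        ResultConfiguration={"OutputLocation": "s3://intermediate-results-bucket/"},
    )

def refresh_report_caches(file_share_arns):
    # After the custom reports are generated, ask each branch gateway to
    # re-sync its cache; the Cache Refresh Event signals when each one is done.
    for arn in file_share_arns:
        storage_gateway.refresh_cache(FileShareARN=arn)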

Here’s a video (and presentation) with more information about this processing model:

Now Available
All of the features listed above are available now and you can start using them today in all regions where Storage Gateway is available.

Jeff;

AWS re:Invent 2018 is Coming – Are You Ready?

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/reinvent-2018-is-coming-are-you-ready/

As I write this, there are just 138 days until re:Invent 2018. My colleagues on the events team are going all-out to make sure that you, our customer, will have the best possible experience in Las Vegas. After meeting with them, I decided to write this post so that you can have a better understanding of what we have in store, know what to expect, and have time to plan and to prepare.

Dealing with Scale
We started out by talking about some of the challenges that come with scale. Approximately 43,000 people (AWS customers, partners, members of the press, industry analysts, and AWS employees) attended in 2017 and we are expecting an even larger crowd this year. We are applying many of the scaling principles and best practices that apply to cloud architectures to the physical, logistical, and communication challenges that are part-and-parcel of an event that is this large and complex.

We want to make it easier for you to move from place to place, while also reducing the need for you to do so! Here’s what we are doing:

Campus Shuttle – In 2017, hundreds of buses traveled on routes that took them to a series of re:Invent venues. This added a lot of latency to the system and we were not happy about that. In 2018, we are expanding the fleet and replacing the multi-stop routes with a larger set of point-to-point connections, along with additional pick-up and drop-off points at each venue. You will be one hop away from wherever you need to go.

Ride Sharing – We are partnering with Lyft and Uber (both powered by AWS) to give you another transportation option (download the apps now to be prepared). We are partnering with the Las Vegas Monorail and the taxi companies, and are also working on a teleportation service, but do not expect it to be ready in time.

Session Access – We are setting up a robust overflow system that spans multiple re:Invent venues, and are also making sure that the most popular sessions are repeated in more than one venue.

Improved Mobile App – The re:Invent mobile app will be more lively and location-aware. It will help you to find sessions with open seats, tell you what is happening around you, and keep you informed of shuttle and other transportation options.

Something for Everyone
We want to make sure that re:Invent is a warm and welcoming place for every attendee, with business and social events that we hope are progressive and inclusive. Here’s just some of what we have in store:

You can also take advantage of our mother’s rooms, gender-neutral restrooms, and reflection rooms. Check out the community page to learn more!

Getting Ready
Now it is your turn! Here are some suggestions to help you to prepare for re:Invent:

  • Register – Registration is now open! Every year I get email from people I have not talked to in years, begging me for last-minute access after re:Invent sells out. While it is always good to hear from them, I cannot always help, even if we were in first grade together.
  • Watch – We’re producing a series of How to re:Invent webinars to help you get the most from re:Invent. Watch What’s New and Breakout Content Secret Sauce ASAP, and stay tuned for more.
  • Plan – The session catalog is now live! View the session catalog to see the initial list of technical sessions. Decide on the topics of interest to you and to your colleagues, and choose your breakout sessions, taking care to pay attention to the locations. There will be over 2,000 sessions so choose with care and make this a team effort.
  • Pay Attention – We are putting a lot of effort into preparatory content – this blog post, the webinars, and more. Watch, listen, and learn!
  • Train – Get to work on your cardio! You can easily walk 10 or more miles per day, so bring good shoes and arrive in peak condition.

Partners and Sponsors
Participating sponsors are a core part of the learning, networking, and after hours activities at re:Invent.

For APN Partners, re:Invent is the single largest opportunity to interact with AWS customers, delivering both business development and product differentiation. If you are interested in becoming a re:Invent sponsor, read the re:Invent Sponsorship Prospectus.

For re:Invent attendees, I urge you to take time to meet with Sponsoring APN Partners in both the Venetian and Aria Expo halls. Sponsors offer diverse skills, Competencies, services, and expertise to help attendees solve a variety of business challenges. Check out the list of re:Invent Sponsors to learn more.

See You There
Once you are on site, be sure to take advantage of all that re:Invent has to offer.

If you are not sure where to go or what to do next, we’ll have some specially trained content experts to guide you.

I am counting down the days, gearing up to crank out a ton of blog posts for re:Invent, and looking forward to saying hello to friends new and old.

Jeff;

PS – We will be adding new sessions to the session catalog over the summer, so be sure to check back every week!


DeepLens Challenge #1 Starts Today – Use Machine Learning to Drive Inclusion

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/deeplens-challenge-1-starts-today-use-machine-learning-to-drive-inclusion/

Are you ready to develop and show off your machine learning skills in a way that has a positive impact on the world? If so, get your hands on an AWS DeepLens video camera and join the AWS DeepLens Challenge!

About the Challenge
Working together with our friends at Intel, we are launching the first in a series of eight themed challenges today, all centered around improving the world in some way. Each challenge will run for two weeks and is designed to help you to get some hands-on experience with machine learning.

We will announce a fresh challenge every two weeks on the AWS Machine Learning Blog. Each challenge will have a real-world theme, a technical focus, a sample project, and a subject matter expert. You have 12 days to invent and implement a DeepLens project that resonates with the theme, and to submit a short, compelling video (four minutes or less) to represent and summarize your work.

We’re looking for cool submissions that resonate with the theme and that make great use of DeepLens. We will watch all of the videos and then share the most intriguing ones.

Challenge #1 – Inclusivity Challenge
The first challenge was inspired by the Special Olympics, which took place in Seattle last week. We invite you to use your DeepLens to create a project that drives inclusion, overcomes barriers, and strengthens the bonds between people of all abilities. You could gauge the physical accessibility of buildings, provide audio guidance using Polly for people with impaired sight, or create educational projects for children with learning disabilities. Any project that supports this theme is welcome.

For each project that meets the entry criteria we will make a donation of $249 (the retail price of an AWS DeepLens) to the Northwest Center, a non-profit organization based in Seattle. This organization works to advance equal opportunities for children and adults of all abilities and we are happy to be able to help them to further their mission. Your work will directly benefit this very worthwhile goal!

As an example of what we are looking for, ASLens is a project created by Chris Coombs of Melbourne, Australia. It recognizes and understands American Sign Language (ASL) and plays the audio for each letter. Chris used Amazon SageMaker and Polly to implement ASLens (you can watch the video, learn more and read the code).

To learn more, visit the DeepLens Challenge page. Entries for the first challenge are due by midnight (PT) on July 22nd and I can’t wait to see what you come up with!

Jeff;

PS – The DeepLens Resources page is your gateway to tutorial videos, documentation, blog posts, and other helpful information.

New – Amazon Linux WorkSpaces

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-amazon-linux-workspaces/

Over two years ago I explained why I Love my Amazon WorkSpace. Today, with well over three years of experience under my belt, I have no reason to return to a local, non-managed desktop. I never have to worry about losing or breaking my laptop, keeping multiple working environments in sync, or planning for disruptive hardware upgrades. Regardless of where I am or what device I am using, I am highly confident that I can log in to my WorkSpace, find the apps and files that I need, and get my work done.

Now with Amazon Linux 2
As a WorkSpaces user, you can already choose between multiple hardware configurations and software bundles. You can choose hardware with the desired amount of compute power (expressed in vCPUs — virtual CPUs) and memory, configure as much storage as you need, and choose between Windows 7 and Windows 10 desktop experiences. If your organization already owns Windows licenses, you can bring them to the AWS Cloud via our BYOL (Bring Your Own License) program.

Today we are giving you another desktop option! You can now launch a WorkSpace that runs Amazon Linux 2, the Amazon Linux WorkSpaces Desktop, Firefox, Evolution, Pidgin, and LibreOffice. The Amazon Linux WorkSpaces Desktop is based on MATE. It makes very efficient use of CPU and memory, allowing you to be both productive and frugal. It includes a full set of tools and utilities including a file manager, image editor, and terminal emulator.

Here are a few of the ways that Amazon Linux WorkSpaces can benefit you and your organization:

Development Environment – The combination of Amazon Linux WorkSpaces and Amazon Linux 2 makes for a great development environment. You get all of the AWS SDKs and tools, plus developer favorites such as gcc, Mono, and Java. You can build and test applications in your Amazon Linux WorkSpace and then deploy them to Amazon Linux 2 running on-premises or in the cloud.

Productivity Environment – LibreOffice gives you (or the users that you support) access to a complete suite of productivity tools that are compatible with a wide range of proprietary and open source document formats.

Kiosk Support – You can build and economically deploy applications that run in kiosk mode on inexpensive and durable tablets, with centralized management and support.

Linux Workloads – You can run data science, machine learning, engineering, and other Linux-friendly workloads, taking advantage of AWS storage, analytics, and machine learning services.

There are also some operational and financial benefits. On the ops side, organizations that need to provide their users with a mix of Windows and Linux environments can create a unified operations model with a single set of tools and processes that meet the needs of the entire user community. Financially, this new option makes very efficient use of hardware, and the hourly usage model made possible by the AutoStop running mode can further reduce your costs.

Your WorkSpaces run in a Virtual Private Cloud (VPC), and can be configured to access your existing on-premises resources using a VPN connection across a dedicated line courtesy of AWS Direct Connect. You can access and make use of other AWS resources including Elastic File Systems.

Amazon Linux 2 with Long Term Support (LTS)
As part of today’s launch, we are also announcing that Long Term Support (LTS) is now available for Amazon Linux 2. We announced the first LTS candidate late last year, and are now ready to make the actual LTS version available. We will provide support, updates, and bug fixes for all core packages for five years, until June 30, 2023. You can do an in-place upgrade from the Amazon Linux 2 LTS Candidate to the LTS release, but you will need to do a fresh installation if you are migrating from the Amazon Linux AMI.

You can run Amazon Linux 2 on your Amazon Linux WorkSpaces cloud desktops, on EC2 instances, in your data center, and on your laptop! Virtual machine images are available for Docker, VMware ESXi, Microsoft Hyper-V, KVM, and Oracle VM VirtualBox.

The extras mechanism in Amazon Linux 2 gives you access to the latest application software in the form of curated software bundles, packaged into topics that contain all of the dependencies needed for the software to run. Over time, as these applications stabilize and mature, they become candidates for the Amazon Linux 2 core channel, and subject to the Amazon Linux 2 Long Term Support policies. To learn more, read about the Extras Library.

To learn more about Amazon Linux 2, read my post, Amazon Linux 2 – Modern, Stable, and Enterprise-Friendly.

Launching an Amazon Linux WorkSpace
In this section, I am playing the role of the WorkSpaces administrator, and am setting up a Linux WorkSpace for my own use. In a real-world situation I would generally be creating WorkSpaces for other members of my organization.

I can launch an Amazon Linux WorkSpace from the AWS Management Console with a couple of clicks. If I am setting up Linux WorkSpaces for an entire team or division, I can also use the WorkSpaces API or the WorkSpaces CLI. I can use my organization’s existing Active Directory or I can have WorkSpaces create and manage one for me. I could also use the WorkSpaces API to build a self-serve provisioning and management portal for my users.
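
As a rough illustration of what such a portal might call, here is a hedged boto3 sketch; the directory ID, bundle ID, user name, and KMS key are placeholders:

import boto3

workspaces = boto3.client("workspaces")

# Provision a single AutoStop, encrypted Amazon Linux WorkSpace for one user.
response = workspaces.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-1234567890",                  # placeholder directory
            "UserName": "jbarr",
            "BundleId": "wsb-0123456789",                   # placeholder Amazon Linux 2 bundle
            "VolumeEncryptionKey": "alias/aws/workspaces",  # placeholder KMS key
            "RootVolumeEncryptionEnabled": True,
            "UserVolumeEncryptionEnabled": True,
            "WorkspaceProperties": {"RunningMode": "AUTO_STOP"},
            "Tags": [{"Key": "Department", "Value": "Blogging"}],
        }
    ]
)

print(response["PendingRequests"], response["FailedRequests"])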

I’m using a directory created by WorkSpaces, so I’ll enter the identifying information for each user (me, in this case), and then click Next Step:

I select one of the Amazon Linux 2 Bundles, choosing the combination of software and hardware that is the best fit for my needs, and click Next Step:

I choose the AutoStop running mode, indicate that I want my root and user volumes to be encrypted, and tag the WorkSpace, then click Next Step:

I review the settings and click Launch WorkSpaces to proceed:

The WorkSpace starts out in PENDING status and transitions to AVAILABLE within 20 minutes:

Signing In
When the WorkSpace is AVAILABLE, I receive an email with instructions for accessing it:

I click the link and set my password:

And then I download the client (or two) of my choice:

I install and launch the client, enter my registration code, and click Register:

And then I sign in to my Amazon Linux WorkSpace:

And here it is:

The WorkSpace is domain-joined to my Active Directory:

Because this is a managed desktop, I can easily modify the size of the root or the user volumes or switch to hardware with more or less power. This is, safe to say, far easier and more cost-effective than making on-demand changes to physical hardware sitting on your users’ desktops out in the field!

Available Now
You can launch Amazon Linux WorkSpaces in all eleven AWS Regions where Amazon WorkSpaces is already available:

Pricing is up to 15% lower than for comparable Windows WorkSpaces; see the Amazon WorkSpaces Pricing page for more info.

If you are new to WorkSpaces, the Amazon WorkSpaces Free Tier will let you run two AutoStop WorkSpaces for up to 40 hours per month, for two months, at no charge.

Jeff;

PS – If you are in San Francisco, join me at the AWS Loft today at 5 PM to learn more (registration is required).


New Collaborative Editing for Amazon WorkDocs – Powered by Hancom Thinkfree Office Online

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-collaborative-editing-for-amazon-workdocs-powered-by-hancom-thinkfree-office-online/

I’ve got some important news for Amazon WorkDocs users. As a result of our partnership with Hancom, you can now edit Microsoft Office documents in your browser without having to install any applications or connect with another web service. You can quickly create a document, share it with team members, and let them make changes and contribute to the finished product. Everyone can see changes in real-time as they work together, regardless of where they are located or what device they are using to access WorkDocs.

This feature is available at no extra charge and you can start using it as soon as your WorkDocs administrator enables it. Let’s take a tour!

Collaborative Editing
I start by creating a document, spreadsheet, or presentation using the New menu. I’ll create a document:

I can create and edit my document from the comfort of my web browser:

Then I save and rename it (a default name is generated using the creation time as a starting point):

Next, I share it with my colleague Manoj so that he can take a look and make any desired edits:

I can see his edits in real-time:

And I can see all of the participants in the collaborative editing session:

WorkDocs creates a new revision after all of the participants have exited the editing session.

I can also create new spreadsheets and presentations and edit existing ones! Here’s a new spreadsheet:

And here’s an existing presentation (I opened one from 2008 just for fun):

Now Available
This feature is available now in the US West (Oregon) Region and will become available in other regions in the next couple of weeks. It is available at no extra charge to all WorkDocs users.

Jeff;

Amazon EC2 Update – Additional Instance Types, Nitro System, and CPU Options

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-ec2-update-additional-instance-types-nitro-system-and-cpu-options/

I have a backlog of EC2 updates to share with you. We’ve been releasing new features and instance types at a rapid clip and it is time to catch up. Here’s a quick peek at where we are and where we are going…

Additional Instance Types
Here’s a quick recap of the most recent EC2 instance type announcements:

Compute-Intensive – The compute-intensive C5d instances provide a 25% to 50% performance improvement over the C4 instances. They are available in 5 regions and offer up to 72 vCPUs, 144 GiB of memory, and 1.8 TB of local NVMe storage.

General Purpose – The general purpose M5d instances are also available in 5 regions. They offer up to 96 vCPUs, 384 GiB of memory, and 3.6 TB of local NVMe storage.

Bare Metal – The i3.metal instances became generally available in 5 regions a couple of weeks ago. You can run performance analysis tools that are hardware-dependent, workloads that require direct access to bare-metal infrastructure, applications that need to run in non-virtualized environments for licensing or support reasons, and container environments such as Clear Containers, while you take advantage of AWS features such as Elastic Block Store (EBS), Elastic Load Balancing, and Virtual Private Clouds. Bare metal instances with 6 TB, 9 TB, 12 TB, and more memory are in the works, all designed specifically for SAP HANA and other in-memory workloads.

Innovation and the Nitro System
The Nitro system is a rich collection of building blocks that can be assembled in many different ways, giving us the flexibility to design and rapidly deliver EC2 instance types with an ever-broadening selection of compute, storage, memory, and networking options. We will deliver new instance types more quickly than ever in the months to come, with the goal of helping you to build, migrate, and run even more types of workloads.

Local NVMe Storage – The new C5d, M5d, and bare metal EC2 instances feature our Nitro local NVMe storage building block, which is also used in the Xen-virtualized I3 and F1 instances. This building block provides direct access to high-speed local storage over a PCI interface and transparently encrypts all data using dedicated hardware. It also provides hardware-level isolation between storage devices and EC2 instances so that bare metal instances can benefit from local NVMe storage.

Nitro Security Chip – A component of our AWS server designs that continuously monitors and protects hardware resources, and independently verifies firmware each time a system boots.

Nitro Hypervisor – A thin, quiescent hypervisor that manages memory and CPU allocation, and delivers performance that is indistinguishable from bare metal for most workloads (Brendan Gregg of Netflix benchmarked it at less than 1%).

Networking – Hardware support for the software defined network inside of each Virtual Private Cloud (VPC), Enhanced Networking, and Elastic Network Adapter.

Elastic Block Storage – Hardware EBS processing including CPU-intensive cryptographic operations.

Moving storage, networking, and security functions to hardware has important consequences for both bare metal and virtualized instance types:

Virtualized instances can make just about all of the host’s CPU power and memory available to the guest operating systems since the hypervisor plays a greatly diminished role.

Bare metal instances have full access to the hardware, but also have the same flexibility and feature set as virtualized EC2 instances including CloudWatch metrics, EBS, and VPC.

To learn more about the hardware and software that make up the Nitro system, watch Amazon EC2 Bare Metal Instances or C5 Instances and the Evolution of Amazon EC2 Virtualization and take a look at The Nitro Project: Next-Generation EC2 Infrastructure.

CPU Options
This feature provides you with additional control over your EC2 instances and lets you optimize your instance for a particular workload. First, you can specify the desired number of vCPUs at launch time. This allows you to control the vCPU to memory ratio for Oracle and SQL Server workloads that need high memory, storage, and I/O but perform well with a low vCPU count. As a result, you can optimize your vCPU-based licensing costs when you Bring Your Own License (BYOL). Second, you can disable Intel® Hyper-Threading Technology (Intel® HT Technology) on instances that run compute-intensive workloads. These workloads sometimes exhibit diminished performance when Intel HT is enabled. Both of these options are available when you launch an instance using the AWS Command Line Interface (CLI) or one of the AWS SDKs. You simply specify the total number of cores and the number of threads per core using values chosen from the CPU Cores and Threads per CPU Core Per Instance Type table. Here’s how you would launch an instance with 6 CPU cores and Intel® HT Technology disabled:

$ aws ec2 run-instances --image-id ami-1a2b3c4d --instance-type r4.4xlarge --cpu-options "CoreCount=6,ThreadsPerCore=1"

To learn more, read about Optimizing CPU Options.

Help Wanted
The EC2 team is always hiring! Here are a few of their open positions:

Jeff;

Amazon Polly Plugin for WordPress Update – Translate and Vocalize Your Content

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-polly-plugin-for-wordpress-update-translate-and-vocalize-your-content/

Earlier this year I showed you how to Give Your WordPress Blog a Voice with Amazon Polly and walked you through the steps involved in installing, configuring, and using the Amazon Polly for WordPress plugin. Today we are making this plugin even more powerful, adding the ability to translate your content into one or more languages and to produce audio versions of each translation. The translation is implemented using Amazon Translate, a neural machine translation service that is part of our portfolio of machine learning services.

The original version of the plugin works like this:

And the new version works like this:

This version of the plugin supports translation of English-language web content into Spanish, German, French, and Portuguese, with plans to support other languages in the future.
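
Under the hood, the plugin calls Amazon Translate’s TranslateText operation once per target language. Here is a minimal, illustrative boto3 equivalent; the text and language codes are just examples:

import boto3

translate = boto3.client("translate")

# Translate a sentence of post content from English to Spanish.
result = translate.translate_text(
    Text="Earlier this year I showed you how to give your WordPress blog a voice.",
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)

print(result["TranslatedText"])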

Updating and Configuring the Plugin
My earlier post covered the steps involved in launching an Amazon Lightsail instance and setting up the plugin, and I won’t repeat them here. The first step is to edit my existing IAM policy so that it allows calls to the TranslateText function:

Then I log in to the WordPress Admin dashboard, click Plugins, and see that a new version is available:

I click update now, and wait a few seconds for the update. Then I click Settings to enable translation:

I click Enable translation support and Save Changes, then come back and set up the details. I select all of the available target languages, leave the voices and labels as-is, and click Save Changes to move forward:

Creating Translations and Vocalizations
Now I can create a new post and exercise the plugin. I enter the title and text for the post as usual:

Before moving forward, I can click How much will this cost to convert? to check on costs.

The price seems reasonable to me. I publish the post, and then click Translate to generate audio in 4 other languages. This happens in a matter of seconds:

The published post now includes a player that lets me listen to the original audio or any of the 4 translations:

Here are the audio versions:

English:
Spanish:
German:
French:
Portuguese:

I have lots of customization options. For example, I can enable transcripts of the translated text:

The transcripts are shown in the post:

I can change the labels that are used for each language:

Here are the updated labels:

I can also specify the Polly voice for each target language:

Now Available
The updated plugin is available now and you can start using it today! As you can see, it uses the “magic” of machine translation and text-to-speech to make your web content accessible to a wider audience, in both written and spoken form.

Jeff;


AWS DeepLens Now Shipping – Order One Today!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-deeplens-now-shipping-order-one-today/

AWS DeepLens is a video camera that runs deep learning models directly on the device, out in the field. I wrote about the hardware and system software in depth last year; here’s a quick recap:

Hardware – 4 megapixel camera (1080P video), 2D microphone array, Intel Atom® Processor, dual-band Wi-Fi, USB and micro HDMI ports, 8 GB of memory for models and code.

Software – Ubuntu 16.04, AWS Greengrass Core, device-optimized versions of MXNet and Intel® clDNN library, support for other deep learning frameworks.

The response to the launch at AWS re:Invent was immediate and gratifying! Educators, students, and developers signed up for hands-on sessions and started to build and train models right away. Their enthusiasm continued throughout the preview period and into this year’s AWS Summit season, where we did our best to provide all interested parties with access to devices, tools, and training.

Hackathons and Challenges
We made DeepLens devices available to participants in last month’s HackTillDawn. I was fortunate enough to be able to attend the event and to help to choose the three winners. It was amazing to watch the teams, most with no previous machine learning or computer vision experience, dive right in and build interesting, sophisticated applications designed to enhance the attendee experience at large-scale music festivals. The three winners went on to compete at EDC Vegas, where the Grand Prize winner (Find Your Totem) was chosen. Congrats to the team, and have fun at EDC Orlando!

We also ran the AWS DeepLens Challenge, asking participants to build machine learning projects that made use of DeepLens, with bonus points for the use of Amazon SageMaker and/or AWS Lambda. The submissions were as diverse as they were interesting, with applications designed for children, adults, and animals. Details on all of the submissions, including demo videos and source code, are available on the Community Projects page. The three winning applications were ReadToMe (first place), Dee (second place), and SafeHaven (third place).

From what I can tell, DeepLens has proven itself as an excellent learning vehicle. While speaking to the attendees at HackTillDawn, I learned that many of them were eager to get some hands-on experience that they could use to broaden their skillsets and to help them to progress in their careers.

Preview Updates
During the preview period, the DeepLens team has stayed heads-down, focusing on making the device even more capable. Significant additions include:

Gluon Support – Computer vision models can be built using Gluon (an imperative interface to MXNet), trained, imported to DeepLens, and deployed; there’s a small sketch after this list.

SageMaker Import – Models can be built and trained in Amazon SageMaker and then imported to DeepLens.

Model Optimizer – The optimizer runs on the device and optimizes downloaded MXNet models so that they run efficiently on the DeepLens GPU.
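
To illustrate the Gluon side of that workflow, here is a toy sketch that defines, hybridizes, and exports a model in the form that can then be imported; the network itself is a placeholder, not a real computer vision model:

from mxnet import nd
from mxnet.gluon import nn

# Define a tiny hybridizable network (placeholder architecture).
net = nn.HybridSequential()
net.add(nn.Conv2D(channels=16, kernel_size=3, activation="relu"),
        nn.GlobalAvgPool2D(),
        nn.Dense(2))
net.initialize()
net.hybridize()

# One forward pass builds the symbolic graph; export then writes
# toy-model-symbol.json and toy-model-0000.params for import.
net(nd.random.uniform(shape=(1, 3, 224, 224)))
net.export("toy-model")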

Now Shipping
I am happy to report that DeepLens is now shipping and available to order from Amazon.com. You can get one of your very own and start building your own deep learning applications within days. Devices can be shipped to addresses in the United States, with additional destinations in the works.

We are also rounding out the initial feature set with the addition of some important new capabilities:

Expanded Framework Support – DeepLens now supports the TensorFlow and Caffe frameworks.

Expanded MXNet Layer Support – DeepLens now supports the Deconvolution, L2Normalization, and LRN layers provided by MXNet.

Kinesis Video Streams – The video stream from the DeepLens camera can now be used in conjunction with Amazon Kinesis Video Streams. You can stream the raw camera feed to the cloud and then use Amazon Rekognition Video to extract objects, faces, and content from the video.

New Sample Project – DeepLens now includes a sample project for head pose detection (powered by TensorFlow). You can examine the sample notebook to see how the model was constructed.
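
Coming back to the Kinesis Video Streams capability above, here’s a minimal, hypothetical boto3 sketch of attaching Amazon Rekognition Video to a camera stream; the stream ARNs, role ARN, collection ID, and processor name are all placeholders:

import boto3

rekognition = boto3.client('rekognition')

# Placeholder ARNs and names -- substitute your own stream, role, and collection.
KVS_ARN = 'arn:aws:kinesisvideo:us-east-1:123456789012:stream/deeplens-feed/1234567890'
KDS_ARN = 'arn:aws:kinesis:us-east-1:123456789012:stream/deeplens-detections'
ROLE_ARN = 'arn:aws:iam::123456789012:role/RekognitionStreamProcessorRole'

# Create a stream processor that searches each frame for faces from a collection.
rekognition.create_stream_processor(
    Name='deeplens-face-search',
    Input={'KinesisVideoStream': {'Arn': KVS_ARN}},
    Output={'KinesisDataStream': {'Arn': KDS_ARN}},
    RoleArn=ROLE_ARN,
    Settings={'FaceSearch': {'CollectionId': 'my-face-collection',
                             'FaceMatchThreshold': 85.0}},
)

# Start processing; detection results arrive on the Kinesis data stream.
rekognition.start_stream_processor(Name='deeplens-face-search')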

I am looking forward to seeing what you build with your very own DeepLens. Drop me a line and let me know!

Jeff;

Amazon EKS – Now Generally Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-eks-now-generally-available/

We announced Amazon Elastic Container Service for Kubernetes and invited customers to take a look at a preview during re:Invent 2017. Today I am pleased to let you know that Amazon EKS is available for production use. It has been certified as Kubernetes conformant and is ready to run your existing Kubernetes workloads.

Based on the most recent data from the Cloud Native Computing Foundation, we know that AWS is the leading environment for Kubernetes, with 57% of all companies that run Kubernetes choosing to do so on AWS. Customers tell us that Kubernetes is core to their IT strategy, and they are already running hundreds of millions of containers on AWS every week. Amazon EKS simplifies the process of building, securing, operating, and maintaining Kubernetes clusters, and brings the benefits of container-based computing to organizations that want to focus on building applications instead of setting up a Kubernetes cluster from scratch.

AWS Inside
Amazon EKS takes advantage of the fact that it is running in the AWS Cloud, making great use of many AWS services and features, while ensuring that everything you already know about Kubernetes remains applicable and helpful. Here’s an overview:

Multi-AZ – The Kubernetes control plane (the API server and the etcd database) is run in a highly available configuration across three AWS Availability Zones. Master nodes are monitored and replaced if they fail, and are also patched and updated automatically.

IAM Integration – Amazon EKS uses the Heptio Authenticator for authentication. You can make use of IAM roles and avoid the pain that comes with managing yet another set of credentials.

Load Balancer Support – You can route traffic to your worker nodes using the AWS Network Load Balancer, the AWS Application Load Balancer, or the original (classic) Elastic Load Balancer.

EBS – Kubernetes PersistentVolumes (used for cluster storage) are implemented as Amazon Elastic Block Store (EBS) volumes.

Route 53 – The External DNS project allows services in Kubernetes clusters to be accessed via Route 53 DNS records. This simplifies service discovery and supports load balancing.

Auto Scaling – Your clusters can make use of Auto Scaling, growing and shrinking in response to changes in load.

Container Interface – The Container Network Interface for Kubernetes uses Elastic Network Interfaces to provide static IP addresses for Kubernetes Pods.

For a more detailed look at these features, read about Amazon Elastic Container Service for Kubernetes.

Amazon EKS is built around a shared-responsibility model; the control plane nodes are managed by AWS and you run the worker nodes. This gives you high availability and simplifies the process of moving existing workloads to EKS. Here’s a very high-level overview:

 

Creating an Amazon EKS Cluster
To create a cluster, I provision the control plane, provision and connect the worker cluster, and launch my containers. In the example below I will create a new VPC for my worker cluster, but I can also use an existing one, as long as the desired subnets are tagged with the name of my Kubernetes cluster.

Following the directions in the Amazon EKS Getting Started Guide, I begin by creating an IAM role. Kubernetes assumes this role and uses it to create AWS resources such as Elastic Load Balancers. Once created, this role can be used for all of my clusters. I simply create a CloudFormation stack using the template referred to in the Getting Started Guide:

I acknowledge that the stack will create a role, and click Create to proceed:

The role is created in seconds, and the ARN is shown in the stack’s Output tab (I’ll need it later):

Next, I create a VPC (Virtual Private Cloud) using the sample template from the Getting Started Guide, with the following parameters:

The template creates a VPC that has two subnets, along with all of the necessary route tables, gateways, and security groups:

As is the case with the ARN, I will need the ID of the security group later.

Next, I download kubectl and set it up to use the Heptio Authenticator. The authenticator allows kubectl to make use of IAM authentication when it accesses my Kubernetes clusters. Instructions for downloading and setup are in the Getting Started Guide and I follow them as directed.

To wrap up the setup process, I ensure that I am running the latest version of the AWS Command Line Interface (CLI); if I were running an older version, the eks command would not be available:

With my IAM role, my VPC, and my tooling all in place, I am ready to create my first Amazon EKS cluster!

I log in to the EKS Console using an IAM user that has administrative privileges (root credentials cannot be used due to the way that the Heptio Authenticator works) and click Create cluster:

I enter a name for my cluster (which must match the one that I entered when I created the VPC, because Kubernetes relies on tagging of subnets), along with the subnet IDs and the security group ID from the VPC, and click Create:

My control plane cluster starts out in CREATING status, and transitions to ACTIVE in 10 minutes or less:
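
If you prefer to script this step rather than click through the console, the same cluster can be created with boto3. Here’s a sketch; the role ARN, subnet IDs, and security group ID are placeholders for the values produced by the earlier CloudFormation stacks:

import boto3

eks = boto3.client('eks', region_name='us-west-2')

# Placeholder role ARN, subnet IDs, and security group ID from the earlier stacks.
response = eks.create_cluster(
    name='jeff1',
    roleArn='arn:aws:iam::123456789012:role/eksServiceRole',
    resourcesVpcConfig={
        'subnetIds': ['subnet-aaaa1111', 'subnet-bbbb2222'],
        'securityGroupIds': ['sg-cccc3333'],
    },
)
print(response['cluster']['status'])  # CREATING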

Now I need to configure kubectl so that it can access my cluster. Before I can do this, I need to use the CLI to retrieve the certificate authority data:

$ aws eks describe-cluster --region us-west-2 --cluster-name jeff1 --query cluster.certificateAuthority.data

This command returns a long string of data that I’ll need in a minute.
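
If you would rather fetch this programmatically, here’s a small boto3 sketch that retrieves both the certificate authority data and the cluster endpoint:

import boto3

eks = boto3.client('eks', region_name='us-west-2')

cluster = eks.describe_cluster(name='jeff1')['cluster']
ca_data = cluster['certificateAuthority']['data']  # base64-encoded CA certificate
endpoint = cluster['endpoint']                     # https://....eks.amazonaws.com

print(endpoint)
print(ca_data[:40] + '...')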

I also retrieve the cluster endpoint from the console:

I make sure that I am in my home directory, create sub-directory .kube, and create file config-jeff1 in it. Then I open config-jeff1 in my editor, copy the templated config file from the Getting Started Guide and finalize the cluster endpoint, certificate, and cluster name. My file looks like this:

apiVersion: v1
clusters:
- cluster:
    server: https://FDA1964D96C9EEF2B76684C103F31C67.sk1.us-west-2.eks.amazonaws.com
    certificate-authority-data: "...."
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"

Before I test kubectl, I need to ensure that my CLI is configured to use the same IAM user that I used when I logged in to the console to create the cluster:

And now I can run a quick test to verify that everything is working as expected:

At this point I have set up my master VPC and my Kubernetes control plane. I’m ready to create some worker nodes (EC2 instances). Once again, this is done using a CloudFormation template:

The stack is created in a couple of minutes and sets up IAM roles, security groups, and auto scaling:

Now I need to set up a configuration map (ConfigMap) so that the worker nodes know how to join the cluster. I download the map, add the ARN of the NodeInstanceRole from the stack, and apply the configuration:

Then I check and see that my nodes are ready:

Running the Guest Book Sample
My Kubernetes cluster is all set and I can use the Guest Book application to test it out. I create the Kubernetes replication controllers and services:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.0/examples/guestbook-go/redis-master-controller.json
replicationcontroller "redis-master" created
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.0/examples/guestbook-go/redis-master-service.json
service "redis-master" created
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.0/examples/guestbook-go/redis-slave-controller.json
replicationcontroller "redis-slave" created
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.0/examples/guestbook-go/redis-slave-service.json
service "redis-slave" created
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.0/examples/guestbook-go/guestbook-controller.json
replicationcontroller "guestbook" created
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/v1.10.0/examples/guestbook-go/guestbook-service.json
service "guestbook" created

I list the running services and capture the external IP address & port:

and visit the address in my web browser:

Things to Know
We make upstream contributions to the Kubernetes repo and to projects such as the CNI Plugin, the Heptio AWS Authenticator, and Virtual Kubelet. We are currently looking for Systems Development Engineers, DevOps Engineers, Product Managers, and Solution Architects with Kubernetes experience; check out the full list of open positions to learn more.

Amazon EKS is available today in the US East (N. Virginia) and US West (Oregon) Regions and will be expanding to others very soon. We have a detailed roadmap and plan to crank out plenty of additional features this year.

You pay $0.20 per hour for the EKS Control Plane, and usual EC2, EBS, and Load Balancing prices for resources that run in your account. See the EKS Pricing page for more information.

Jeff;

 

EC2 Instance Update – M5 Instances with Local NVMe Storage (M5d)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-instance-update-m5-instances-with-local-nvme-storage-m5d/

Earlier this month we launched the C5 Instances with Local NVMe Storage and I told you that we would be doing the same for additional instance types in the near future!

Today we are introducing M5 instances equipped with local NVMe storage. Available for immediate use in 5 regions, these instances are a great fit for workloads that require a balance of compute and memory resources. Here are the specs:

Instance Name | vCPUs | RAM | Local Storage | EBS-Optimized Bandwidth | Network Bandwidth
m5d.large | 2 | 8 GiB | 1 x 75 GB NVMe SSD | Up to 2.120 Gbps | Up to 10 Gbps
m5d.xlarge | 4 | 16 GiB | 1 x 150 GB NVMe SSD | Up to 2.120 Gbps | Up to 10 Gbps
m5d.2xlarge | 8 | 32 GiB | 1 x 300 GB NVMe SSD | Up to 2.120 Gbps | Up to 10 Gbps
m5d.4xlarge | 16 | 64 GiB | 1 x 600 GB NVMe SSD | 2.210 Gbps | Up to 10 Gbps
m5d.12xlarge | 48 | 192 GiB | 2 x 900 GB NVMe SSD | 5.0 Gbps | 10 Gbps
m5d.24xlarge | 96 | 384 GiB | 4 x 900 GB NVMe SSD | 10.0 Gbps | 25 Gbps

The M5d instances are powered by Custom Intel® Xeon® Platinum 8175M series processors running at 2.5 GHz, including support for AVX-512.

You can use any AMI that includes drivers for the Elastic Network Adapter (ENA) and NVMe; this includes the latest Amazon Linux, Microsoft Windows (Server 2008 R2, Server 2012, Server 2012 R2 and Server 2016), Ubuntu, RHEL, SUSE, and CentOS AMIs.
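
As a quick illustration, here’s a hedged boto3 sketch that launches an m5d.large; the AMI ID, key pair, and subnet are placeholders, and no block device mapping is needed for the local NVMe volume (more on that below):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Placeholder AMI, key pair, and subnet; pick an AMI with ENA and NVMe drivers.
response = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',
    InstanceType='m5d.large',
    KeyName='my-key-pair',
    SubnetId='subnet-aaaa1111',
    MinCount=1,
    MaxCount=1,
)
print(response['Instances'][0]['InstanceId'])
# After boot, the 75 GB local NVMe volume simply appears in the guest
# (for example as /dev/nvme1n1 on Linux); no block device mapping is required.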

Here are a couple of things to keep in mind about the local NVMe storage on the M5d instances:

Naming – You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (/dev/nvme*1 on Linux) after the guest operating system has booted.

Encryption – Each local NVMe device is hardware encrypted using the XTS-AES-256 block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated.

Lifetime – Local NVMe devices have the same lifetime as the instance they are attached to, and do not stick around after the instance has been stopped or terminated.

Available Now
M5d instances are available in On-Demand, Reserved Instance, and Spot form in the US East (N. Virginia), US West (Oregon), EU (Ireland), US East (Ohio), and Canada (Central) Regions. Prices vary by Region, and are just a bit higher than for the equivalent M5 instances.

Jeff;

 

AWS Online Tech Talks – June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-june-2018/

Join us this month to learn about AWS services and solutions. New this month, we have a fireside chat with the GM of Amazon WorkSpaces and our 2nd episode of the “How to re:Invent” series. We’ll also cover best practices, deep dives, use cases and more! Join us and register today!

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

 

Analytics & Big Data

June 18, 2018 | 11:00 AM – 11:45 AM PT – Get Started with Real-Time Streaming Data in Under 5 Minutes – Learn how to use Amazon Kinesis to capture, store, and analyze streaming data in real time, including IoT device data, VPC flow logs, and clickstream data.
June 20, 2018 | 11:00 AM – 11:45 AM PT – Insights For Everyone – Deploying Data across your Organization – Learn how to deploy data at scale using AWS Analytics and QuickSight’s new reader role and usage based pricing.

 

AWS re:Invent
June 13, 2018 | 05:00 PM – 05:30 PM PT – Episode 2: AWS re:Invent Breakout Content Secret Sauce – Hear from one of our own AWS content experts as we dive deep into the re:Invent content strategy and how we maintain a high bar.
Compute

June 25, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Containerized Workloads with Amazon EC2 Spot Instances – Learn how to efficiently deploy containerized workloads and easily manage clusters at any scale at a fraction of the cost with Spot Instances.

June 26, 2018 | 01:00 PM – 01:45 PM PT – Ensuring Your Windows Server Workloads Are Well-Architected – Learn about the benefits, best practices, and tools for running your Microsoft workloads on AWS with a well-architected approach.

 

Containers
June 25, 2018 | 09:00 AM – 09:45 AM PT – Running Kubernetes on AWS – Learn the basics of running Kubernetes on AWS, including how to set up masters, networking, and security, and how to add auto scaling to your cluster.

 

Databases

June 18, 2018 | 01:00 PM – 01:45 PM PT – Oracle to Amazon Aurora Migration, Step by Step – Learn how to migrate your Oracle database to Amazon Aurora.
DevOps

June 20, 2018 | 09:00 AM – 09:45 AM PT – Set Up a CI/CD Pipeline for Deploying Containers Using the AWS Developer Tools – Learn how to set up a CI/CD pipeline for deploying containers using the AWS Developer Tools.

 

Enterprise & Hybrid
June 18, 2018 | 09:00 AM – 09:45 AM PT – De-risking Enterprise Migration with AWS Managed Services – Learn how enterprise customers are de-risking cloud adoption with AWS Managed Services.

June 19, 2018 | 11:00 AM – 11:45 AM PT – Launch AWS Faster using Automated Landing Zones – Learn how the AWS Landing Zone can automate the set up of best practice baselines when setting up new AWS environments.

June 21, 2018 | 11:00 AM – 11:45 AM PT – Leading Your Team Through a Cloud Transformation – Learn how you can help lead your organization through a cloud transformation.

June 21, 2018 | 01:00 PM – 01:45 PM PT – Enabling New Retail Customer Experiences with Big Data – Learn how AWS can help retailers realize actual value from their big data and deliver on differentiated retail customer experiences.

June 28, 2018 | 01:00 PM – 01:45 PM PT – Fireside Chat: End User Collaboration on AWS – Learn how End User Compute services can help you deliver access to desktops and applications anywhere, anytime, using any device.
IoT

June 27, 2018 | 11:00 AM – 11:45 AM PT – AWS IoT in the Connected Home – Learn how to use AWS IoT to build innovative Connected Home products.

 

Machine Learning

June 19, 2018 | 09:00 AM – 09:45 AM PT – Integrating Amazon SageMaker into your Enterprise – Learn how to integrate Amazon SageMaker and other AWS Services within an Enterprise environment.

June 21, 2018 | 09:00 AM – 09:45 AM PT – Building Text Analytics Applications on AWS using Amazon Comprehend – Learn how you can unlock the value of your unstructured data with NLP-based text analytics.

 

Management Tools

June 20, 2018 | 01:00 PM – 01:45 PM PT – Optimizing Application Performance and Costs with Auto Scaling – Learn how selecting the right scaling option can help optimize application performance and costs.

 

Mobile
June 25, 2018 | 11:00 AM – 11:45 AM PT – Drive User Engagement with Amazon Pinpoint – Learn how Amazon Pinpoint simplifies and streamlines effective user engagement.

 

Security, Identity & Compliance

June 26, 2018 | 09:00 AM – 09:45 AM PT – Understanding AWS Secrets Manager – Learn how AWS Secrets Manager helps you rotate and manage access to secrets centrally.
June 28, 2018 | 09:00 AM – 09:45 AM PT – Using Amazon Inspector to Discover Potential Security Issues – See how Amazon Inspector can be used to discover security issues of your instances.

 

Serverless

June 19, 2018 | 01:00 PM – 01:45 PM PT – Productionize Serverless Application Building and Deployments with AWS SAM – Learn expert tips and techniques for building and deploying serverless applications at scale with AWS SAM.

 

Storage

June 26, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway – Learn how you can reduce your on-premises infrastructure by using the AWS Storage Gateway to connect your applications to the scalable and reliable AWS storage services.
June 27, 2018 | 01:00 PM – 01:45 PM PT – Changing the Game: Extending Compute Capabilities to the Edge – Discover how to change the game for IIoT and edge analytics applications with AWS Snowball Edge plus enhanced Compute instances.
June 28, 2018 | 11:00 AM – 11:45 AM PT – Big Data and Analytics Workloads on Amazon EFS – Get best practices and deployment advice for running big data and analytics workloads on Amazon EFS.

Build your own weather station with our new guide!

Post Syndicated from Richard Hayler original https://www.raspberrypi.org/blog/build-your-own-weather-station/

One of the most common enquiries I receive at Pi Towers is “How can I get my hands on a Raspberry Pi Oracle Weather Station?” Now the answer is: “Why not build your own version using our guide?”

Tadaaaa! The BYO weather station fully assembled.

Our Oracle Weather Station

In 2016 we sent out nearly 1000 Raspberry Pi Oracle Weather Station kits to schools around the world that had applied to be part of our weather station programme. In the original kit was a special HAT that allows the Pi to collect weather data with a set of sensors.

The original Raspberry Pi Oracle Weather Station HAT

We designed the HAT to enable students to create their own weather stations and mount them at their schools. As part of the programme, we also provide an ever-growing range of supporting resources. We’ve seen Oracle Weather Stations in great locations with huge differences in climate, and they’ve even recorded the effects of a solar eclipse.

Our new BYO weather station guide

We only had a single batch of HATs made, and unfortunately we’ve given nearly* all the Weather Station kits away. Not only are the kits really popular, we also receive lots of questions about how to add extra sensors or how to take more precise measurements of a particular weather phenomenon. So today, to satisfy your demand for a hackable weather station, we’re launching our Build your own weather station guide!

Build Your Own Raspberry Pi weather station

Fun with meteorological experiments!

Our guide suggests the use of many of the sensors from the Oracle Weather Station kit, so you can build a station that’s as close as possible to the original. As you know, the Raspberry Pi is incredibly versatile, and we’ve made it easy to hack the design in case you want to use different sensors.

Many other tutorials for Pi-powered weather stations don’t explain how the various sensors work or how to store your data. Ours goes into more detail. It shows you how to put together a breadboard prototype, it describes how to write Python code to take readings in different ways, and it guides you through recording these readings in a database.
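
To give you a flavour of that pattern, here’s a tiny, hypothetical sketch of taking a reading and storing it with Python’s built-in sqlite3 module; the read_temperature() function is a stand-in for whichever sensor library your build uses:

import sqlite3
import time
from random import uniform

def read_temperature():
    # Placeholder: swap in a real sensor read (for example, from your BME280 code).
    return round(uniform(18.0, 22.0), 2)

conn = sqlite3.connect('weather.db')
conn.execute(
    'CREATE TABLE IF NOT EXISTS readings (taken_at TEXT, temperature_c REAL)'
)

# Take a reading every 60 seconds and record it in the database.
while True:
    conn.execute(
        'INSERT INTO readings VALUES (?, ?)',
        (time.strftime('%Y-%m-%d %H:%M:%S'), read_temperature()),
    )
    conn.commit()
    time.sleep(60)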

Build Your Own Raspberry Pi weather station on a breadboard

There’s also a section on how to make your station weatherproof. And in case you want to move past the breadboard stage, we also help you with that. The guide shows you how to solder together all the components, similar to the original Oracle Weather Station HAT.

Who should try this build

We think this is a great project to tackle at home, at a STEM club, Scout group, or CoderDojo, and we’re sure that many of you will be chomping at the bit to get started. Before you do, please note that we’ve designed the build to be as straightforward as possible, but it’s still fairly advanced both in terms of electronics and programming. You should read through the whole guide before purchasing any components.

Build Your Own Raspberry Pi weather station – components

The sensors and components we’re suggesting balance cost, accuracy, and ease of use. Depending on what you want to use your station for, you may wish to use different components. Similarly, the final soldered design in the guide may not be the most elegant, but we think it is achievable for someone with modest soldering experience and basic equipment.

You can build a functioning weather station without soldering with our guide, but the build will be more durable if you do solder it. If you’ve never tried soldering before, that’s OK: we have a Getting started with soldering resource plus video tutorial that will walk you through how it works step by step.

Prototyping HAT for Raspberry Pi weather station sensors

For those of you who are more experienced makers, there are plenty of different ways to put the final build together. We always like to hear about alternative builds, so please post your designs in the Weather Station forum.

Our plans for the guide

Our next step is publishing supplementary guides for adding extra functionality to your weather station. We’d love to hear which enhancements you would most like to see! Our current ideas under development include adding a webcam, making a tweeting weather station, adding a light/UV meter, and incorporating a lightning sensor. Let us know which of these is your favourite, or suggest your own amazing ideas in the comments!

*We do have a very small number of kits reserved for interesting projects or locations: a particularly cool experiment, a novel idea for how the Oracle Weather Station could be used, or places with specific weather phenomena. If you have such a project in mind, please send a brief outline to [email protected], and we’ll consider how we might be able to help you.

The post Build your own weather station with our new guide! appeared first on Raspberry Pi.

Flight Sim Company Threatens Reddit Mods Over “Libelous” DRM Posts

Post Syndicated from Andy original https://torrentfreak.com/flight-sim-company-threatens-reddit-mods-over-libellous-drm-posts-180604/

Earlier this year, in an effort to deal with piracy of their products, flight simulator company FlightSimLabs took drastic action by installing malware on customers’ machines.

The story began when a Reddit user reported something unusual in his download of FlightSimLabs’ A320X module. A file – test.exe – was being flagged up as a ‘Chrome Password Dump’ tool, something which rang alarm bells among flight sim fans.

As additional information was made available, the story became even more sensational. After first dodging the issue with carefully worded statements, FlightSimLabs admitted that it had installed a password dumper onto ALL users’ machines – whether they were pirates or not – in an effort to catch a particular software cracker and launch legal action.

It was an incredible story that no doubt did damage to FlightSimLabs’ reputation. But now the company is at the center of a new storm, again centered around anti-piracy measures and again focused on Reddit.

Just before the weekend, Reddit user /u/walkday reported finding something unusual in his A320X module, the same module that caused the earlier controversy.

“The latest installer of FSLabs’ A320X puts two cmdhost.exe files under ‘system32\’ and ‘SysWOW64\’ of my Windows directory. Despite the name, they don’t open a command-line window,” he reported.

“They’re a part of the authentication because, if you remove them, the A320X won’t get loaded. Does someone here know more about cmdhost.exe? Why does FSLabs give them such a deceptive name and put them in the system folders? I hate them for polluting my system folder unless, of course, it is a dll used by different applications.”

Needless to say, the news that FSLabs were putting files into system folders named to make them look like system files was not well received.

“Hiding something named to resemble Window’s “Console Window Host” process in system folders is a huge red flag,” one user wrote.

“It’s a malware tactic used to deceive users into thinking the executable is a part of the OS, thus being trusted and not deleted. Really dodgy tactic, don’t trust it and don’t trust them,” opined another.

With a disenchanted Reddit userbase simmering away in the background, FSLabs took to Facebook with a statement to quieten down the masses.

“Over the past few hours we have become aware of rumors circulating on social media about the cmdhost file installed by the A320-X and wanted to clear up any confusion or misunderstanding,” the company wrote.

“cmdhost is part of our eSellerate infrastructure – which communicates between the eSellerate server and our product activation interface. It was designed to reduce the number of product activation issues people were having after the FSX release – which have since been resolved.”

The company noted that the file had been checked by all major anti-virus companies and everything had come back clean, which does indeed appear to be the case. Nevertheless, the critical Reddit thread remained, bemoaning the actions of a company which probably should have known better than to irritate fans after February’s debacle. In response, however, FSLabs did just that once again.

In private messages to the moderators of the /r/flightsim sub-Reddit, FSLabs’ Marketing and PR Manager Simon Kelsey suggested that the mods should do something about the thread in question or face possible legal action.

“Just a gentle reminder of Reddit’s obligations as a publisher in order to ensure that any libelous content is taken down as soon as you become aware of it,” Kelsey wrote.

Noting that FSLabs welcomes “robust fair comment and opinion”, Kelsey gave the following advice.

“The ‘cmdhost.exe’ file in question is an entirely above board part of our anti-piracy protection and has been submitted to numerous anti-virus providers in order to verify that it poses no threat. Therefore, ANY suggestion that current or future products pose any threat to users is absolutely false and libelous,” he wrote, adding:

“As we have already outlined in the past, ANY suggestion that any user’s data was compromised during the events of February is entirely false and therefore libelous.”

Noting that FSLabs would “hate for lawyers to have to get involved in this”, Kelsey advised the /r/flightsim mods to ensure that no such claims were allowed to remain on the sub-Reddit.

But after not receiving the response he would’ve liked, Kelsey wrote once again to the mods. He noted that “a number of unsubstantiated and highly defamatory comments” remained online and warned that if something wasn’t done to clean them up, he would have “no option” than to pass the matter to FSLabs’ legal team.

Like the first message, this second effort also failed to have the desired effect. In fact, the moderators’ response was to post an open letter to Kelsey and FSLabs instead.

“We sincerely disagree that you ‘welcome robust fair comment and opinion’, demonstrated by the censorship on your forums and the attempted censorship on our subreddit,” the mods wrote.

“While what you do on your forum is certainly your prerogative, your rules do not extend to Reddit nor the r/flightsim subreddit. Removing content you disagree with is simply not within our purview.”

The letter, which is worth reading in full, refutes Kelsey’s claims and also suggests that critics of FSLabs may have been subjected to Reddit vote manipulation and coordinated efforts to discredit them.

What will happen next is unclear, but the matter has now been placed in the hands of Reddit’s administrators, who have agreed to deal with Kelsey and FSLabs personally.

It’s a little early to say for sure but it seems unlikely that this will end in a net positive for FSLabs, no matter what decision Reddit’s admins take.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Amazon SageMaker Updates – Tokyo Region, CloudFormation, Chainer, and GreenGrass ML

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/sagemaker-tokyo-summit-2018/

Today, at the AWS Summit in Tokyo we announced a number of updates and new features for Amazon SageMaker. Starting today, SageMaker is available in Asia Pacific (Tokyo)! SageMaker also now supports CloudFormation. A new machine learning framework, Chainer, is now available in the SageMaker Python SDK, in addition to MXNet and TensorFlow. Finally, support for running Chainer models on several devices was added to AWS Greengrass Machine Learning.

Amazon SageMaker Chainer Estimator


Chainer is a popular, flexible, and intuitive deep learning framework. Chainer networks work on a “Define-by-Run” scheme, where the network topology is defined dynamically via forward computation. This is in contrast to many other frameworks which work on a “Define-and-Run” scheme where the topology of the network is defined separately from the data. A lot of developers enjoy the Chainer scheme since it allows them to write their networks with native python constructs and tools.
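
Here’s a minimal sketch of that Define-by-Run style: the network below is ordinary Python, and its topology is simply whatever the forward pass executes:

import chainer
import chainer.functions as F
import chainer.links as L

class MLP(chainer.Chain):
    def __init__(self, n_units=100, n_out=10):
        super(MLP, self).__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, n_units)  # input size inferred on first call
            self.l2 = L.Linear(None, n_out)

    def __call__(self, x):
        # The graph is built as this code runs, so ordinary Python control flow works.
        h = F.relu(self.l1(x))
        return self.l2(h)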

Luckily, using Chainer with SageMaker is just as easy as using a TensorFlow or MXNet estimator. In fact, it might even be a bit easier, since it’s likely you can take your existing scripts and use them to train on SageMaker with very few modifications. With TensorFlow or MXNet, users have to implement a train function with a particular signature. With Chainer, your scripts can be a little more portable, since you can simply read from a few environment variables like SM_MODEL_DIR, SM_NUM_GPUS, and others. We can wrap our existing script in an if __name__ == '__main__': guard and invoke it locally or on SageMaker.


import argparse
import os

if __name__ =='__main__':

    parser = argparse.ArgumentParser()

    # hyperparameters sent by the client are passed as command-line arguments to the script.
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--batch-size', type=int, default=64)
    parser.add_argument('--learning-rate', type=float, default=0.05)

    # Data, model, and output directories
    parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])
    parser.add_argument('--test', type=str, default=os.environ['SM_CHANNEL_TEST'])

    args, _ = parser.parse_known_args()

    # ... load from args.train and args.test, train a model, write model to args.model_dir.

Then, we can run that script locally or use the SageMaker Python SDK to launch it on some GPU instances in SageMaker. The hyperparameters are passed to the script as command-line arguments, and the environment variables above are populated automatically. When we call fit, the input channels we pass are exposed to the script through the SM_CHANNEL_* environment variables.


from sagemaker.chainer.estimator import Chainer
# Create my estimator
chainer_estimator = Chainer(
    entry_point='example.py',
    train_instance_count=1,
    train_instance_type='ml.p3.2xlarge',
    hyperparameters={'epochs': 10, 'batch-size': 64}
)
# Train my estimator
chainer_estimator.fit({'train': train_input, 'test': test_input})

# Deploy my estimator to a SageMaker Endpoint and get a Predictor
predictor = chainer_estimator.deploy(
    instance_type="ml.m4.xlarge",
    initial_instance_count=1
)

Now, instead of bringing your own Docker container for training and hosting with Chainer, you can just maintain your script. You can see the full sagemaker-chainer-containers repository on GitHub. One of my favorite features of the new container is built-in ChainerMN support for easy multi-node distribution of your Chainer training jobs.
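
As a hedged sketch of the multi-node case, distribution is mostly a matter of asking for more instances; your training script would then use ChainerMN (for example, chainermn.create_communicator()) to coordinate the workers:

from sagemaker.chainer.estimator import Chainer

# Same training script as above, spread across two GPU instances.
distributed_estimator = Chainer(
    entry_point='example.py',
    train_instance_count=2,             # ChainerMN coordinates the two workers
    train_instance_type='ml.p3.2xlarge',
    hyperparameters={'epochs': 10, 'batch-size': 64}
)
distributed_estimator.fit({'train': train_input, 'test': test_input})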

There’s a lot more documentation and information available in both the README and the example notebooks.

AWS GreenGrass ML with Chainer

AWS GreenGrass ML now includes a pre-built Chainer package for all devices powered by Intel Atom, NVIDIA Jetson TX2, and Raspberry Pi. So, GreenGrass ML now provides pre-built packages for TensorFlow, Apache MXNet, and Chainer! You can train your models on SageMaker and then easily deploy them to any GreenGrass-enabled device using GreenGrass ML.

JAWS UG

I want to give a quick shout out to all of our wonderful and inspirational friends in the JAWS UG who attended the AWS Summit in Tokyo today. I’ve very much enjoyed seeing your pictures of the summit. Thanks for making Japan an amazing place for AWS developers! I can’t wait to visit again and meet with all of you.

Randall

New – Pay-per-Session Pricing for Amazon QuickSight, Another Region, and Lots More

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-pay-per-session-pricing-for-amazon-quicksight-another-region-and-lots-more/

Amazon QuickSight is a fully managed cloud business intelligence system that gives you Fast & Easy to Use Business Analytics for Big Data. QuickSight makes business analytics available to organizations of all shapes and sizes, with the ability to access data that is stored in your Amazon Redshift data warehouse, your Amazon Relational Database Service (RDS) relational databases, flat files in S3, and (via connectors) data stored in on-premises MySQL, PostgreSQL, and SQL Server databases. QuickSight scales to accommodate tens, hundreds, or thousands of users per organization.

Today we are launching a new, session-based pricing option for QuickSight, along with additional region support and other important new features. Let’s take a look at each one:

Pay-per-Session Pricing
Our customers are making great use of QuickSight and take full advantage of the power it gives them to connect to data sources, create reports, and explore visualizations.

However, not everyone in an organization needs or wants such powerful authoring capabilities. Having access to curated data in dashboards and being able to interact with the data by drilling down, filtering, or slicing-and-dicing is more than adequate for their needs. Subscribing them to a monthly or annual plan can be seen as an unwarranted expense, so a lot of such casual users end up not having access to interactive data or BI.

In order to allow customers to provide all of their users with interactive dashboards and reports, the Enterprise Edition of Amazon QuickSight now allows Reader access to dashboards on a Pay-per-Session basis. QuickSight users are now classified as Admins, Authors, or Readers, with distinct capabilities and prices:

Authors have access to the full power of QuickSight; they can establish database connections, upload new data, create ad hoc visualizations, and publish dashboards, all for $9 per month (Standard Edition) or $18 per month (Enterprise Edition).

Readers can view dashboards, slice and dice data using drill downs, filters, and on-screen controls, and download data in CSV format, all within the secure QuickSight environment. Readers pay $0.30 for 30 minutes of access, with a monthly maximum of $5 per reader (see the quick cost sketch after this list).

Admins have all authoring capabilities, and can manage users and purchase SPICE capacity in the account. The QuickSight admin now has the ability to set the desired option (Author or Reader) when they invite members of their organization to use QuickSight. They can extend Reader invites to their entire user base without incurring any up-front or monthly costs, paying only for the actual usage.
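
As a quick back-of-the-envelope sketch of the Reader pricing above (my own arithmetic, not an official calculator):

def monthly_reader_cost(sessions_per_reader, readers):
    # Each 30-minute block of Reader access costs $0.30, capped at $5.00
    # per Reader per month (Enterprise Edition, as described above).
    per_reader = min(sessions_per_reader * 0.30, 5.00)
    return round(per_reader * readers, 2)

# 200 Readers who each open a dashboard eight times in a month:
print(monthly_reader_cost(sessions_per_reader=8, readers=200))   # 480.0

# Heavy users hit the cap, so 200 Readers never cost more than $1,000 per month:
print(monthly_reader_cost(sessions_per_reader=40, readers=200))  # 1000.0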

To learn more, visit the QuickSight Pricing page.

A New Region
QuickSight is now available in the Asia Pacific (Tokyo) Region:

The UI is in English, with a localized version in the works.

Hourly Data Refresh
Enterprise Edition SPICE data sets can now be set to refresh as frequently as every hour. In the past, each data set could be refreshed up to 5 times a day. To learn more, read Refreshing Imported Data.

Access to Data in Private VPCs
This feature was launched in preview form late last year, and is now available in production form to users of the Enterprise Edition. As I noted at the time, you can use it to implement secure, private communication with data sources that do not have public connectivity, including on-premises data in Teradata or SQL Server, accessed over an AWS Direct Connect link. To learn more, read Working with AWS VPC.

Parameters with On-Screen Controls
QuickSight dashboards can now include parameters that are set using on-screen dropdown, text box, numeric slider or date picker controls. The default value for each parameter can be set based on the user name (QuickSight calls this a dynamic default). You could, for example, set an appropriate default based on each user’s office location, department, or sales territory. Here’s an example:

To learn more, read about Parameters in QuickSight.

URL Actions for Linked Dashboards
You can now connect your QuickSight dashboards to external applications by defining URL actions on visuals. The actions can include parameters, and become available in the Details menu for the visual. URL actions are defined like this:

You can use this feature to link QuickSight dashboards to third party applications (e.g. Salesforce) or to your own internal applications. Read Custom URL Actions to learn how to use this feature.

Dashboard Sharing
You can now share QuickSight dashboards across every user in an account.

Larger SPICE Tables
The per-data set limit for SPICE tables has been raised from 10 GB to 25 GB.

Upgrade to Enterprise Edition
The QuickSight administrator can now upgrade an account from Standard Edition to Enterprise Edition with a click. This enables provisioning of Readers with pay-per-session pricing, private VPC access, row-level security for dashboards and data sets, and hourly refresh of data sets. Enterprise Edition pricing applies after the upgrade.

Available Now
Everything I listed above is available now and you can start using it today!

You can try QuickSight for 60 days at no charge, and you can also attend our June 20th Webinar.

Jeff;