You can always view and manage your Amazon GuardDuty findings on the Findings page in the GuardDuty console or by using GuardDuty APIs with the AWS CLI or SDK. But there’s a quicker and easier way: you can use Amazon Alexa as a conversational interface to review your GuardDuty findings. With Alexa, you can build natural voice experiences and create a more intuitive way of interacting with GuardDuty.
In this post, I show you how to deploy a sample custom Alexa skill and use an Alexa-enabled device, such as Amazon Echo, to get information about GuardDuty findings across your AWS accounts and regions. The information provided by this sample skill gives you a broad overview of GuardDuty finding statistics, severities, and descriptions. When you hear something interesting, you can log in to the GuardDuty console or another analysis tool to investigate the findings data.
Note: Although not covered here, you can also deploy this sample skill using Alexa for Business, which you can use to make skills available to your shared devices and enrolled users without having to publish them to the Alexa skills store.
Prerequisites
To complete the steps in this post, make sure you have:
A basic understanding of Alexa Custom Skills, which is helpful for deploying the sample skill described here. If you’re not already familiar with Alexa custom skill concepts and terminology, you might want to review the following documentation resources.
An AWS account with GuardDuty enabled in one or more AWS regions.
To deploy and test this solution, you’ll complete three high-level steps:
Deploy the Lambda function by using the CloudFormation template.
Create the custom skill in the Alexa developer console.
Test the skill using an Alexa-enabled device.
Deploy the Lambda function with the CloudFormation Template
For this next step, make sure you deploy the template within the AWS account you want to monitor.
To deploy the Lambda function in the N. Virginia region (see the note below), use the provided CloudFormation template by clicking the following link: load the supplied template. In the CloudFormation console, on the Select Template page, select Next.
Note: The following AWS regions support hosting custom Alexa skills: US East (N. Virginia), Asia Pacific (Tokyo), EU (Ireland), and US West (Oregon). If you want to deploy in a region other than N. Virginia, you will first need to upload the custom skill’s Lambda deployment package (the zip file with the code) to an S3 bucket in the selected region.
After you load the template, provide the following input parameters:
Input parameter: Description
FLASHREGIONS: Comma-separated list of region IDs (no spaces) to include in the flash briefing stats. At least one region is required, and GuardDuty must be enabled in each region you declare.
MAXRESP: Maximum number of findings to return in a response.
ArtifactsBucket: S3 bucket where the Lambda deployment package resides. Leave the default for N. Virginia.
ArtifactsPrefix: Path in the S3 bucket where the Lambda deployment package resides. Leave the default for N. Virginia.
On the Specify Details page, enter the input parameters (see above), and then select Next.
On the Options page, accept the default values, and then select Next.
On the Review page, confirm the details, and then select Create. The stack will be created in approximately 2 minutes.
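The deployment package contains the skill’s actual code, but if you’re curious what the Lambda function does with those input parameters, here’s a rough Python sketch of the severity-counting logic using boto3. This is an illustration only, not the sample’s source (that’s in the GitHub repository linked at the end of this post), and the FLASHREGIONS and MAXRESP values are placeholder assumptions:

```python
import boto3

# Mirror the CloudFormation input parameters (values here are assumptions).
FLASHREGIONS = ["us-east-1", "us-west-2"]
MAXRESP = 5

def get_region_stats(region):
    """Count GuardDuty findings by severity band for one region."""
    gd = boto3.client("guardduty", region_name=region)
    counts = {"Low": 0, "Medium": 0, "High": 0}
    for detector_id in gd.list_detectors()["DetectorIds"]:
        for page in gd.get_paginator("list_findings").paginate(DetectorId=detector_id):
            if not page["FindingIds"]:
                continue
            findings = gd.get_findings(
                DetectorId=detector_id, FindingIds=page["FindingIds"]
            )["Findings"]
            for finding in findings:
                # GuardDuty severity values: low < 4.0, medium < 7.0, high >= 7.0
                sev = finding["Severity"]
                if sev < 4.0:
                    counts["Low"] += 1
                elif sev < 7.0:
                    counts["Medium"] += 1
                else:
                    counts["High"] += 1
    return counts
```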
Create the custom skill in the Alexa developer console
In the second part of this solution implementation, you will create the skill in the Amazon Developer Console.
Sign in to the Alexa area of the Amazon Developer Console, select Your Alexa Consoles in the top right, and then select Skills.
Select Create Skill.
For the name, enter Ask Amazon GuardDuty, and then select Next.
In the Choose a model to add to your skill page, select Custom, and then select Create skill.
Select the JSON Editor and paste the contents of the alexa_ask_guardduty_skill.json file into the code editor, overwriting the existing content. This file contains the intent schema, which defines the set of intents the service can accept and process.
Select Save Model, select Build Model, and then wait for the build to complete.
When the model build is complete, on the left side, select Endpoint.
In the Endpoint page, in the Service Endpoint Type section, select AWS Lambda ARN (Amazon Resource Name).
In the Default Region field, copy and paste the value from the CloudFormation Stack Outputs key named AlexaAskGDSkillArn. Leave the default values for other options, and then select Save Endpoints.
Because you’re not publishing this skill, you don’t need to complete the Launch section of the configuration. The skill will remain in the “Development” status and will only be available for Alexa devices linked to the Amazon developer account used to create the skill. Anyone with physical access to the linked Alexa-enabled device can use the custom skill. As a best practice, I recommend that you delete the Lambda trigger created by the CloudFormation template and add a new one with Skill ID verification enabled.
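If you’d rather follow that recommendation programmatically than in the console, a call along these lines re-adds the Alexa Skills Kit trigger with Skill ID verification. This is a sketch: the function name and skill ID are placeholders for your own values.

```python
import boto3

lam = boto3.client("lambda")
lam.add_permission(
    FunctionName="AlexaAskGDSkill",                     # placeholder function name
    StatementId="AlexaSkillKitTrigger",
    Action="lambda:InvokeFunction",
    Principal="alexa-appkit.amazon.com",                # the Alexa Skills Kit principal
    EventSourceToken="amzn1.ask.skill.your-skill-id",   # your skill's ID
)
```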
Test the skill using an Alexa-enabled device
Now that you’ve deployed the sample solution, the next step is to test the skill. Make sure you’re using an Alexa-enabled device linked to the Amazon developer account used to create the skill. Before testing, if there are no current GuardDuty findings available, you can generate sample findings in the console. When you generate sample findings, GuardDuty populates your current findings list with one sample finding for each supported finding type.
You can test using the following voice commands:
“Alexa, Open GuardDuty” — Opens the skill and provides a welcome response. You can also use “Alexa, Ask GuardDuty”.
“Get flash briefing” — Provides global and regional counts for low, medium, and high severity findings. The regions declared in the FLASHREGIONS parameter are included. You can also use “Ask GuardDuty to get flash briefing” to bypass the welcome message. You can learn more about GuardDuty severity levels in the documentation.
For the next set of commands, you can specify the region by using region names such as <Virginia>, <Oregon>, <Ireland>, and so on:
“Get statistics for region” — Provides regional counts for low, medium, and high severity findings.
“Get findings for region” — Returns finding information for the requested region. The number of findings returned is configured in the MAXRESP parameter.
“Get <high/medium/low> severity findings for region” — Returns finding information with the minimum severity requested as high, medium, or low. The number of findings returned is configured in the MAXRESP parameter.
“Help” — Provides information about the skill and supported utterances. Also provides current configuration for FLASHREGIONS and MAXRESP.
You can use this sample solution to get GuardDuty statistics and findings through the Alexa conversational interface. You’ll be able to identify findings that require further investigation quickly. This solution’s code is available on GitHub.
Traditionally, devices that were tied to logins tended to indicate that in some way – turn on someone’s xbox and it’ll show you their account name, run Netflix and it’ll ask which profile you want to use. The increasing prevalence of smart devices in the home changes that, in ways that may not be immediately obvious to the majority of people. You can configure a Philips Hue with wall-mounted dimmers, meaning that someone unfamiliar with the system may not recognise that it’s a smart lighting system at all. Without any actively malicious intent, you end up with a situation where the account holder is able to infer whether someone is home without that person necessarily having any idea that that’s possible. A visitor who uses an Amazon Echo is not necessarily going to know that it’s tied to somebody’s Amazon account, and even if they do they may not know that the log (and recorded audio!) of all interactions is available to the account holder. And someone grabbing an egg out of your fridge is almost certainly not going to think that your smart egg tray will trigger an immediate notification on the account owner’s phone that they need to buy new eggs.
Things get even more complicated when there’s multiple account support. Google Home supports multiple users on a single device, using voice recognition to determine which queries should be associated with which account. But the account that was used to initially configure the device remains as the fallback, with unrecognised voices ending up logged to it. If a voice is misidentified, the query may end up being logged to an unexpected account.
There’s some interesting questions about consent and expectations of privacy here. If someone sets up a smart device in their home then at some point they’ll agree to the manufacturer’s privacy policy. But if someone else makes use of the system (by pressing a lightswitch, making a spoken query or, uh, picking up an egg), have they consented? Who has the social obligation to explain to them that the information they’re producing may be stored elsewhere and visible to someone else? If I use an Echo in a hotel room, who has access to the Amazon account it’s associated with? How do you explain to a teenager that there’s a chance that when they asked their Home for contact details for an abortion clinic, it ended up in their parent’s activity log? Who’s going to be the first person divorced for claiming that they were vegan but having been the only person home when an egg was taken out of the fridge?
To be clear, I’m not arguing against the design choices involved in the implementation of these devices. In many cases it’s hard to see how the desired functionality could be implemented without this sort of issue arising. But we’re gradually shifting to a place where the data we generate is not only available to corporations who probably don’t care about us as individuals, it’s also becoming available to people who own the more private spaces we inhabit. We have social norms against bugging our houseguests, but we have no social norms that require us to explain to them that there’ll be a record of every light that they turn on or off. This feels like it’s going to end badly.
(Thanks to Nikki Everett for conversations that inspired this post)
(Disclaimer: while I work for Google, I am not involved in any of the products or teams described in this post, and my opinions are my own rather than those of my employer)
Here at Backblaze we have a lot of folks who are all about technology. With the holiday season fast approaching, you might have all of your gift buying already finished — but if not, we put together a list of things that the employees here at Backblaze are pretty excited about giving (and/or receiving) this year.
Smart Homes:
It’s no secret that having a smart home is the new hotness, and many of the items below can be used to turbocharge your home’s ascent into the future:
Raspberry Pi: The holidays are all about eating pie — well why not get a pie of a different type for the DIY fan in your life!
Wyze Cam: An inexpensive way to keep a close eye on all your favorite people…and intruders!
Snooz: Have trouble falling asleep? Try this portable white noise machine. Also great for the office!
Amazon Echo Dot: Need a cheap way to keep track of your schedule or play music? The Echo Dot is a great entry into the smart home of your dreams!
Google Wifi: These little fellows make it easy to Wifi-ify your entire home, even if it’s larger than the average shoe box here in Silicon Valley. Google Wifi acts as a mesh router and seamlessly covers your whole dwelling. Have a mansion? Buy more!
Google Home: Like the Amazon Echo Dot, this is the Google variant. It’s more expensive (similar to the Amazon Echo) but has better sound quality and is tied into the Google ecosystem.
Nest Thermostat: This is a smart thermostat. What better way to score points with the in-laws than installing one of these bad boys in their home — and then making it freezing cold randomly in the middle of winter from the comfort of your couch!
Wearables:
Homes aren’t the only things that should be smart. Your body should also get the chance to be all that it can be:
Apple AirPods: You’ve seen these all over the place, and the truth is they do a pretty good job of making sounds appear in your ears.
Bose SoundLink Wireless Headphones: If you like over-the-ear headphones, these noise canceling ones work great, are wireless and lovely. There’s no better way to ignore people this holiday season!
Garmin Fenix 5 Watch: This watch is all about fitness. If you enjoy fitness. This watch is the fitness watch for your fitness needs.
Apple Watch: The Apple Watch is a wonderful gadget that will light up any movie theater this holiday season.
Nokia Steel Health Watch: If you’re into mixing analogue and digital, this is a pretty neat little gadget.
Fossil Smart Watch: This stylish watch is a pretty neat way to dip your toe into smartwatches and activity trackers.
Pebble Time Steel Smart Watch: Some people call this the greatest smartwatch of all time. Those people might be named Yev. This watch is great at sending you notifications from your phone, and not needing to be charged every day. Bellissimo!
Random Goods:
A few of the holiday gift suggestions that we got were a bit off-kilter, but we do have a lot of interesting folks in the office. Hopefully, you might find some of these as interesting as they do:
Wireless Qi Charger: Wireless chargers are pretty great in that you don’t have to deal with dongles. There are even kits to make your electronics “wirelessly chargeable”, which is pretty great!
Self-Heating Coffee Mug: Love coffee? Hate lukewarm coffee? What if your coffee cup heated itself? Brilliant!
Yeast Stirrer: Yeast. It makes beer. And bread! Sometimes you need to stir it. What cooler way to stir your yeast than with this industrial stirrer?
Toto Washlet: This one is self explanatory. You know the old rhyme: happy butts, everyone’s happy!
Voice computing has long been a staple of science fiction, but it has only relatively recently made its way into fairly common mainstream use. Gadgets like mobile phones and “smart” home assistant devices (e.g. Amazon Echo, Google Home) have brought voice-based user interfaces to the masses. The voice processing for those gadgets relies on various proprietary services “in the cloud”, which generally leaves the free-software world out in the cold. There have been FOSS speech-recognition efforts over the years, but Mozilla’s recent announcement of the release of its voice-recognition code and voice data set should help further the goal of FOSS voice interfaces.
As more and more digital home assistants are appearing on the consumer market, it’s not uncommon to see the towering Amazon Echo or sleek Google Home when visiting friends or family. But we, the maker community, are rarely happy unless our tech stands out from the rest. So without further ado, here’s a roundup of some fantastic retrofitted home assistant projects you can recreate and give pride of place in your kitchen, on your bookshelf, or wherever else you’d like to talk to your virtual, disembodied PA.
Turned an 80s Tomy Mr Money into a little Google AIY / Raspberry Pi based assistant.
Matt ‘Circuitbeard’ Brailsford’s Tomy Mr Money Google AIY Assistant is just one of many home-brew home assistants makers have built since the release of APIs for Amazon Alexa and Google Home. Here are some more…
Teddy Ruxpin
Oh Teddy, how exciting and mysterious you were when I unwrapped you back in the mid-eighties. With your awkwardly moving lips and twitching eyelids, you were the cream of the crop of robotic toys! How was I to know that during my thirties, you would become augmented with home assistant software and suddenly instil within me a fear unlike any I’d felt before? (Save for my lifelong horror of ET…)
There are tons of virtual assistants out on the market: Siri, Ok Google, Alexa, etc. I had this crazy idea…what if I made the virtual assistant real…kinda. I decided to take an old animatronic teddy bear and hack it so that it ran Amazon Alexa.
Several makers around the world have performed surgery on Teddy to install a Raspberry Pi within his stomach and integrate him with Amazon Alexa Voice or Google’s AIY Projects Voice kit. And because these makers are talented, they’ve also managed to hijack Teddy’s wiring to make his lips move in time with his responses to your commands. Freaky…
Speaking of freaky: check out Zack’s Furlexa — an Amazon Alexa Furby that will haunt your nightmares.
Give old tech new life
Devices that were the height of technology when you purchased them may now be languishing in your attic collecting dust. With new and improved versions of gadgets and gizmos being released almost constantly, it is likely that your household harbours a spare whosit or whatsit which you can dismantle and give a new Raspberry Pi heart and purpose.
Take, for example, Martin Mander’s Google Pi intercom. By gutting and thoroughly cleaning a vintage intercom, Martin fashioned a suitable housing for the Google AIY Projects Voice kit to create a new home assistant for his house:
This is a 1986 Radio Shack Intercom that I’ve converted into a Google Home style device using a Raspberry Pi and the Google AIY (Artificial Intelligence Yourself) kit that came free with the MagPi magazine (issue 57). It uses the Google Assistant to answer questions and perform actions, using IFTTT to integrate with smart home accessories and other web services.
Not only does this build look fantastic, it’s also a great conversation starter for any visitors who had a similar device during the eighties.
…and then I’ll put that box inside of another box, and then I’ll mail that box to myself, and when it arrives…
A GIF. A harmless, little GIF…and proof of the comms team’s obsession with The Emperor’s New Groove.
You don’t have to be fancy when it comes to housing your home assistant. And often, especially if you’re working with the smaller people in your household, the results of a simple homespun approach are just as delightful.
Here are Hannah and her dad Tom, explaining how they built a home assistant together and fit it inside an old cigar box:
My 7 year old daughter and I decided to play around with the Raspberry Pi and build ourselves an Amazon Echo (Alexa). The video tells you about what we did and the links below will take you to all the sites we used to get this up and running.
And now it’s your turn! I challenge you all (and also myself) to create a home assistant using the Raspberry Pi. Whether you decide to fit Amazon Alexa inside an old shoebox or Google Home inside your sister’s Barbie, I’d love to see what you create using the free home assistant software available online.
Check out these other home assistants for Raspberry Pi, and keep an eye on our blog to see what I manage to create as part of the challenge.
There are only a few things more integrated into my day-to-day life than Alexa. I use my Echo device and the enabled Alexa Skills for turning on lights in my home, checking video from my Echo Show to see who is ringing my doorbell, keeping track of my extensive to-do list on a weekly basis, playing music, and lots more. I even have my family members enabling Alexa skills on their Echo devices for all types of activities that they now cannot seem to live without. My mother, who is in a much older generation (please don’t tell her I said that), uses her Echo and the custom Alexa skill I built for her to store her baking recipes. She also enjoys exploring skills that have the latest health and epicurean information. It’s no wonder then, that when I go to work I feel like something is missing. For example, I would love to be able to ask Alexa to read my flash briefing when I get to the office.
For those of you that would love to have Alexa as your intelligent assistant at work, I have exciting news. I am delighted to announce Alexa for Business, a new service that enables businesses and organizations to bring Alexa into the workplace at scale. Alexa for Business not only brings Alexa into your workday to boost your productivity, but also provides tools and resources for organizations to set up and manage Alexa devices at scale, enable private skills, and enroll users.
Making Workplaces Smarter with Alexa for Business
Alexa for Business brings the Alexa you know and love into the workplace to help all types of workers to be more productive and organized on both personal and shared Echo devices. In the workplace, shared devices can be placed in common areas for anyone to use, and workers can use their personal devices to connect at work and at home.
End users can use shared devices or personal devices. Here’s what they can do from each.
Shared devices
Join meetings in conference rooms: You can simply say “Alexa, start the meeting”. Alexa turns on the video conferencing equipment, dials into your conference call, and gets the meeting going.
Help around the office: Access custom skills to help with directions around the office, finding an open conference room, reporting a building equipment problem, or ordering new supplies.
Personal devices
Enable calling and messaging: Alexa helps you make phone calls hands-free, and can also send messages on your behalf.
Automatically dial into conference calls: Alexa can join any meeting with a conference call number via voice from home, work, or on the go.
Intelligent assistant: Alexa can quickly check calendars, help schedule meetings, manage to-do lists, and set reminders.
Find information: Alexa can help find information in popular business applications like Salesforce, Concur, or Splunk.
Here are some of the controls available to administrators:
Provision & Manage Shared Alexa Devices: You can provision and manage shared devices around your workplace using the Alexa for Business console. For each device you can set a location, such as a conference room designation, and assign public and private skills for the device.
Configure Conference Room Settings: Kick off your meetings with a simple “Alexa, start the meeting.” Alexa for Business allows you to configure your conference room settings so you can use Alexa to start your meetings and control your conference room equipment, or dial in directly from the Amazon Echo device in the room.
Manage Users: You can invite users in your organization to enroll their personal Alexa account with your Alexa for Business account. Once your users have enrolled, you can enable your custom private skills for them to use on any of the devices in their personal Alexa account, at work or at home.
Manage Skills: You can assign public skills and custom private skills your organization has created to your shared devices, and make private skills available to your enrolled users. You can create skills groups, which you can then assign to specific shared devices.
Build Private Skills & Use Alexa for Business APIs: Dig into the Alexa Skills Kit and build your own skills. Then you can make these available to the shared devices and enrolled users in your Alexa for Business account, all without having to publish them in the public Alexa Skills Store. Alexa for Business offers additional APIs, which you can use to add context to your skills and automate administrative tasks.
Let’s take a quick journey into Alexa for Business. I’ll first log into the AWS Console and go to the Alexa for Business service.
Once I log in to the service, I am presented with the Alexa for Business dashboard. As you can see, I have access to manage Rooms, Shared devices, Users, and Skills, as well as the ability to control conferencing, calendars, and user invitations.
First, I’ll start by setting up my Alexa devices. Alexa for Business provides a Device Setup Tool to set up multiple devices, connect them to your Wi-Fi network, and register them with your Alexa for Business account. This is quite different from the setup process for personal Alexa devices. With Alexa for Business, you can provision 25 devices at a time.
Once my devices are provisioned, I can create location profiles for the locations where I want to put these devices (such as in my conference rooms). We call these locations “Rooms” in our Alexa for Business console. I can go to the Room profiles menu and create a Room profile. A Room profile contains common settings for the Alexa device in your room, such as the wake word for the device, the address, time zone, unit of measurement, and whether I want to enable outbound calling.
The next step is to enable skills for the devices I set up. I can enable any skill from the Alexa Skills store, or use the private skills feature to enable skills I built myself and made available to my Alexa for Business account. To enable skills for my shared devices, I can go to the Skills menu option and enable skills. After I have enabled skills, I can add them to a skill group and assign the skill group to my rooms.
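If you’d rather script this setup than click through the console, the same building blocks are exposed through the Alexa for Business API. Here’s a minimal boto3 sketch of creating a room and a skill group and associating them; the room name, description, and profile ARN are hypothetical values for illustration:

```python
import boto3

a4b = boto3.client("alexaforbusiness")

room = a4b.create_room(
    RoomName="ConferenceRoom-2A",            # hypothetical room name
    Description="Second floor, building A",
    ProfileArn="arn:aws:a4b:us-east-1:123456789012:profile/example",  # assumed ARN
)

group = a4b.create_skill_group(
    SkillGroupName="ConferenceRoomSkills",
    Description="Skills every conference room device should have",
)

# Assign the skill group to the room, as described above.
a4b.associate_skill_group_with_room(
    SkillGroupArn=group["SkillGroupArn"],
    RoomArn=room["RoomArn"],
)
```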
Something I really like about Alexa for Business, is that I can use Alexa to dial into conference calls. To enable this, I go to the Conferencing menu option and select Add provider. At Amazon we use Amazon Chime, but you can choose from a list of different providers, or you can even add your own provider if you want to.
Once I’ve set this up, I can say “Alexa, join my meeting”; Alexa asks for my Amazon Chime meeting ID, after which my Echo device will automatically dial into my Amazon Chime meeting. Alexa for Business also provides an intelligent way to start any meeting quickly. We’ve all been in the situation where we walk into a meeting room and can’t find the meeting ID or conference call number. With Alexa for Business, I can link to my corporate calendar, so Alexa can figure out the meeting information for me, and automatically dial in – I don’t even need my meeting ID. Here’s how you do that:
Alexa can also control the video conferencing equipment in the room. To do this, all I need to do is select the skill for the equipment that I have, select the equipment provider, and enable it for my conference rooms. Now when I ask Alexa to join my meeting, Alexa will dial-in from the equipment in the room, and turn on the video conferencing system, without me needing to do anything else.
Let’s switch to enrolled users next.
I’ll start by setting up the User Invitation for my organization so that I can invite users to my Alexa for Business account. To allow a user to use Alexa for Business within an organization, you invite them to enroll their personal Alexa account with the service by sending a user invitation via email from the management console. If I choose, I can customize the user enrollment email to contain additional content. For example, I can add information about my organization’s Alexa skills that can be enabled after they’ve accepted the invitation and completed the enrollment process. My users must join in order to use the features of Alexa for Business, such as auto dialing into conference calls, linking their Microsoft Exchange calendars, or using private skills.
Now that I have customized my User Invitation, I will invite users to take advantage of Alexa for Business for my organization by going to the Users menu on the Dashboard and entering their email address. This will send an email with a link that can be used to join my organization. Users will join using the Amazon account that their personal Alexa devices are registered to. Let’s invite Jeff Barr to join my Alexa for Business organization.
After Jeff has enrolled in my Alexa for Business account, he can discover the private skills I’ve enabled for enrolled users, and he can access his work skills and join conference calls from any of his personal devices, including the Echo in his home office.
Summary
We’ve only scratched the surface in our brief review of the Alexa for Business console and service features. You can learn more about Alexa for Business by viewing the Alexa for Business website, watching the Alexa for Business overview video, reading the admin and API guides in the AWS documentation, or by watching the Getting Started videos within the Alexa for Business console.
This is my interpretation of the KRACK attacks paper that describes a way of decrypting encrypted WiFi traffic with an active attack.
tl;dr: Wow. Everyone needs to be afraid. (Well, worried — not panicked.) It means in practice, attackers can decrypt a lot of wifi traffic, with varying levels of difficulty depending on your precise network setup. My post last July about the DEF CON network being safe was in error.
Details
This is not a crypto bug but a protocol bug (a pretty obvious and trivial protocol bug).
When a client connects to the network, the access-point will at some point send random “key” data to use for encryption. Because this packet may be lost in transmission, it can be repeated many times.
What the hacker does is just repeatedly send this packet, potentially hours later. Each time it does so, it resets the “keystream” back to its starting conditions. The obvious patch that device vendors will make is to accept only the first such packet received and ignore all the duplicates.
At this point, the protocol bug becomes a crypto bug. We know how to break crypto when we have two keystreams from the same starting position. It’s not always reliable, but reliable enough that people need to be afraid.
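To make the “two keystreams from the same starting position” point concrete, here’s a toy Python sketch, nothing to do with WiFi specifically, showing why keystream reuse is fatal for any stream cipher:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# A stream cipher encrypts by XORing plaintext with a keystream.
keystream = os.urandom(32)   # stands in for the per-packet WPA2 keystream
p1 = b"attack at dawn".ljust(32, b".")
p2 = b"all quiet out west".ljust(32, b".")

c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# The attacker never sees the keystream, yet XORing the two ciphertexts
# cancels it out, leaking the XOR of the two plaintexts:
assert xor(c1, c2) == xor(p1, p2)
```

From there, classic techniques (guessing known protocol headers, crib dragging) recover the plaintexts themselves.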
Android, though, is the biggest danger. Rather than simply replaying the packet, the attacker can send a packet with key data of all zeroes. This allows attackers to set up a fake WiFi access-point and man-in-the-middle all traffic.
In a related case, the access-point/base-station can sometimes also be attacked, affecting the stream sent to the client.
Not only is sniffing possible, but in some limited cases, injection. This allows the traditional attack of adding bad code to the end of HTML pages in order to trick users into installing a virus.
This is an active attack, not a passive attack, so in theory, it’s detectable.
Who is vulnerable?
Everyone, pretty much.
The hacker only needs to be within range of your WiFi. Your neighbor’s teenage kid is going to be downloading and running the tool in order to eavesdrop on your packets.
The hacker doesn’t need to be logged into your network.
It affects all WPA1/WPA2: the personal version with passwords that we use at home, and the enterprise version with certificates that we use in enterprises.
It can’t defeat SSL/TLS or VPNs. Thus, if you feel your laptop is safe surfing the public WiFi at airports, then your laptop is still safe from this attack. With Android, it does allow running tools like sslstrip, which can fool many users.
Your home network is vulnerable. Many devices will be using SSL/TLS, so are fine, like your Amazon Echo, which you can continue to use without worrying about this attack. Other devices, like your Philips lightbulbs, may not be so protected.
How can I defend myself?
Patch.
More to the point, measure your current vendors by how long it takes them to patch. Throw away gear by those vendors that took a long time to patch and replace it with vendors that took a short time.
High-end access-points that contain “WIPS” (WiFi Intrusion Prevention Systems) features should be able to detect this and block vulnerable clients from connecting to the network (once the vendor upgrades the systems, of course). Even low-end access-points, like the $30 ones you get for home, can easily be updated to prevent packet sequence numbers from going back to the start (i.e. from the keystream resetting back to the start).
At some point, you’ll need to run the attack against yourself, to make sure all your devices are secure. Since you’ll be constantly allowing random phones to connect to your network, you’ll need to check their vulnerability status before connecting them. You’ll need to continue doing this for several years.
Of course, if you are using SSL/TLS for everything, then your danger is mitigated. This is yet another reason why you should be using SSL/TLS for internal communications.
Most security vendors will add things to their products/services to defend you. While valuable in some cases, it’s not a defense. The defense is patching the devices you know about, and preventing vulnerable devices from attaching to your network.
If I remember correctly, DEF CON uses Aruba. Aruba contains WIPS functionality, which means by the time DEF CON rolls around again next year, they should have the feature to deny vulnerable devices from connecting, and specifically to detect an attack in progress and prevent further communication.
However, for an attacker near an Android device using a low-powered WiFi, it’s likely they will be able to conduct man-in-the-middle without any WIPS preventing them.
For once, the real story isn’t as bad as it seems. A researcher has figured out how to install malware onto an Echo that causes it to stream audio back to a remote controller, but:
The technique requires gaining physical access to the target Echo, and it works only on devices sold before 2017. But there’s no software fix for older units, Barnes warns, and the attack can be performed without leaving any sign of hardware intrusion.
The way to implement this attack is by intercepting the Echo before it arrives at the target location. But if you can do that, there are a lot of other things you can do. So while this is a vulnerability that needs to be fixed — and seems to have inadvertently been fixed — it’s not a cause for alarm.
Plane spotting, like train spotting, is a hobby enjoyed by many a tech enthusiast. Nick Sypteras has built a voice-controlled plane identifier using a Raspberry Pi and an Amazon Echo Dot.
“Look! Up in the sky! It’s a bird! It’s a plane! No, it’s Superm… hang on … it’s definitely a plane.”
What plane is that?
There’s a great write-up on Nick’s blog describing how he went about this. In addition to the Pi and the Echo, all he needed was a radio receiver to pick up signals from individual planes. So he bought an RTL-SDR USB dongle to pick up ADS-B broadcasts.
Demonstrating an Alexa skill for identifying what planes are flying by my window. Ingredients: Raspberry Pi, Amazon Echo Dot, RTL-SDR dongle. Explanation here: https://www.nicksypteras.com/projects/teaching-alexa-to-spot-airplanes
With the help of open-source software he can convert aircraft broadcasts into JSON data, which is stored on the Pi. Included in the broadcast is each passing plane’s unique ICAO code. Using this identifier, he looks up model, operator, and registration number in a data set of possible aircraft which he downloaded and stored on the Pi as a Mongo database.
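As a rough illustration of that lookup step (this is not Nick’s actual code, and the database and field names are assumptions), a decoder like dump1090 exposes the decoded broadcasts as JSON, and matching each plane’s ICAO code against a local MongoDB collection might look like this:

```python
import json
from pymongo import MongoClient

# Hypothetical database/collection of known aircraft, keyed by ICAO code.
aircraft_db = MongoClient("localhost", 27017)["planes"]["aircraft"]

def identify(json_path="/run/dump1090/aircraft.json"):
    """Describe each plane currently in range of the receiver."""
    with open(json_path) as f:
        current = json.load(f).get("aircraft", [])
    results = []
    for plane in current:
        icao = plane.get("hex", "").upper()   # dump1090 reports the ICAO code as "hex"
        record = aircraft_db.find_one({"icao": icao})
        if record:
            results.append(f"{record['operator']} {record['model']} ({record['registration']})")
    return results
```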
Where is that plane going?
His Python script, with the help of the Beautiful Soup package, parses the FlightRadar24 website to find out the origin and destination of each plane. Nick also created a Node.js server in which all this data is stored in human-readable language to be accessed by Alexa.
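The scraping step might look something like the sketch below. The URL shape and CSS selectors are invented for illustration (FlightRadar24’s real markup differs and changes over time), which is exactly why this part of a build tends to need periodic maintenance:

```python
import requests
from bs4 import BeautifulSoup

def route_for(registration):
    """Best-effort origin/destination lookup for one aircraft registration."""
    url = f"https://www.flightradar24.com/data/aircraft/{registration}"  # assumed URL
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    origin = soup.select_one(".origin")            # hypothetical selector
    destination = soup.select_one(".destination")  # hypothetical selector
    return (origin.get_text(strip=True) if origin else "unknown",
            destination.get_text(strip=True) if destination else "unknown")
```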
Finally, it was a matter of setting up a new skill on the Alexa Skills Kit dashboard so that it would query the Pi in response to the right voice command.
Pretty neat, huh?
Plane spotting is serious business
Nick has made all his code available on GitHub, so head on over if this make has piqued your interest. He mentions that the radio receiver he uses picks up most unencrypted broadcasts, so you could adapt his build for other purposes as well.
Boost your hobby with the Pi
We’ve seen many builds by makers who have pushed their hobby to the next level with the help of the Pi, whether it’s astronomy, high-altitude ballooning, or making music. What hobby do you have that the Pi could improve? Let us know in the comments.
When I heard we were merging with CoderDojo, I was delighted. CoderDojo is a wonderful organisation with a spectacular community, and it’s going to be great to join forces with the team and work towards our common goal: making a difference to the lives of young people by making technology accessible to them.
You may remember that last year Philip and I went along to Coolest Projects, CoderDojo’s annual event at which their global community showcase their best makes. It was awesome! This year a whole bunch of us from the Raspberry Pi Foundation attended Coolest Projects with our new Irish colleagues, and as expected, the projects on show were as cool as can be.
This year’s coolest projects!
Young maker Benjamin demoed his brilliant RGB LED table tennis ball display for us, and showed off his project tutorial website codemakerbuddy.com, which he built with Python and Flask.
Next up, Aimee showed us a recipes app she’d made with the MIT App Inventor. It was a really impressive and well thought-out project.
This very successful OpenCV face detection program with hardware installed in a teddy bear was great as well:
Helen and Oly’s favourite project involved…live bees!
BEEEEEEEEEEES!
Its creator, 12-year-old Amy, said she wanted to do something to help the Earth. Her project uses various sensors to record data on the bee population in the hive. An adjacent monitor displays the data in a web interface:
Coolest robots
I enjoyed seeing lots of GPIO Zero projects out in the wild, including this robotic lawnmower made by Kevin and Zach:
Kevin and Zach’s Raspberry Pi lawnmower project with Python and GPIO Zero, shown at CoderDojo Coolest Projects 2017
Philip’s favourite make was a Pi-powered robot you can control with your mind! According to the maker, Laura, it worked really well with Philip because he has no hair.
This is extraordinary. Laura from @CoderDojo Romania has programmed a mind controlled robot using @Raspberry_Pi @coolestprojects
And here are some pictures of even more cool robots we saw:
Games, toys, activities
Oly and I were massively impressed with the work of Mogamad, Daniel, and Basheerah, who programmed a (borrowed) Amazon Echo to make a voice-controlled text-adventure game using Java and the Alexa API. They’ve inspired me to try something similar using the AIY projects kit and adventurelib!
Christopher Hill did a brilliant job with his Home Alone LEGO house. He used sensors to trigger lights and sounds to make it look like someone’s at home, like in the film. I should have taken a video – seeing it in action was great!
We really enjoyed seeing so many young people collaborating, experimenting, and taking full advantage of the opportunity to make real projects. And we loved how huge the range of technologies in use was: people employed all manner of hardware and software to bring their ideas to life.
Wow! Look at that room full of awesome young people. @coolestprojects #coolestprojects @CoderDojo
Congratulations to the Coolest Projects 2017 prize winners, and to all participants. Here are some of the teams that won in the different categories:
Take a look at the gallery of all winners over on Flickr.
The wow factor
Raspberry Pi co-founder and Foundation trustee Pete Lomas came along to the event as well. Here’s what he had to say:
It’s hard to describe the scale of the event, and photos just don’t do it justice. The first thing that hit me was the sheer excitement of the CoderDojo ninjas [the children attending Dojos]. Everyone was setting up for their time with the project judges, and their pure delight at being able to show off their creations was evident in both halls. Time and time again I saw the ninjas apply their creativity to help save the planet or make someone’s life better, and it’s truly exciting that we are going to help that continue and expand.
Even after 8 hours, enthusiasm wasn’t flagging – the awards ceremony was just brilliant, with ninjas high-fiving the winners on the way to the stage. This speaks volumes about the ethos and vision of the CoderDojo founders, where everyone is a winner just by being part of a community of worldwide friends. It was a brilliant introduction, and if this weekend was anything to go by, our merger certainly is a marriage made in Heaven.
Join this awesome community!
If all this inspires you as much as it did us, consider looking for a CoderDojo near you – and sign up as a volunteer! There’s plenty of time for young people to build up skills and start working on a project for next year’s event. Check out coolestprojects.com for more information.
In case you missed it: in yesterday’s post, we released our Raspberry Jam Guidebook, a new Jam branding pack and some more resources to help people set up their own Raspberry Pi community events. Today I’m sharing some insights from Jams I’ve attended recently.
Preston Raspberry Jam
The Preston Jam is one of the most long-established Jams, and it recently ran its 58th event. It has achieved this by running like clockwork: on the first Monday evening of every month, without fail, the Jam takes place. A few months ago I decided to drop in to surprise the organiser, Alan O’Donohoe. The Jam is held at the Media Innovation Studio at the University of Central Lancashire. The format is quite informal, and it’s very welcoming to newcomers. The first half of the event allows people to mingle, and beginners can get support from more seasoned makers. I noticed a number of parents who’d brought their children along to find out more about the Pi and what can be done with it. It’s a great way to find out for real what people use their Pis for, and to get pointers on how to set up and where to start.
About halfway through the evening, the organisers gather everyone round to watch a few short presentations. At the Jam I attended, most of these talks were from children, which was fantastic to see: Josh gave a demo in which he connected his Raspberry Pi to an Amazon Echo using the Alexa API, Cerys talked about her Jam in Staffordshire, and Elise told everyone about the workshops she ran at MozFest. All their talks were really well presented. The Preston Jam has done very well to keep going for so long and so consistently, and to provide such great opportunities and support for young people like Josh, Cerys and Elise to develop their digital making abilities (and presentation skills). Their next event is on Monday 1 May.
Manchester Raspberry Jam and CoderDojo
I set up the Manchester Jam back in 2012, around the same time that the Preston one started. Back then, you could only buy one Pi at a time, and only a handful of people in the area owned one. We ran a fairly small event at the local tech community space, MadLab, adopting the format of similar events I’d been to, which was very hands-on and project-based – people brought along their Pis and worked on their own builds. I ran the Jam for a year before moving to Cambridge to work for the Foundation, and I asked one of the regular attendees, Jack, if he’d run it in future. I hadn’t been back until last month, when Clare and I decided to visit.
The Jam is now held at The Shed, a digital innovation space at Manchester Metropolitan University, thanks to Darren Dancey, a computer science lecturer who claims he taught me everything I know (this claim is yet to be peer-reviewed). Jack, Darren, and Raspberry Pi Foundation co-founder and Trustee Pete Lomas put on an excellent event. They have a room for workshops, and a space for people to work on their own projects. It was wonderful to see some of the attendees from the early days still going along every month, as well as lots of new faces. Some of Darren’s students ran a Minecraft Pi workshop for beginners, and I ran one using traffic lights with GPIO Zero and guizero.
The next day, we went along to Manchester CoderDojo, a monthly event for young people learning to code and make things. The Dojo is held at The Sharp Project, and thanks to the broad range of skills of the volunteers, they provide a range of different activities: Raspberry Pi, Minecraft, LittleBits, Code Club Scratch projects, video editing, game making and lots more.
Manchester CoderDojo’s next event is on Sunday 14 May. Be sure to keep an eye on mcrraspjam.org.uk for the next Jam date!
CamJam and Pi Wars
The Cambridge Raspberry Jam is a big event that runs two or three times a year, with quite a different format to the smaller monthly Jams. They have a lecture theatre for talks, a space for workshops, lots of show-and-tell, and even a collection of retailers selling Pis and accessories. It’s a very social event, and always great fun to attend.
The organisers, Mike and Tim, who wrote the foreword for the Guidebook, also run Pi Wars: the annual Raspberry Pi robotics competition. Clare and I went along to this year’s event, where we got to see teams from all over the country (and even one from New Mexico, brought by one of our Certified Educators from Picademy USA, Kerry Bruce) take part in a whole host of robotic challenges. A few of the teams I spoke to have been working on their robots at their local Jams throughout the year. If you’re interested in taking part next year, you can get a team together now and start to make a plan for your 2018 robot! Keep an eye on camjam.me and piwars.org for announcements.
Ely Cathedral has surprisingly good straight line speed for a cathedral. Great job Ely Makers! #PiWars
Raspberry Jam @ Pi Towers
As well as working on supporting other Jams, I’ve also been running my own for the last few months. Held at our own offices in Cambridge, Raspberry Jam @ Pi Towers is a monthly event for people of all ages. We run workshops, show-and-tell and other practical activities. If you’re in the area, our next event is on Saturday 13 May.
In 2013 and 2014, Alan O’Donohoe organised the Raspberry Jamboree, which took place in Manchester to mark the first and second Raspberry Pi birthdays – and it’s coming back next month, this time organised by Claire Dodd Wicher and Les Pounder. It’s primarily an unconference, so the talks are given by the attendees and arranged on the day, which is a great way to allow anyone to participate. There will also be workshops and practical sessions, so don’t miss out! Unless, like me, you’re going to the new Norwich Jam instead…
Start a Jam near you
If there’s no Jam where you live, you can start your own! Download a copy of the brand new Raspberry Jam Guidebook for tips on how to get started. It’s not as hard as you’d think! And we’re on hand if you need any help.
Visiting Jams and hearing from Jam organisers are great ways for us to find out how we can best support our wonderful community. If you run a Jam and you’d like to tell us about what you do, or share your success stories, please don’t hesitate to get in touch. Email me at [email protected], and we’ll try to feature your stories on the blog in future.
If you have been checking out the launches and announcements from the AWS 2017 San Francisco Summit, you may be aware that the Amazon Lex service is now Generally Available, and you can use the service today. Amazon Lex is a fully managed AI service that enables developers to build conversational interfaces into any application using voice and text. Lex uses the same deep learning technologies as Amazon Alexa-powered devices like Amazon Echo. With the release of Amazon Lex, developers can build highly engaging, lifelike user experiences and natural language interactions within their own applications. Amazon Lex supports Slack, Facebook Messenger, and Twilio SMS, enabling you to easily publish your voice or text chatbots to these popular chat services. There is no better time to try out the Amazon Lex service to add the gift of gab to your applications, and now you have a great reason to get started.
May I have a Drumroll please?
I am thrilled to announce the AWS Chatbot Challenge! The AWS Chatbot Challenge is your opportunity to build a unique chatbot that helps solve a problem or adds value for prospective users. The AWS Chatbot Challenge is brought to you by Amazon Web Services in partnership with Slack.
The Challenge
Your mission, if you choose to accept it, is to build a conversational, natural language chatbot using Amazon Lex and leverage Lex’s integration with AWS Lambda to execute logic or data processing on the backend. Your submission can be a new or existing bot; however, if your bot is an existing one, it must have been updated to use Amazon Lex and AWS Lambda within the challenge submission period.
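To give a feel for the Lex-plus-Lambda wiring the challenge calls for, here’s a minimal Python fulfillment handler in the Lex (v1) Lambda event format; the intent and slot names are made up for illustration:

```python
def lambda_handler(event, context):
    intent = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"]

    if intent == "OrderCoffee":                     # hypothetical intent
        size = slots.get("CoffeeSize") or "medium"  # hypothetical slot
        message = f"Okay, one {size} coffee coming up."
    else:
        message = "Sorry, I can't help with that yet."

    # Lex expects a dialogAction that closes out the conversation turn.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }
```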
You are only limited by your own imagination when building your solution. Therefore, I will share some recommendations to help you get your creative juices flowing when creating or deploying your bot. Some suggestions that can help you make your chatbot more distinctive are:
Deploy your bot to Slack, Facebook Messenger, or Twilio SMS
Take advantage of other AWS services when building your bot solution.
Incorporate text-to-speech capabilities using a service like Amazon Polly
Utilize other third-party APIs, SDKs, and services
Leverage Amazon Lex pre-built enterprise connectors and add services like Salesforce, HubSpot, Marketo, Microsoft Dynamics, Zendesk, and QuickBooks as data sources.
There are cost effective ways to build your bot using AWS Lambda. Lambda includes a free tier of one million requests and 400,000 GB-seconds of compute time per month. This free, per month usage, is for all customers and does not expire at the end of the 12 month Free Tier Term. Furthermore, new Amazon Lex customers can process up to 10,000 text requests and 5,000 speech requests per month free during the first year. You can find details here.
Remember, the AWS Free Tier includes services with a free tier available for 12 months following your AWS sign-up date, as well as additional service offers that do not automatically expire at the end of your 12 month term. You can review the details about the AWS Free Tier and related services by going to the AWS Free Tier Details page.
Can We Talk – How It Works
The AWS Chatbot Challenge is open to individuals, and teams of individuals, who have reached the age of majority in their eligible area of residence at the time of competition entry. Organizations that employ 50 or fewer people are also eligible to compete, as long as, at the time of entry, they are duly organized or incorporated and validly exist in an eligible area. Large organizations (employing more than 50) in eligible areas can participate but will only be eligible for a non-cash recognition prize.
Chatbot Submissions are judged using the following criteria:
Customer Value: The problem or pain point the bot solves and the extent to which it adds value for users
Bot Quality: The unique way the bot solves users’ problems, and the originality, creativity, and differentiation of the bot solution
Bot Implementation: Determination of how well the bot was built and executed by the developer. Also, consideration of bot functionality, such as whether the bot functions as intended and recognizes and responds to the most common phrases asked of it
Prizes
The AWS Chatbot Challenge is awarding prizes for your hard work!
First Prize
$5,000 USD
$2,500 AWS Credits
Two (2) tickets to AWS re:Invent
30 minute virtual meeting with the Amazon Lex team
Winning submission featured on the AWS AI blog
Cool swag
Second Prize
$3,000 USD
$1,500 AWS Credits
One (1) ticket to AWS re:Invent
30 minute virtual meeting with the Amazon Lex team
Winning submission featured on the AWS AI blog
Cool swag
Third Prize
$2,000 USD
$1,000 AWS Credits
30 minute virtual meeting with the Amazon Lex team
Winning submission featured on the AWS AI blog
Cool swag
Challenge Timeline
Submissions Start: April 19, 2017 at 12:00pm PDT
Submissions End: July 18, 2017 at 5:00pm PDT
Winners Announced: August 11, 2017 at 9:00am PDT
Up to the Challenge – Get Started
Are you ready to get started on your chatbot and dive into the challenge? Here is how to get started:
Visit the Resources page for links to documentation and resources.
Shoot your demo video that demonstrates your bot in action. Prepare a written summary of your bot and what it does.
Provide a way to access your bot for judging and testing by including a link to your GitHub repo hosting the bot code and all deployment files and testing instructions needed for testing your bot.
Submit your bot on AWSChatbot2017.Devpost.com before July 18, 2017 at 5 pm ET and share access to your bot, its GitHub repo, and its deployment files.
Summary
With Amazon Lex you can build conversation into web and mobile applications, as well as use it to build chatbots that control IoT devices, provide customer support, give transaction updates, or perform operations for DevOps workloads (ChatOps). Amazon Lex provides built-in integration with AWS Lambda, AWS Mobile Hub, and Amazon CloudWatch, and allows for easy integration with other AWS services, so you can use the AWS platform to build security, monitoring, user authentication, business logic, and storage into your chatbot or application. You can make additional enhancements to your voice or text chatbot by taking advantage of Amazon Lex’s support for chat services like Slack, Facebook Messenger, and Twilio SMS.
Dive into building chatbots and conversational interfaces with Amazon Lex and AWS Lambda with the AWS Chatbot Challenge for a chance to win some cool prizes. Some recent resources and online tech talks about creating bots with Amazon Lex and AWS Lambda that may help you in your bot building journey are:
April is Autism Awareness month and about 1 in 68 children in the U.S. have been identified with autism spectrum disorder (ASD) (CDC 2014). In this post from Troy Larson, a Sr. Devops Cloud Architect here at AWS, you get an introduction to a project he has been working on to help his son Calvin.
I have been asked how the minds at AWS come up with so many different ideas. Sometimes they come from a deeply personal place, where someone sees a way to help others. Pollexy is an amazing example of just that. Read about Pollexy and then watch the video here.
-Ana
Background
As the computer-programming parent of a 16-year-old non-verbal teenage boy with autism, I have been constantly searching over the years for ways to use technology to make our lives together safer, happier and more comfortable. At the core of this challenge is the most basic of all human interaction—communication. While Calvin is able to respond to verbal instruction, he is not able to speak responsively. In his entire life, we’ve never had a conversation. He is able to be left alone in his room to play, but almost every task or set of tasks requires a human to verbally prompt him along the way. With other children and responsibilities in the home, the intensity of supervision required can, at times, negatively impact the home dynamic.
Genesis
When I saw the announcement of Amazon Polly and Amazon Lex at re:Invent last year, I immediately started churning on how we could leverage these technologies to assist Calvin. He responds well to human verbal prompts, but would he understand a digital voice? So one Saturday, I set up a Raspberry Pi in his room, closed his door, and crouched around the corner with other family members so Calvin couldn’t see us. I connected to the Raspberry Pi and instructed Polly to speak in Joanna’s familiar pacific tone, “Calvin, it’s time to take a potty break. Go out of your bedroom and go to the bathroom.” In a few seconds, we heard his doorknob turn and I poked my head out of my hiding place. Calvin passed by, looking at me quizzically, then went into the bathroom as Joanna had instructed. We all looked at each other in amazement—he had listened and responded perfectly to the completely invisible voice of someone he’d never heard before. After discussing some ideas around this with co-workers, a colleague suggested I enter the IoT and AI Science Fair at our annual AWS Sales Kick-Off meeting. Less than two months after the Polly and Lex announcement and 3,500 lines of code later, Pollexy—along with Calvin—debuted at the Science Fair.
Overview
Pollexy (“Polly” + “Lex”) is a Raspberry Pi and mobile-based special needs verbal assistant that lets caretakers schedule audio task prompts and messages, both on a recurring schedule and on demand. Caretakers can schedule regular medicine reminder messages or hourly bathroom break messages, for example, and at the same time use their Amazon Echo and mobile device to request that a specific message be played immediately. Caretakers can even set it up so that the person needs to confirm that they’ve heard the message. For example, my son won’t pay attention to Pollexy unless Pollexy first asks him to “Push the blue button.” Pollexy will wait until he has pushed the button and then speak the actual message. Other people may be able to respond verbally using Lex, or not require a confirmation at all. Pollexy can be tailored to what works best.
And then most importantly—and most challenging—in a large house, how do we make sure the person is in the room where we play the message? What if we have a special needs adult living in an in-law suite? Are they in the living room or the kitchen? And what about multiple people? What if we have multiple people in different areas of the house, each of whom has a message? Let’s explore the basic elements and tie the pieces together.
Basic Elements of Pollexy
In the spirit of Amazon’s Leadership Principle “Invent and Simplify,” we want to minimize the complexity of the Pollexy architecture. We can break Pollexy down into three types of objects and three components, all of which work together in a way that’s easily explainable.
Object #1: Person
Pollexy can support any number of people. A person is a uniquely identifiable name. We can set basic preferences such as “requires confirmation” and, most importantly, we can define a location schedule: an Outlook-like schedule that specifies where in the house someone should be at a given time.
Object #2: Location
A location is simply a uniquely identifiable place where a device is physically sitting. Based on the user’s location schedule, Pollexy will know which device to contact first, second, third, and so on. We can also “mute” devices if needed (during naptime, for example).
Object #3: Message
Obviously, this is the actual message we want to play. Attached to each message is a person and, unless it’s a one-time message, a recurring schedule. We don’t store location with the message, because Pollexy figures out the person’s location when the message is ready to be delivered.
Component #1: Scheduler
Every message needs to be scheduled. This is a command-line tool where you say, in effect, “Tell Calvin that you need to brush your teeth every night at 8 p.m.” The message is then stored in DynamoDB, waiting to be picked up by the queueing Lambda function.
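The post doesn’t include the scheduler’s code, so here is a minimal sketch of what it might write to DynamoDB, using Python and boto3. The table name, attribute names, and cron-style schedule format are all assumptions for illustration:

```python
import boto3
from datetime import datetime, timezone

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("PollexyMessages")  # hypothetical table name

def schedule_message(person, message, cron_expression):
    """Store a scheduled message for the queueing Lambda function to pick up."""
    table.put_item(
        Item={
            "person": person,
            "message": message,
            "schedule": cron_expression,    # e.g. "0 20 * * *" for 8 p.m. nightly
            "requires_confirmation": True,  # a per-person preference in this sketch
            "created_at": datetime.now(timezone.utc).isoformat(),
        }
    )

schedule_message("Calvin", "You need to brush your teeth.", "0 20 * * *")
```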
Component #2: Queueing Engine
Every minute, a Lambda function runs and checks the scheduler to see whether any messages are ready to be delivered. For each message that is ready, it looks up the person’s location schedule, figures out where they are, and then pushes the message into an SQS queue for that location.
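As a rough sketch only (not Pollexy’s actual code), that queueing function might look like the following; the table and queue naming carry over from the previous sketch, and is_due and current_location are hypothetical stand-ins for the real schedule logic:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
sqs = boto3.client("sqs")
messages = dynamodb.Table("PollexyMessages")  # hypothetical table from the sketch above

def is_due(msg):
    # Placeholder: a real implementation would evaluate msg["schedule"]
    # against the current time.
    return True

def current_location(person):
    # Placeholder: a real implementation would read the person's location
    # schedule and return where they are expected to be right now.
    return "bedroom"

def handler(event, context):
    """Invoked every minute, for example by a CloudWatch Events schedule."""
    for msg in messages.scan()["Items"]:
        if not is_due(msg):
            continue
        location = current_location(msg["person"])
        queue_url = sqs.get_queue_url(QueueName=f"pollexy-{location}")["QueueUrl"]
        sqs.send_message(QueueUrl=queue_url, MessageBody=msg["message"])
```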
Component #3: Speaker Engine
Every minute on the Raspberry Pi device, the speaker engine spins up and checks the SQS queue for its location. If there are messages, the speaker engine looks at the user’s preferences and initiates communication to convey the message. If the person doesn’t respond, the speaker engine checks whether the person has a secondary location in their schedule and drops the message into the SQS queue for that location. In the end, a message is either delivered or eventually times out (if someone is out of the house for the day).
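Again purely as a sketch, the polling loop on the Pi might pair SQS with Polly roughly like this; the queue name, voice, file path, and playback approach are assumptions, and confirmation handling (the blue button) is omitted:

```python
import boto3

sqs = boto3.client("sqs")
polly = boto3.client("polly")
QUEUE_NAME = "pollexy-bedroom"  # hypothetical: this device's location queue

def check_and_speak():
    queue_url = sqs.get_queue_url(QueueName=QUEUE_NAME)["QueueUrl"]
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    for m in resp.get("Messages", []):
        # Synthesize the message text with the same voice Calvin responded to.
        audio = polly.synthesize_speech(
            Text=m["Body"], OutputFormat="mp3", VoiceId="Joanna"
        )
        with open("/tmp/message.mp3", "wb") as f:
            f.write(audio["AudioStream"].read())
        # Play /tmp/message.mp3 through the Pi's speaker (e.g. with mpg123),
        # then delete the message so it isn't replayed.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=m["ReceiptHandle"])
```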
Respect and Freedom are the Keys
We often take our personal privacy and respect for granted, so imagine, even for a special needs person, the lack of privacy and freedom that comes with having a person constantly in your presence. This can be especially acute for people on the autism spectrum, for whom an invasion of personal space can escalate into anger and frustration. Pollexy becomes their own personal, gentle, and never-flustered friend who coaches them along the way, giving them confidence, respect, and the sense of privacy and freedom we all want to enjoy.
Want to change the world with Big Data and Analytics? Come join us on the Amazon EMR team in Amazon Web Services!
Meet the Amazon EMR team this Friday, April 7th, from 5:00–7:30 PM at Michael’s at Shoreline in Mountain View. We’ll feature short tech talks by EMR leadership on the past, present, and future of the Apache Hadoop and Spark ecosystem and EMR. You’ll also meet EMR engineers who are eager to discuss the challenges and opportunities involved in building the EMR service and running the latest open-source big data frameworks like Spark and Presto at massive scale. We’ll give out several door prizes, including an Amazon Echo, an Echo Dot, a Kindle, and a Fire TV Stick!
Amazon EMR is a web service that enables customers to run massive clusters with distributed big data frameworks like Apache Hadoop, Hive, Tez, Flink, Spark, Presto, HBase, and more, with the ability to effortlessly scale up and down as needed. We run a large number of customer clusters, enabling processing on vast datasets.
We are developing innovative new features including our next-generation cluster management system, improvements for real-time processing of big data, and ways to enable customers to more easily interact with their data. We’re looking for top engineers to build them from the ground up.
We have recently delivered a number of features across these areas.
With Zelda: Breath of the Wild out on the Nintendo Switch, I made a home automation system based on the Zelda series, using the ocarina from The Legend of Zelda: Ocarina of Time.
Listen!
Released in 1998, The Legend of Zelda: Ocarina of Time is, let’s be honest, the best game ever, and it remains an iconic entry in the retro gaming history books.
Very few games have stuck with me in the same way Ocarina has, and I think it’s fair to say that, with the continued success of the Zelda franchise, I’m not the only one who has a special place in their heart for Link, particularly in this musical outing.
Allen, or Sufficiently Advanced, as his YouTube subscribers know him, has used a Raspberry Pi to detect and recognise key tunes from the game, with each tune being linked (geddit?) to a specific task. By playing Zelda’s Lullaby (E, G, D, E, G, D), for instance, Allen can lock or unlock the door to his house. Other tunes have different functions: Epona’s Song unlocks the car (for Ocarina noobs, Epona is Link’s horse sidekick throughout most of the game), and Minuet of Forest waters the plants.
So how does it work?
It’s a fairly simple setup based around note recognition. When certain notes are played in a specific sequence, the Raspberry Pi detects the tune via a microphone within the Amazon Echo-inspired body of the build, and triggers the action related to the specific task. The small speaker you can see in the video plays a confirmation tune, again taken from the video game, to show that the task has been completed.
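As a rough illustration of the idea (not Allen’s actual code), matching a rolling buffer of detected notes against known tunes can be as simple as a dictionary lookup. Zelda’s Lullaby uses the note sequence quoted above; the Epona’s Song notes here are assumed:

```python
# Hypothetical sketch: map detected note sequences to home automation actions.
# Real note detection would do pitch estimation on microphone audio; here we
# assume some upstream code appends note names to recent_notes as it hears them.

TUNES = {
    ("E", "G", "D", "E", "G", "D"): "toggle_door_lock",  # Zelda's Lullaby
    ("D", "B", "A", "D", "B", "A"): "unlock_car",        # Epona's Song (notes assumed)
}

def match_tune(recent_notes):
    """Return the action whose tune matches the most recently heard notes."""
    for tune, action in TUNES.items():
        if tuple(recent_notes[-len(tune):]) == tune:
            return action
    return None

print(match_tune(["C", "E", "G", "D", "E", "G", "D"]))  # -> "toggle_door_lock"
```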
As for the tasks themselves, Allen has built a small controller for each action, whether it be a piece of wood that presses down on his car key, a servomotor that adjusts the ambient temperature, or a water pump to hydrate his plants. Each controller has its own small ESP8266 wireless connectivity module that links back to the wireless-enabled Raspberry Pi, cutting down on the need for a ton of wires about the home.
And yes, before anybody says it, we’re sure that Allen is aware that using tone recognition is not the safest means of locking and unlocking your home. This is just for fun.
Do-it-yourself home automation
While we don’t necessarily expect everyone to brush up on their ocarina skills and build their own Zelda-inspired home automation system, the idea of using something other than voice or text commands to control home appliances is a fun one.
You could use facial recognition at the door to start the kettle boiling, or the detection of certain gases to, ahem, spray an air freshener.
We love to see what you all get up to with the Raspberry Pi. Have you built your own home automation system controlled by something other than your voice? Share it in the comments below.
As we finish up the month of February, Tina Barr is back with some awesome startups.
-Ana
This month we are bringing you five innovative hot startups:
GumGum – Creating and popularizing the field of in-image advertising.
Jiobit – Smart tags to help parents keep track of kids.
Parsec – Offers flexibility in hardware and location for PC gamers.
Peloton – Revolutionizing indoor cycling and fitness classes at home.
Tendril – Reducing energy consumption for homeowners.
If you missed any of our January startups, make sure to check them out here.
GumGum (Santa Monica, CA) GumGum is best known for inventing and popularizing the field of in-image advertising. Founded in 2008 by Ophir Tanz, the company is on a mission to unlock the value held within the vast content produced daily via social media, editorials, and broadcasts in a variety of industries. GumGum powers campaigns across more than 2,000 premium publishers, which are seen by over 400 million users.
In-image advertising was pioneered by GumGum and has given companies a platform to deliver highly visible ads to a place where the consumer’s attention is already focused. Using image recognition technology, GumGum delivers targeted placements as contextual overlays on related pictures, as banners that fit on all screen sizes, or as In-Feed placements that blend seamlessly into the surrounding content. Using Visual Intelligence, GumGum can scour social media and broadcast TV for all images and videos related to a brand, allowing companies to gain a stronger understanding of their audience and how they are relating to that brand on social media.
GumGum relies on AWS for its image processing and ad serving operations. Using AWS infrastructure, GumGum currently processes 13 million requests per minute across the globe and generates 30 TB of new data every day. The company uses a suite of services including, but not limited to, Amazon EC2, Amazon S3, Amazon Kinesis, Amazon EMR, AWS Data Pipeline, and Amazon SNS. AWS edge locations allow GumGum to serve its customers in the US, Europe, Australia, and Japan, and the company plans to expand its infrastructure across the APAC region in the future.
Jiobit (Chicago, IL) Jiobit was inspired by a real event that took place in a crowded Chicago park. A couple of summers ago, John Renaldi experienced every parent’s worst nightmare – he lost track of his then 6-year-old son in a public park for almost 30 minutes. John knew he wasn’t the only parent with this problem. After months of research, he determined that over 50% of parents have had a similar experience and an even greater percentage are actively looking for a way to prevent it.
Jiobit is the world’s smallest and longest-lasting smart tag that helps parents keep track of their kids in any location, indoors and outdoors. The small device is kid-proof: lightweight, durable, and waterproof. It acts as a virtual “safety harness,” using a combination of Bluetooth, Wi-Fi, multiple cellular networks, GPS, and sensors to provide accurate locations in real time. Jiobit can automatically learn routes and locations, and will send parents an alert if their child does not arrive at their destination on time. The talented team of experienced engineers, designers, marketers, and parents has over 150 patents and has shipped dozens of hardware and software products worldwide.
The Jiobit team is utilizing a number of AWS services in the development of their product. Security is critical to the overall product experience, and they are over-engineering security on both the hardware and software side with the help of AWS. Jiobit is also working towards being the first child monitoring device that will have implemented an Alexa Skill via the Amazon Echo device (see here for a demo!). The devices use AWS IoT to send and receive data from the Jio Cloud over the MQTT protocol. Once data is received, they use AWS Lambda to parse the received data and take appropriate actions, including storing relevant data using Amazon DynamoDB, and sending location data to Amazon Machine Learning processing jobs.
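Jiobit’s actual payload schema and table design aren’t public, but the general pattern described here, an AWS IoT rule invoking a Lambda function that parses a device message and stores it in DynamoDB, might look roughly like this sketch (all names and the payload shape are assumptions):

```python
import boto3

dynamodb = boto3.resource("dynamodb")
locations = dynamodb.Table("DeviceLocations")  # hypothetical table name

def handler(event, context):
    """Hypothetical Lambda function invoked by an AWS IoT rule on each MQTT message."""
    locations.put_item(
        Item={
            "device_id": event["device_id"],
            "timestamp": event["timestamp"],
            # Coordinates stored as strings; boto3's resource API doesn't
            # accept Python floats (it expects Decimal for numbers).
            "lat": str(event["lat"]),
            "lon": str(event["lon"]),
        }
    )
```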
Parsec (New York, NY) Parsec operates under the notion that everyone should have access to the best computing in the world because access to technology creates endless opportunities. Founded in 2016 by Benjy Boxer and Chris Dickson, Parsec aims to eliminate the burden of hardware upgrades that users frequently experience by building the technology to make a computer in the cloud available anywhere, at any time. Today, they are using their technology to enable greater flexibility in the hardware and location that PC gamers choose to play their favorite games on. Check out this interview with Benjy and our Startups team for a look at how Parsec works.
Parsec built their first product to improve the gaming experience; gamers no longer have to purchase consoles or expensive PCs to access the entertainment they love. Their low latency video streaming and networking technologies allow gamers to remotely access their gaming rig and play on any Windows, Mac, Android, or Raspberry Pi device. With the global reach of AWS, Parsec is able to deliver cloud gaming to the median user in the US and Europe with less than 30 milliseconds of network latency.
Parsec users currently have two options available to start gaming with cloud resources. They can either set up their own machines with the Parsec AMI in their region or rely on Parsec to manage everything for a seamless experience. In either case, Parsec uses the g2.2xlarge EC2 instance type. Parsec is using Amazon Elastic Block Store to store games, Amazon DynamoDB for scalability, and Amazon EC2 for its web servers and various APIs. They also deal with a high volume of logs and take advantage of the Amazon Elasticsearch Service to analyze the data.
Be sure to check out Parsec’s blog to keep up with the latest news.
Peloton (New York, NY) The idea for Peloton was born in 2012 when John Foley, Founder and CEO, and his wife Jill started realizing the challenge of balancing work, raising young children, and keeping up with personal fitness. This is a common challenge people face – they want to work out, but there are a lot of obstacles that stand in their way. Peloton offers a solution that enables people to join indoor cycling and fitness classes anywhere, anytime.
Peloton has created a cutting-edge indoor bike that streams up to 14 hours of live classes daily and has over 4,000 on-demand classes. Users can access live classes from world-class instructors from the convenience of their home or gym. The bike tracks progress with in-depth ride metrics and allows people to compete in real-time with other users who have taken a specific ride. The live classes even feature top DJs that play current playlists to keep users motivated.
With an aggressive marketing campaign, which has included high-visibility TV advertising, Peloton made the decision to run its entire platform in the cloud. Most recently, they ran an ad during an NFL playoff game, and the rate of requests to their site jumped from roughly 2,000 to 32,200 per minute within 60 seconds. As they continue to grow and diversify, they are utilizing services such as Amazon S3 for thousands of hours of archived on-demand video content, Amazon Redshift for data warehousing, and Application Load Balancer for intelligent request routing.
Tendril (Denver, CO) Tendril was founded in 2004 with the goal of helping homeowners better manage and reduce their energy consumption. Today, electric and gas utilities use Tendril’s data analytics platform on more than 140 million homes to deliver a personalized energy experience for consumers around the world. Using the latest technology in decision science and analytics, Tendril can gain access to real-time, ever evolving data about energy consumers and their homes so they can improve customer acquisition, increase engagement, and orchestrate home energy experiences. In turn, Tendril helps its customers unlock the true value of energy interactions.
AWS helps Tendril run its services globally, while scaling capacity up and down as needed, and in real-time. This has been especially important in support of Tendril’s newest solution, Orchestrated Energy, a continuous demand management platform that calculates a home’s thermal mass, predicts consumer behavior, and integrates with smart thermostats and other connected home devices. This solution allows millions of consumers to create a personalized energy plan for their home based on their individual needs.
2016 is safely in our rear-view mirrors. It’s time to take a look back at the year that was and see what technology had the biggest impact on consumers and businesses alike. We also have an eye to 2017 to see what the future holds.
AI and machine learning in the cloud
Truly sentient computers and robots are still the stuff of science fiction (and the premise of one of 2016’s most promising new SF TV series, HBO’s Westworld). Neural networks are nothing new, but 2016 saw huge strides in artificial intelligence and machine learning, especially in the cloud.
Google, Amazon, Apple, IBM, Microsoft and others are developing cloud computing infrastructures designed especially for AI work. It’s this technology that’s underpinning advances in image recognition technology, pattern recognition in cybersecurity, speech recognition, natural language interpretation and other advances.
Microsoft’s newly-formed AI and Research Group is finding ways to get artificial intelligence into Microsoft products like its Bing search engine and Cortana natural language assistant. Some of these efforts, while well-meaning, still need refinement: Early in 2016 Microsoft launched Tay, an AI chatbot designed to mimic the natural language characteristics of a teenage girl and learn from interacting with Twitter users. Microsoft had to shut Tay down after Twitter users exploited vulnerabilities that caused Tay to begin spewing really inappropriate responses. But it paves the way for future efforts that blur the line between man and machine.
Finance, energy, climatology – anywhere you find big data sets you’re going to find uses for machine learning. On the consumer end it can help your grocery app guess what you might want or need based on your spending habits. Financial firms use machine learning to help predict customer credit scores by analyzing profile information. One of the most intriguing uses of machine learning is in security: Pattern recognition helps systems predict malicious intent and figure out where exploits will come from.
Meanwhile we’re still waiting for Rosie the Robot from the Jetsons. And flying cars. So if Elon Musk has any spare time in 2017, maybe he can get on that.
AR Games
Augmented Reality (AR) games have been around for a good long time – ever since smartphone makers put cameras on them, game makers have been toying with the mix of real life and games.
AR games took a giant step forward with a game released in 2016 that you couldn’t get away from, at least for a little while. We’re talking about Pokémon GO, of course. Niantic, makers of another AR game called Ingress, used the framework they built for that game to power Pokémon GO. Kids, parents, young, old, it seemed like everyone with an iPhone that could run the game caught wild Pokémon, hatched eggs by walking, and battled each other in Pokémon gyms.
For a few weeks, anyway.
Technical glitches, problems with scale and limited gameplay value ultimately hurt Pokémon GO’s longevity. Today the game only garners a fraction of the public interest it did at peak. It continues to be successful, albeit not at the stratospheric pace it first set.
Niantic was able to tie together several factors to bring such an explosive and, if you’ll pardon the overused term, disruptive game to bear. Chief among them was its previous work on Ingress, which uses the same geomap data as Pokémon GO, so Niantic had already done a huge amount of the legwork needed to get the new game up and running. Niantic cleverly used Google Maps data to form the basis of both games, relying on already-identified public landmarks and other locations tagged by Ingress players (Ingress has been around since 2011).
Then, of course, there’s the Pokémon connection: an intensely meaningful gaming property that’s been popular with generations of video game players and cartoon watchers since the 1990s. The dearth of Pokémon-branded games on smartphones meant an instant explosion of popularity upon Pokémon GO’s release.
2016 also saw the introduction of several new virtual reality (VR) headsets designed for home and mobile use. Samsung Gear VR and Google Daydream View made a splash. As these products continue to make consumer inroads, we’ll see more games push the envelope of what you can achieve with VR and AR.
Hybrid Cloud
Hybrid Cloud services combine public cloud storage (like B2 Cloud Storage) or public compute (like Amazon Web Services) with a private cloud platform. Specialized content and file management software glues it all together, making the experience seamless for the user.
Businesses get the instant access and speed they need to get work done, with the ability to fall back on on-demand cloud-based resources when scale is needed. B2’s hybrid cloud integrations include OpenIO, which helps businesses maintain data storage on-premise until it’s designated for archive and stored in the B2 cloud.
The cost of entry and usage of Hybrid Cloud services have continued to fall. For example, small and medium-sized organizations in the post production industry are finding Hybrid Cloud storage is now a viable strategy in managing the large amounts of information they use on a daily basis. This strategy is enabled by the low cost of B2 Cloud Storage that provides ready access to cloud-stored data.
There are practical deployment and scale issues that have kept Hybrid Cloud services from widespread use in the largest enterprise environments. Small to medium businesses and vertical markets like Media & Entertainment have found promising, economical opportunities to use it, which bodes well for the future.
Inexpensive 3D printers
3D printing, once a rarefied technology, has become increasingly commoditized over the past several years. That’s been due in part to the “Maker Movement”: thousands of folks all around the world who love to tinker and build. XYZprinting is out in front of makers and others with its line of inexpensive desktop da Vinci printers.
The da Vinci Mini is a tabletop model aimed at home users that starts at under $300. You can download and tweak thousands of 3D models to build toys, games, art projects, and educational items. Objects are built using spools of biodegradable, non-toxic plastics derived from corn starch, which dispense sort of like the bobbin on a sewing machine. The da Vinci Mini works with Macs and PCs and can connect via USB or Wi-Fi.
DIY Drones
Quadcopter drones have been fun tech toys for a while now, but the new trend we saw in 2016 was “do it yourself” models. The result was Flybrix, which combines lightweight drone motors with LEGO building toys. Flybrix was so successful that they blew out of inventory for the 2016 holiday season and are backlogged with orders into the new year.
Each Flybrix kit comes with the motors, LEGO building blocks, cables and gear you need to build your own quad, hex or octocopter drone (as well as a cheerful-looking LEGO pilot to command the new vessel). A downloadable app for iOS or Android lets you control your creation. A deluxe kit includes a handheld controller so you don’t have to tie up your phone.
If you already own a 3D printer like the da Vinci Mini, you’ll find plenty of model files available for download and modification so you can print your own parts, though you’ll probably need help from one of the many maker sites to work out what else you’ll need for aerial flight and control.
5D Glass Storage
Research at the University of Southampton may yield the next big leap in optical storage technology meant for long-term archival. The boffins at the Optoelectronics Research Centre have developed a new data storage technique that embeds information in glass “nanostructures” on a storage disc the size of a U.S. quarter.
A Blu-ray Disc can hold 50 GB, but one of the new 5D glass storage discs, though only the size of a U.S. quarter, can hold 360 TB: 7,200 times more. It’s like a super-stable, supercharged version of a CD. Not only is the data inscribed on much smaller structures within the glass, it is also encoded at multiple angles, hence “5D.”
An upside to this is an absence of bit rot: The glass medium is extremely stable, with a shelf life predicted in billions of years. The downside is that this is still a write-once medium, so it’s intended for long term storage.
This tech is still years away from practical use, but it took a big step forward in 2016 when the University announced the development of a practical information encoding scheme to use with it.
Smart Home Tech
Are you ready to talk to your house to tell it to do things? If you’re not already, you probably will be soon. Google’s Google Home is a $129 voice-activated speaker powered by the Google Assistant. You can use it for everything from streaming music and video to a nearby TV to reading your calendar or to do list. You can also tell it to operate other supported devices like the Nest smart thermostat and Philips Hue lights.
Amazon has its own similar wireless speaker product called the Echo, powered by Amazon’s Alexa information assistant. Amazon has differentiated its Echo offerings by making the Dot – a hockey puck-sized device that connects to a speaker you already own. So Amazon customers can begin to outfit their connected homes for less than $50.
Apple’s HomeKit isn’t a speaker like Amazon Echo or Google Home; it’s software. You use the Home app on your iOS 10-equipped iPhone or iPad to connect and configure supported devices, and you can use Siri, Apple’s own intelligent assistant, on any supported Apple device. HomeKit turns on lights, turns up the thermostat, operates switches, and more.
Smart home tech has been coming in fits and starts for a while – the Nest smart thermostat is already in its third generation, for example. But 2016 was the year we finally saw the “Internet of things” coalescing into a smart home that we can control through voice and gestures in a … well, smart way.
Welcome To The Future
It’s 2017, welcome to our brave new world. While it’s anyone’s guess what the future holds, there are at least a few tech trends that are pretty safe to bet on. They include:
Internet of Things: More smart-connected devices are coming online in the home and at work every day, and this trend will accelerate in 2017 with more and more devices requiring some form of Internet connectivity to work. Expect to see a lot more appliances, devices, and accessories that make use of the APIs promoted by Google, Amazon, and Apple to let you control everything in your life using just your voice and a smart speaker setup.
Blockchain security: Blockchain is the digital ledger technology that makes Bitcoin work. Its distribution methodology and validation system help you make certain that no one has tampered with the records, which makes it well-suited for applications beyond cryptocurrency, like making sure your smart thermostat (see above) hasn’t been hacked. Expect 2017 to be the year we see more mainstream acceptance, use, and development of blockchain technology from financial institutions, the creation of new private blockchain networks, and improved usability aimed at making blockchain easier for regular consumers to use. Blockchain-based voting is here too. Given all this movement, it wouldn’t surprise us to see government regulators take a much deeper interest in blockchain, either.
5G: Verizon is field-testing 5G on its wireless network, which it says delivers speeds 30–50 times faster than 4G LTE. In fairness, we’re still a few years away from wide-scale 5G deployment, but expect to hear a lot more about it from Verizon and other wireless players in 2017.
Your Predictions?
Enough of our bloviation. Let’s open the floor to you. What do you think were the biggest technology trends in 2016? What’s coming in 2017 that has you the most excited? Let us know in the comments!
While computers that talk are great, computers that listen and respond are even better! If you have used an Amazon Echo, you know how simple, useful, and powerful the Alexa-powered interaction model can be.
Today we are making the same deep learning technologies that power Amazon Alexa, Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU), available to you for use in your own conversational applications. You can use Amazon Lex to build chatbots and other types of web and mobile applications that support engaging, lifelike interactions. Your bots can provide information, power your application, streamline work activities, or provide a control mechanism for robots, drones, and toys.
Amazon Lex is designed to let you get going quickly. You start out by designing your conversation in the Lex Console, providing Lex with some sample phrases that are used to build a natural language model. Then you publish your Amazon Lex bot and let it process text or voice conversations with your users. Amazon Lex is a fully-managed service so you don’t need to spend time setting up, managing, or scaling any infrastructure.
Amazon Lex lets you use AWS Lambda functions to implement the business logic for your bot, including connections to your enterprise applications and data. In conjunction with the newly announced SaaS integration for AWS Mobile Hub, you can build enterprise productivity bots that provide conversational interfaces to the accounts, contacts, leads, and other enterprise data stored in the SaaS applications that you are already using.
Putting it all together, you now have access to all of the moving parts needed to build fully integrated solutions that start at the mobile app and go all the way to the fulfillment logic.
Amazon Lex Concepts Let’s take a quick look at the principal Amazon Lex concepts:
Bot – A bot contains all of the components of a conversation.
Intent – An intent represents a goal that the bot’s user wants to achieve (buying a plane ticket, scheduling an appointment, or getting a weather forecast, and so forth).
Utterance – An utterance is a spoken or typed phrase that invokes an intent. “I want to book a hotel” or “I want to order flowers” are two simple utterances.
Slots – Each slot is a piece of data that the user must supply in order to fulfill the intent. Slots are typed; a travel bot could have slots for cities, states or airports.
Prompt – A prompt is a question that asks the user to supply some data (for a slot) that is needed to fulfill an intent.
Fulfillment – Fulfillment is the business logic that carries out the user’s intent. Lex supports the use of Lambda functions for fulfillment, as in the sketch below.
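To make fulfillment concrete, here is a minimal sketch of a Lambda fulfillment function using the Lex event and response shapes from the launch-era API; the intent and slot names are hypothetical:

```python
def handler(event, context):
    """Hypothetical fulfillment for a flower-ordering intent."""
    slots = event["currentIntent"]["slots"]  # Lex passes slot values here
    flower = slots.get("FlowerType") or "flowers"
    # Tell Lex the intent is fulfilled and what to say (or display) to the user.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": f"Thanks, your order for {flower} has been placed.",
            },
        }
    }
```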
Bots, intents, and slots are versioned so that you can draw clear lines between development, testing, staging, and production in a multi-developer environment. You can create multiple aliases for each of your bots and map them to specific versions of the components.
Building a Bot You can define your Lex bot and set up all of these components from the Lex Console. You can start with one of the samples or you can create a custom bot.
You define your utterances and their slots on the next page.
And customize your bot using the settings.
You can test your bot interactively and refine it until it works as desired.
Then you can generate a callback URL for use with Facebook (and others on the way).
I’ll share more details as soon as the re:Invent rush is over and I have time to really dig in.
Pricing and Availability Amazon Lex is available in preview form in the US East (Northern Virginia) Region and you can start building conversational applications today!
After you sign up, you can make 10,000 text requests and 5,000 speech requests each month at no charge for the first year. After that you will pay $4.00 for each 1,000 speech requests and $0.75 for every 1,000 text requests.
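A quick back-of-the-envelope helper makes the post-free-tier pricing concrete:

```python
def monthly_lex_cost(speech_requests, text_requests):
    """Cost in dollars at the published prices, after the free tier expires."""
    return speech_requests / 1000 * 4.00 + text_requests / 1000 * 0.75

# For example, 50,000 speech and 200,000 text requests in a month:
print(monthly_lex_cost(50_000, 200_000))  # 200.0 + 150.0 = 350.0 dollars
```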
Roy Ben-Alta is Sr. Business Development Manager at AWS – Big Data & Machine Learning
We can’t believe that there are just a couple of weeks left before re:Invent 2016. If you are attending this year, you will want to check out our Big Data sessions! Unlike in previous years, these sessions are spread across multiple tracks, such as Big Data & Analytics, Architecture, Databases, and IoT. We will also have—for the first time—two mini-conferences: Big Data and Machine Learning. These mini-conferences include full-day technical deep dives on a broad variety of topics, including big data, IoT, and machine learning.
This year, we have over 40 sessions!
We have great sessions from Netflix, Chick-fil-A, Under Armour, FINRA, King.com, Beeswax, GE, Toyota Racing Development, Quantcast, Groupon, Amazon.com, Scholastic, Thomson Reuters, DataXu, Sony, EA, and many more. All sessions are recorded and made available on YouTube. Also, all slide decks from the sessions are made available on SlideShare.net after the conference.
Today, I highlight the sessions to be presented as part of the Big Data & Machine Learning mini-conferences, Big Data analytics, and relevant sessions from other tracks. The following sessions are in this year’s session catalog. Choose any link to learn more or to add a session to your schedule.
We are looking forward to meeting you at re:Invent.
BDM205 – Big Data Mini-Con State of the Union – Tuesday Join us for this general session where AWS big data experts present an in-depth look at the current state of big data. Learn about the latest big data trends and industry use cases. Hear how other organizations are using the AWS big data platform to innovate and remain competitive. Take a look at some of the most recent AWS big data announcements, as we kick off the Big Data Mini-Con.
MAC206 – Amazon Machine Learning State of the Union Mini-Con – Wednesday With the growing number of business cases for artificial intelligence (AI), machine learning and deep learning continue to drive the development of state-of-the-art technology. We see this manifested in computer vision, predictive modeling, natural language understanding, and recommendation engines. During this full day of sessions and workshops, learn how we use some of these technologies within Amazon, and how you can develop your applications to leverage the benefits of these AI services.
Deep dive customer use case sessions
ARC306 – Event Handling at Scale: Designing an Auditable Ingestion and Persistence Architecture for 10K+ events/second How does McGraw-Hill Education use the AWS platform to scale and reliably receive 10,000 learning events per second? How do we provide near-real-time reporting and event-driven analytics for hundreds of thousands of concurrent learners in a reliable, secure, and auditable manner that is cost effective? MHE designed and implemented a robust solution that integrates AWS API Gateway, AWS Lambda, Amazon Kinesis, Amazon S3, Amazon Elasticsearch Service, Amazon DynamoDB, HDFS, Amazon EMR, Amazon EC2, and other technologies to deliver this cloud-native platform across the US and soon the world. This session describes the challenges we faced, architecture considerations, how we gained confidence for a successful production roll-out, and the behind-the-scenes lessons we learned.
ARC308 – Metering Big Data at AWS: From 0 to 100 Million Records in 1 Second Learn how AWS processes millions of records per second to support accurate metering across AWS and our customers. This session shows how we migrated from traditional frameworks to AWS managed services to support a broad processing pipeline. You gain insights on how we used AWS services to build a reliable, scalable, and fast processing system using Amazon Kinesis, Amazon S3, and Amazon EMR. Along the way, we dive deep into use cases that deal with scaling and accuracy constraints. Attend this session to see AWS’s end-to-end solution that supports metering at AWS.
BDA203 – Billions of Rows Transformed in Record Time Using Matillion ETL for Amazon Redshift GE Power & Water develops advanced technologies to help solve some of the world’s most complex challenges related to water availability and quality. They had amassed billions of rows of data on on-premises databases but decided to migrate some of their core big data projects to the AWS Cloud. When they decided to transform and store it all in Amazon Redshift, they knew they needed an ETL/ELT tool that could handle this enormous amount of data and safely deliver it to its destination.
In this session, Ryan Oates, Enterprise Architect at GE Water, shares his use case, requirements, outcomes, and lessons learned. He also shares the details of his solution stack, including Amazon Redshift and Matillion ETL for Amazon Redshift in AWS Marketplace. You learn best practices for Amazon Redshift ETL supporting enterprise analytics and big data requirements, simply and at scale. You learn how to simplify data loading, transformation, and orchestration onto Amazon Redshift and how to build out a real data pipeline.
BDA204 – Leverage the Power of the Crowd To Work with Amazon Mechanical Turk With Amazon Mechanical Turk (MTurk), you can leverage the power of the crowd for a host of tasks ranging from image moderation and video transcription to data collection and user testing. You simply build a process that submits tasks to the Mechanical Turk marketplace and get results quickly, accurately, and at scale. In this session, Russ, from Rainforest QA, shares best practices and lessons learned from his experience using MTurk. The session covers the key concepts of MTurk, getting started as a Requester, and using MTurk via the API. You learn how to set and manage Worker incentives, achieve great Worker quality, and how to integrate and scale your crowdsourced application. By the end of this session, you have a comprehensive understanding of MTurk and know how to get started harnessing the power of the crowd.
BDA205 – Delighting Customers Through Device Data with Salesforce IoT Cloud and AWS IoT The Internet of Things (IoT) produces vast quantities of data that promise a deep, always connected view into customer experiences through their devices. In this connected age, the question is no longer how do you gather customer data, but what do you do with all that data. How do you ingest at massive scale and develop meaningful experiences for your customers? In this session, you’ll learn how Salesforce IoT Cloud works in concert with the AWS IoT engine to ingest and transform all of the data generated by every one of your customers, partners, devices, and sensors into meaningful action. You’ll also see how customers are using Salesforce and AWS together to process massive quantities of data, build business rules with simple, intuitive tools, and engage proactively with customers in real time. Session sponsored by Salesforce.
BDM203 – FINRA: Building a Secure Data Science Platform on AWS Data science is a key discipline in a data-driven organization. Through analytics, data scientists can uncover previously unknown relationships in data to help an organization make better decisions. However, data science is often performed from local machines with limited resources and multiple datasets on a variety of databases. Moving to the cloud can help organizations provide scalable compute and storage resources to data scientists, while freeing them from the burden of setting up and managing infrastructure. In this session, FINRA, the Financial Industry Regulatory Authority, shares best practices and lessons learned when building a self-service, curated data science platform on AWS, a project that allowed us to remove the technology middleman and empower users to choose the best compute environment for their workloads. Understand the architecture and underlying data infrastructure services that provide a secure, self-service portal to data scientists, learn how we built consensus for tooling from our data science community, hear about the benefits of increased collaboration among the scientists due to the standardized tools, and learn how you can retain the freedom to experiment with the latest technologies while maintaining information security boundaries within a virtual private cloud (VPC).
BDM204 – Visualizing Big Data Insights with Amazon QuickSight Amazon QuickSight is a fast BI service that makes it easy for you to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data. QuickSight is built to harness the power and scalability of the cloud, so you can easily run analysis on large datasets and support hundreds of thousands of users. In this session, we’ll demonstrate how you can easily get started with Amazon QuickSight: uploading files, connecting to Amazon S3 and Amazon Redshift, and creating analyses from visualizations that are optimized based on the underlying data. After we’ve built our analysis and dashboard, we’ll show you how easy it is to share it with colleagues and stakeholders in just a few seconds.
BDM303 – JustGiving: Serverless Data Pipelines, Event-Driven ETL, and Stream Processing Organizations need to gain insight and knowledge from a growing number of Internet of Things (IoT), application programming interfaces (API), clickstreams, unstructured and log data sources. However, organizations are also often limited by legacy data warehouses and ETL processes that were designed for transactional data. Building scalable big data pipelines with automated extract-transform-load (ETL) and machine learning processes can address these limitations. JustGiving is the world’s social platform for giving. In this session, we describe how we created several scalable and loosely coupled event-driven ETL and ML pipelines as part of our in-house data science platform called RAVEN. You learn how to leverage AWS Lambda, Amazon S3, Amazon EMR, Amazon Kinesis, and other services to build serverless, event-driven, data and stream processing pipelines in your organization. We review common design patterns, lessons learned, and best practices, with a focus on serverless big data architectures with AWS Lambda.
BDM306 – Netflix: Using Amazon S3 as the fabric of our big data ecosystem Amazon S3 is the central data hub for Netflix’s big data ecosystem. We currently have over 1.5 billion objects and 60+ PB of data stored in S3. As we ingest, transform, transport, and visualize data, we find this data naturally weaving in and out of S3. Amazon S3 gives us the flexibility to use an interoperable set of big data processing tools like Spark, Presto, Hive, and Pig. It serves as the hub for transporting data to additional data stores and engines like Teradata, Amazon Redshift, and Druid, as well as exporting data to reporting tools like MicroStrategy and Tableau. Over time, we have built an ecosystem of services and tools to manage our data on S3. We have a federated metadata catalog service that keeps track of all our data. We have a set of data lifecycle management tools that expire data based on business rules and compliance. We also have a portal that allows users to see the cost and size of their data footprint. In this talk, we’ll dive into these major uses of S3, as well as many smaller cases where S3 smoothly addresses an important data infrastructure need. We also provide solutions and methodologies on how you can build your own S3 big data hub.
BDM402 – Best Practices for Data Warehousing with Amazon Redshift In this session, we take an in-depth look at data warehousing with Amazon Redshift for big data analytics. We cover best practices to take advantage of Amazon Redshift’s columnar technology and parallel processing capabilities to deliver high throughput and query performance, and you learn from King.com how to design optimal schemas, load data efficiently, and use workload management.
BDM403 – Building a Real-Time Streaming Data Platform on AWS In this session, we discuss best practices for building an end-to-end streaming data application using Amazon Kinesis, and the CTO of Beeswax shares how they built their real-time streaming data platform using Amazon Kinesis Streams.
DAT202 – Migrating Your Data Warehouse to Amazon Redshift Amazon Redshift is a fast, simple, cost-effective data warehousing solution, and in this session, we look at the tools and techniques you can use to migrate your existing data warehouse to Amazon Redshift. We then present a case study on Scholastic’s migration to Amazon Redshift. Scholastic, a large 100-year-old publishing company, was running their business with older, on-premises data warehousing and analytics solutions, which could not keep up with business needs and were expensive. Scholastic also needed to include new capabilities like streaming data and real-time analytics. Scholastic migrated to Amazon Redshift, and achieved agility and faster time to insight while dramatically reducing costs. In this session, Scholastic discusses how they achieved this, including options considered, technical architecture implemented, results, and lessons learned.
DAT204 – How Thermo Fisher Is Reducing Mass Spectrometry Experiment Times from Days to Minutes with MongoDB & AWS Mass spectrometry is the gold standard for determining chemical compositions, with spectrometers often measuring the mass of a compound down to a single electron. This level of granularity produces an enormous amount of hierarchical data that doesn’t fit well into rows and columns. In this talk, learn how Thermo Fisher is using MongoDB Atlas on AWS to allow their users to get near real-time insights from mass spectrometry experiments—a process that used to take days. We also share how the underlying database service used by Thermo Fisher was built on AWS.
DAT205 – Fanatics Migrates Data to Hadoop on the AWS Cloud Using Attunity CloudBeam in AWS Marketplace Keeping a data warehouse current and relevant can be challenging because of the time and effort required to insert new data. The world’s most licensed sports merchandiser, Fanatics, used Attunity CloudBeam in AWS Marketplace to transform their data from Microsoft SQL, Oracle, and other sources to Amazon S3, where they consume the data in Hadoop and Amazon Redshift. Fanatics can now analyze the huge volumes of data from their transactional, e-commerce, and back office systems, and make this data available immediately. In this session, Fanatics shares their use case, requirements, outcomes and lessons learned. You’ll learn best practices on implementing a data lake, using Apache Kafka and how to consistently replicate data to Amazon Redshift and Amazon S3.
DAT308 – Fireside chat with Groupon, Intuit, and LifeLock on solving Big Data database challenges with Redis Redis Labs’ CMO is hosting a fireside chat with leaders from multiple industries including Groupon (e-commerce), Intuit (Finance), and LifeLock (Identity Protection). This conversation-style session covers the Big Data related challenges faced by these leading companies as they scale their applications, ensure high availability, serve the best user experience at lowest latencies, and optimize between cloud and on-premises operations. The introductory level session can appeal to both developer and DevOps functions. Attendees hear about diverse use cases such as recommendations engine, hybrid transactions and analytics operations, and time-series data analysis. The audience learns how the Redis in-memory database platform addresses the above use cases with its multi-model capability and in a cost effective manner to meet the needs of the next generation applications. Session sponsored by Redis Labs.
DAT309 – How Fulfillment by Amazon (FBA) and Scopely Improved Results and Reduced Costs with a Serverless Architecture In this session, we share an overview of leveraging serverless architectures to support high-performance data intensive applications. Fulfillment by Amazon (FBA) built the Seller Inventory Authority Platform (IAP) using Amazon DynamoDB Streams, AWS Lambda functions, Amazon Elasticsearch Service, and Amazon Redshift to improve results and reduce costs. Scopely shares how they used a flexible logging system built on Amazon Kinesis, Lambda, and Amazon ES to provide high-fidelity reporting on hotkeys in Memcached and DynamoDB, and drastically reduce the incidence of hotkeys. Both of these customers are using managed services and serverless architecture to build scalable systems that can meet projected business growth without a corresponding increase in operational costs.
DAT310 – Building Real-Time Campaign Analytics Using AWS Services Quantcast provides its advertising clients the ability to run targeted ad campaigns reaching millions of online users. The real-time bidding for campaigns runs on thousands of machines across the world. When Quantcast wanted to collect and analyze campaign metrics in real time, they turned to AWS to rapidly build a scalable, resilient, and extensible framework. Quantcast used Amazon Kinesis streams to stage data, Amazon EC2 instances to shuffle and aggregate the data, and Amazon DynamoDB and Amazon ElastiCache for building scalable time-series databases. With Elastic Load Balancing and Auto Scaling groups, they can set up distributed microservices with minimal operation overhead. This session discusses their use case, how they architected the application with AWS technologies integrated with their existing home-grown stack, and the lessons they learned.
DAT311 – How Toyota Racing Development Makes Racing Decisions in Real Time with AWS In this session, you learn how Toyota Racing Development (TRD) developed a robust and highly performant real-time data analysis tool for professional racing. In this talk, learn how we structured a reliable, maintainable, decoupled architecture built around Amazon DynamoDB as both a streaming mechanism and a long-term persistent data store. In racing, milliseconds matter and even moments of downtime can cost a race. You’ll see how we used DynamoDB together with Amazon Kinesis Streams and Amazon Kinesis Firehose to build a real-time streaming data analysis tool for competitive racing.
DAT312 – How DataXu scaled its Attribution System to handle billions of events per day with Amazon DynamoDB “Attribution” is the marketing term of art for allocating full or partial credit to advertisements that eventually lead to purchase, sign up, download, or other desired consumer interaction. DataXu shares how we use DynamoDB at the core of our attribution system to store terabytes of advertising history data. The system is cost effective and dynamically scales from 0 to 300K requests per second on demand with predictable performance and low operational overhead.
DAT313 – 6 Million New Registrations in 30 Days: How the Chick-fil-A One App Scaled with AWS Chris leads the team providing back-end services for the massively popular Chick-fil-A One mobile app that launched in June 2016. Chick-fil-A follows AWS best practices for web services and leverages numerous AWS services, including Elastic Beanstalk, DynamoDB, Lambda, and Amazon S3. This was the largest technology-dependent promotion in Chick-fil-A history. To ensure their architecture would perform at unknown and massive scale, Chris worked with AWS Support through an AWS Infrastructure Event Management (IEM) engagement and leaned on automated operations to enable load testing before launch.
DAT316 – How Telltale Games migrated its story analytics from Apache CouchDB to Amazon DynamoDB Every choice made in Telltale Games titles influences how your character develops and how the world responds to you. With millions of users making thousands of choices in a single episode, Telltale Games tracks this data and leverages it to build more relevant stories in real time as the season is developed. In this session, you’ll learn about Telltale Games’ migration from Apache CouchDB to Amazon DynamoDB, the challenges of adjusting capacity to handling spikes in database activity, and how it streamlined its analytics storage to provide new perspectives on player interaction to improve its games.
DAT318 – Migrating from RDBMS to NoSQL: How Sony Moved from MySQL to Amazon DynamoDB In this session, you learn the key differences between a relational database management system (RDBMS) and non-relational (NoSQL) databases like Amazon DynamoDB. You learn about suitable and unsuitable use cases for NoSQL databases. You’ll learn strategies for migrating from an RDBMS to DynamoDB through a 5-phase, iterative approach. See how Sony migrated an on-premises MySQL database to the cloud with Amazon DynamoDB, and see the results of this migration.
GAM301 – How EA Leveraged Amazon Redshift and AWS Partner 47Lining to Gather Meaningful Player Insights In November 2015, Capital Games launched a mobile game accompanying a major feature film release. The back end of the game is hosted in AWS and uses big data services like Amazon Kinesis, Amazon EC2, Amazon S3, Amazon Redshift, and AWS Data Pipeline. Capital Games describe some of the challenges of their initial setup and usage of Amazon Redshift and Amazon EMR. They then go over their engagement with AWS Partner 47Lining and talk about specific best practices regarding solution architecture, data transformation pipelines, and system maintenance using AWS big data services. Attendees of this session should expect a candid view of the process of implementing a big data solution, from problem statement identification to visualizing data, with an in-depth look at the technical challenges and hurdles along the way.
LFS303 – How to Build a Big Data Analytics Data Lake For discovery phase research, life sciences companies have to support infrastructure that processes millions to billions of transactions. The advent of a data lake to accomplish such a task is showing itself to be a stable and productive data platform pattern to meet the goal. We discuss how to build a data lake on AWS, using services and techniques such as AWS CloudFormation, Amazon EC2, Amazon S3, IAM, and AWS Lambda. We also review a reference architecture from Amgen that uses a data lake to aid in their Life Science Research.
SVR301 – Real-time Data Processing Using AWS Lambda and Amazon Kinesis In this session, you learn how Thomson Reuters leverages AWS for its Product Insight service. The service provides insights to collect usage analytics for Thomson Reuters products. They walk through its architecture and demonstrate how they leverage Amazon Kinesis Streams, Amazon Kinesis Firehose, AWS Lambda, Amazon S3, Amazon Route 53, and AWS KMS for near real-time access to data being collected around the globe. They also outline how applying AWS methodologies benefited their business, such as time-to-market and cross-region ingestion, auto-scaling capabilities, low latency, security features, and extensibility.
SVR305 – ↑↑↓↓←→←→ BA Lambda Start Ever wished you had a list of cheat codes to unleash the full power of AWS Lambda for your production workload? Come learn how to build a robust, scalable, and highly available serverless application using AWS Lambda. In this session, we discuss hacks and tricks for maximizing your AWS Lambda performance, such as leveraging container reuse, using the 500 MB scratch space and local cache, creating custom metrics for managing operations, aligning upstream and downstream services to scale along with Lambda, and many other workarounds and optimizations across your entire function lifecycle. You also learn how Hearst converted its real-time clickstream analytics data pipeline from a server-based model to a serverless one. The infrastructure of the data pipeline relied on Amazon EC2 instances and cron jobs to shepherd data through the process. In 2016, Hearst converted its data pipeline architecture to a serverless process based on event triggers and the power of AWS Lambda. By moving from a time-based process to a trigger-based process, Hearst improved its pipeline latency times by 50%.
SVR308 – Content and Data Platforms at Vevo: Rebuilding and Scaling from Zero in One Year Vevo has undergone a complete strategic and technical reboot, driven not only by product but also by engineering. Since November 2015, Vevo has been replacing monolithic, legacy content services with a modern, modular, microservices architecture, all while developing new features and functionality. In parallel, Vevo has built its data platform from scratch to power the internal analytics as well as a unique music video consumption experience through a new personalized feed of recommendations — all in less than one year. This has been a monumental effort that was made possible in this short time span largely because of AWS technologies. The content team has been heavily using serverless architectures and AWS Lambda in the form of microservices, taking a similar approach to functional programming, which has helped us speed up the development process and time to market. The data team has been building the data platform by heavily leveraging Amazon Kinesis for data exchange across services, Amazon Aurora for consumer-facing services, Apache Spark on Amazon EMR for ETL + Machine Learning, as well as Amazon Redshift as the core analytics data store.
Machine learning sessions
MAC201 – Getting to Ground Truth with Amazon Mechanical Turk Jump-start your machine learning project by using the crowd to build your training set. Before you can train your machine learning algorithm, you need to take your raw inputs and label, annotate, or tag them to build your ground truth. Learn how to use the Amazon Mechanical Turk marketplace to perform these tasks. We share Amazon's best practices, developed while training our own machine learning algorithms, and walk you through how to quickly get affordable, high-quality training data.
MAC202 – Deep Learning in Alexa Neural networks have a long and rich history in automatic speech recognition. In this talk, we present a brief primer on the origins of deep learning in spoken language processing and then explore today's world of Alexa. Alexa is the Amazon service that understands spoken language and powers Amazon Echo. Alexa relies heavily on machine learning and deep neural networks for speech recognition, text-to-speech, language understanding, and more. We also discuss the Alexa Skills Kit, which lets any developer teach Alexa new skills.
MAC205 – Deep Learning at Cloud Scale: Improving Video Discoverability by Scaling Up Caffe on AWS Deep learning continues to push the state of the art in domains such as video analytics, computer vision, and speech recognition. Deep networks offer remarkable levels of representational power, feature learning, and abstraction, but this approach comes at the cost of a significant increase in required compute, which makes the AWS cloud an excellent environment for training. Innovators in this space are applying deep learning to a variety of applications. One such innovator, Vilynx, a startup based in Palo Alto, realized that the current pre-roll advertising-based models for mobile video weren't returning publishers' desired levels of engagement. In this session, we explain the algorithmic challenges of scaling across multiple nodes and what Intel is doing on AWS to overcome them. We describe the benefits of using AWS CloudFormation to set up a distributed training environment for deep networks. We also showcase Vilynx's contributions to video discoverability and explain how Vilynx uses AWS tools to understand video content.
MAC301 – Transforming Industrial Processes with Deep Learning Deep learning has revolutionized computer vision by significantly increasing the accuracy of recognition systems. This session discusses how the Amazon Fulfillment Technologies Computer Vision Research team has harnessed deep learning to identify inventory defects in Amazon's warehouses. Beginning with a brief overview of how orders on Amazon.com are fulfilled, the session describes a combination of hardware and software that uses computer vision and deep learning to visually examine bins of Amazon inventory and locate possible mismatches between the physical inventory and inventory records. With the growth of deep learning, the emphasis of new system design shifts from clever algorithms to innovative ways to harness available data.
MAC302 – Leveraging Amazon Machine Learning, Amazon Redshift, and an Amazon Simple Storage Service Data Lake for Strategic Advantage in Real Estate The Howard Hughes Corporation partnered with 47Lining to develop a managed enterprise data lake (EDL) based on Amazon S3. The purpose of the managed EDL is to fuse relevant on-premises and third-party data so that Howard Hughes can answer its most valuable business questions. The first analysis was a lead-scoring model that uses Amazon Machine Learning (Amazon ML) to predict propensity to purchase high-end real estate. The model is based on a combined set of public and private data sources, including all publicly recorded real estate transactions in the US for the past 35 years. By changing its business process for identifying and qualifying leads to use the results of data-driven analytics from its managed data lake in AWS, Howard Hughes increased the number of identified qualified leads in its pipeline by over 400% and reduced the acquisition cost per lead more than tenfold. In this session, you see a practical example of how to use Amazon ML to improve business results, how to architect a data lake with Amazon S3 that fuses on-premises, third-party, and public datasets, and how to train and run an Amazon ML model to attain predictions.
MAC303 – Developing Classification and Recommendation Engines with Amazon EMR and Apache Spark Customers are adopting Apache Spark on Amazon EMR for large-scale machine learning workloads, especially for applications that power customer segmentation and content recommendation. By leveraging Spark ML, Spark's library of distributed machine learning algorithms, customers can quickly build and execute massively parallel machine learning jobs. Additionally, Spark applications can train models in streaming or batch contexts and can access data from Amazon S3, Amazon Kinesis, Apache Kafka, Amazon Elasticsearch Service, Amazon Redshift, and other services. This session explains how to quickly and easily create scalable Spark clusters with Amazon EMR, build and share models using Apache Zeppelin notebooks, and create a sample application using Spark Streaming that updates models with real-time data.
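To make the Spark ML pattern concrete, here is a minimal PySpark sketch of a classification pipeline of the kind the session describes; the S3 path and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("segmentation-demo").getOrCreate()

# Hypothetical dataset with numeric feature columns and a 0/1 label column.
df = spark.read.csv("s3://my-bucket/customers.csv", header=True, inferSchema=True)

assembler = VectorAssembler(inputCols=["age", "visits", "spend"], outputCol="features")
lr = LogisticRegression(labelCol="label", featuresCol="features")

model = Pipeline(stages=[assembler, lr]).fit(df)
model.transform(df).select("label", "prediction").show(5)
```

The same pipeline can be developed interactively in a Zeppelin notebook on the EMR cluster before being scheduled as a batch step.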
MAC306 – Using MXNet for Recommendation Modeling at Scale For many companies, recommendation systems solve important machine learning problems. But as recommendation systems grow to millions of users and millions of items, they pose significant challenges when deployed at scale. The user-item matrix can have trillions of entries (or more), most of which are zero, and such sparse data requires special techniques before common ML methods become practical. Learn how to use MXNet to build neural network models for recommendation systems that can scale efficiently to large sparse datasets.
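As a toy illustration of the modeling idea only (the session's scaling techniques rely on MXNet's sparse NDArray support, which this dense sketch omits), here is a minimal Gluon matrix-factorization model in which the dot product of user and item embeddings predicts a rating:

```python
import numpy as np
import mxnet as mx
from mxnet import autograd, gluon, nd

n_users, n_items, dim = 1000, 500, 16

class MatrixFactorization(gluon.nn.Block):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.user_emb = gluon.nn.Embedding(n_users, dim)
        self.item_emb = gluon.nn.Embedding(n_items, dim)

    def forward(self, users, items):
        # Predicted rating = dot product of the user and item embedding vectors.
        return (self.user_emb(users) * self.item_emb(items)).sum(axis=1)

net = MatrixFactorization()
net.initialize()
loss_fn = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), "adam", {"learning_rate": 0.01})

# One training step on a random, purely illustrative mini-batch.
users = nd.array(np.random.randint(0, n_users, size=64))
items = nd.array(np.random.randint(0, n_items, size=64))
ratings = nd.array(np.random.uniform(1, 5, size=64))

with autograd.record():
    loss = loss_fn(net(users, items), ratings)
loss.backward()
trainer.step(batch_size=64)
```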
MAC307 – Predicting Customer Churn with Amazon Machine Learning In this session, we take a specific business problem, predicting telco customer churn, and explore the practical aspects of building and evaluating an Amazon Machine Learning model. We explore considerations such as assigning a dollar value to applying the model, based on the relative costs of false positive and false negative errors. We discuss all aspects of putting Amazon ML to practical use, including how to build multiple models to choose from, put models into production, and update them. We also discuss using Amazon Redshift and Amazon S3 with Amazon ML.
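For a flavor of what putting Amazon ML to practical use looks like, here is a minimal boto3 sketch of scoring one record against a trained binary model in real time; the model ID, endpoint, and record fields are hypothetical placeholders:

```python
import boto3

ml = boto3.client("machinelearning", region_name="us-east-1")

response = ml.predict(
    MLModelId="ml-exampleModelId",  # hypothetical model ID
    Record={                        # feature values are passed as strings
        "accountWeeks": "128",
        "customerServiceCalls": "4",
        "internationalPlan": "no",
    },
    PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
)

# For a binary model, predictedLabel is "0" or "1"; the raw score is what you
# threshold when trading off false positives against false negatives.
print(response["Prediction"]["predictedLabel"])
```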
Services sessions: Architecture and best practices
BDM201 – Big Data Architectural Patterns and Best Practices on AWS The world is producing an ever-increasing volume, velocity, and variety of big data. Consumers and businesses are demanding up-to-the-second (or even millisecond) analytics on their fast-moving data, in addition to classic batch processing. AWS delivers many technologies for solving big data problems. But which services should you use, and why, when, and how? In this session, we simplify big data processing as a data bus comprising various stages: ingest, store, process, and visualize. Next, we discuss how to choose the right technology in each stage based on criteria such as data structure, query latency, cost, request rate, item size, data volume, and durability. Finally, we provide reference architectures, design patterns, and best practices for assembling these technologies to solve your big data problems at the right cost.
BDM301 – Best Practices for Apache Spark on Amazon EMR Organizations need to perform increasingly complex analysis on data — streaming analytics, ad-hoc querying, and predictive analytics — in order to get better customer insights and actionable business intelligence. Apache Spark has recently emerged as the framework of choice to address many of these challenges. In this session, we show you how to use Apache Spark on AWS to implement and scale common big data use cases such as real-time data processing, interactive data science, predictive analytics, and more. We talk about common architectures, best practices to quickly create Spark clusters using Amazon EMR, and ways to integrate Spark with other big data services in AWS.
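As a rough sketch of the cluster-creation step, the following boto3 call launches a transient Spark cluster on Amazon EMR; the release label, instance types, and counts are illustrative assumptions, not recommendations from the session:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="spark-demo",
    ReleaseLabel="emr-5.0.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m4.xlarge",
        "SlaveInstanceType": "m4.xlarge",
        "InstanceCount": 3,
        # Transient cluster: terminate automatically when all steps finish.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])
```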
BDM302 – Real-Time Data Exploration and Analytics with Amazon Elasticsearch Service and Kibana Elasticsearch is a fully featured search engine used for real-time analytics, and Amazon Elasticsearch Service makes it easy to deploy Elasticsearch clusters on AWS. With Amazon ES, you can ingest and process billions of events per day and explore the data using Kibana to discover patterns. In this session, we use Apache web logs as an example and show you how to build an end-to-end analytics solution. First, we cover how to configure an Amazon ES cluster and ingest data into it using Amazon Kinesis Firehose. We look at best practices for choosing instance types, storage options, shard counts, and index rotations based on the throughput of incoming data. Then we demonstrate how to set up a Kibana dashboard and build custom dashboard widgets. Finally, we dive deep into the Elasticsearch query DSL and review approaches for generating custom, ad-hoc reports.
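To give a small taste of the query DSL portion, here is a hypothetical ad-hoc aggregation run against an Amazon ES domain endpoint over HTTP; in practice the request must also be SigV4-signed or permitted by the domain's access policy:

```python
import json
import requests

# Hypothetical Amazon ES domain endpoint and index pattern.
ES_ENDPOINT = "https://search-weblogs-example.us-east-1.es.amazonaws.com"

query = {
    "size": 0,
    "query": {"match": {"status": "404"}},   # only 404 responses
    "aggs": {                                # top 10 URLs producing them
        "by_url": {"terms": {"field": "request.keyword", "size": 10}}
    },
}

resp = requests.post(
    ES_ENDPOINT + "/logs-*/_search",
    data=json.dumps(query),
    headers={"Content-Type": "application/json"},
)
print(resp.json()["aggregations"]["by_url"]["buckets"])
```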
BDM304 – Analyzing Streaming Data in Real-time with Amazon Kinesis Analytics As more and more organizations strive to gain real-time insights into their business, streaming data has become ubiquitous. Typical streaming data analytics solutions require specific skills and complex infrastructure. However, with Amazon Kinesis Analytics, you can analyze streaming data in real time with standard SQL: there is no need to learn new programming languages or processing frameworks. In this session, we dive deep into the capabilities of Amazon Kinesis Analytics using real-world examples. We present an end-to-end streaming data solution using Amazon Kinesis Streams for data ingestion, Amazon Kinesis Analytics for real-time processing, and Amazon Kinesis Firehose for persistence. We review in detail how to write SQL queries using streaming data and discuss best practices to optimize and monitor your Amazon Kinesis Analytics applications. Lastly, we discuss how to estimate the cost of the entire system.
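To show the shape of such an application, here is a minimal, hypothetical boto3 sketch that deploys a tumbling-window SQL aggregation as a Kinesis Analytics application; the input and output stream wiring is omitted for brevity:

```python
import boto3

# Streaming SQL: count page views per page over tumbling one-minute windows.
APP_SQL = """
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (page VARCHAR(256), views INTEGER);
CREATE OR REPLACE PUMP "STREAM_PUMP" AS
  INSERT INTO "DESTINATION_SQL_STREAM"
  SELECT STREAM page, COUNT(*) AS views
  FROM "SOURCE_SQL_STREAM_001"
  GROUP BY page, STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '60' SECOND);
"""

client = boto3.client("kinesisanalytics", region_name="us-east-1")
client.create_application(
    ApplicationName="clickstream-counts",  # hypothetical name
    ApplicationCode=APP_SQL,
)
```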
BDM401 – Deep Dive: Amazon EMR Best Practices & Design Patterns Amazon EMR is one of the largest Hadoop operators in the world. In this session, we introduce you to Amazon EMR design patterns such as using Amazon S3 instead of HDFS, taking advantage of both long and short-lived clusters, and other Amazon EMR architectural best practices. We talk about how to scale your cluster up or down dynamically and introduce you to ways you can fine-tune your cluster. We also share best practices to keep your Amazon EMR cluster cost-efficient. Finally, we dive into some of our recent launches to keep you current on our latest features.
DAT304 – Deep Dive on Amazon DynamoDB Explore Amazon DynamoDB capabilities and benefits in detail and learn how to get the most out of your DynamoDB database. We go over best practices for schema design with DynamoDB across multiple use cases, including gaming, AdTech, IoT, and others. We explore designing efficient indexes, scanning, and querying, and go into detail on a number of recently released features, including JSON document support, DynamoDB Streams, and more. We also provide lessons learned from operating DynamoDB at scale, including provisioning DynamoDB for IoT.
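As one concrete instance of the schema-design advice, here is a minimal boto3 sketch of querying a composite-key table by partition key and sort-key prefix; the table and attribute names are hypothetical:

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb", region_name="us-east-1").Table("GameScores")

# Fetch one player's 2016 scores, newest first, without scanning the table.
resp = table.query(
    KeyConditionExpression=(
        Key("player_id").eq("p-123") & Key("game_date").begins_with("2016-")
    ),
    ScanIndexForward=False,  # descending sort-key order
    Limit=10,
)
for item in resp["Items"]:
    print(item["game_date"], item["score"])
```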
Workshops
BDM202 – Workshop: Building Your First Big Data Application with AWS Want to get ramped up on how to use Amazon's big data web services and launch your first big data application on AWS? Join us in this workshop as we build a big data application in real time using Amazon EMR, Amazon Redshift, Amazon Kinesis, Amazon DynamoDB, and Amazon S3. We review architecture design patterns for big data solutions on AWS and give you access to a take-home lab so that you can rebuild and customize the application yourself.
IOT306 – IoT Visualizations and Analytics In this workshop, we focus on visualizations of IoT data using the ELK stack (Amazon Elasticsearch Service, Logstash, and Kibana) together with Amazon Kinesis. We dive into how these visualizations can give you new capabilities and a deeper understanding of your device data, drawing on the context your devices provide about the world around them.
MAC401 – Scalable Deep Learning Using MXNet Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding, and recommendation engines. One of the key reasons for this progress is the availability of highly flexible and developer-friendly deep learning frameworks. During this workshop, members of the Amazon Machine Learning team provide a short background on deep learning, focusing on relevant application domains, and an introduction to the powerful and scalable deep learning framework MXNet. By the end of this tutorial, you'll have gained hands-on experience with a variety of applications, including computer vision and recommendation engines, as well as exposure to preconfigured Deep Learning AMIs and CloudFormation templates that can help speed your development.
STG312 – Workshop: Working with AWS Snowball – Accelerating Data Ingest into the Cloud This workshop gives customers the opportunity to work hands-on with the AWS Snowball service: attendees break out into small teams to perform various on-premises-to-cloud data transfer scenarios using actual Snowball devices. These scenarios include migrating backup and archive data to S3-IA and Amazon Glacier, migrating an HDFS cluster to S3 for use with Amazon EMR and Amazon Redshift, and leveraging the Snowball API and SDK to integrate AWS Snowball into a custom application. The session opens with an overview of the service, objectives, and guidance on where to find resources. Attendees should have a basic familiarity with AWS storage services (Amazon S3 and Amazon Glacier). Prerequisites: an AWS account established and available for use during the workshop, and your own laptop.
While adverts for the Echo represent owners calling out to Alexa with a request or question — “Alexa, what is the time?”, “Alexa, order me a pizza”, “Alexa, how do you get red wine out of the carpet?” — any digital maker using the free API from the Amazon Developer team had to include a button within their build, putting a slight dampener on the futuristic vibe of the disembodied Alexa. (We know about this dampening effect, because a bunch of you complained vocally about it.)
With the update removing the press-a-button limitation, anyone using the Alexa Voice Service (AVS) can now wake the assistant with a 'wake word', calling out 'Alexa', 'Echo', or 'Amazon'. Thankfully, at least in my household, this choice of wake words means the device won't be listening whenever anyone calls my name.
Winners of the challenge received various awards including Amazon vouchers, Echos, and trophies. A full list of winners can be seen here, but we thought you’d like to see some of the most noteworthy builds, like Roxie the Voice-Activated Pitching Robot by Terren Peterson:
Another standout was Bastiaan Slee's Coffee Machine: Amazon Alexa & Raspberry Pi, his Internet of Voice project. If you want to build something similar, you'll find instructions at https://www.hackster.io/bastiaan-slee/coffee-machine-amazon-alexa-raspberry-pi-cbc613
One thing I'm looking forward to is integrating the AVS into situations where hands-free truly is the only option. Not only will we begin to see an increase in Alexa-pimped cars, bikes, and drones, but I also see great advances in the use of the service by people with accessibility needs, such as mobility concerns or visual impairments. The Smart Cap, winner of the Intermediate Alexa Skill Set category, is a great example. Get in touch if you create something yourself!