Tag Archives: MQTT

AWS Hot Startups – February 2017

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-hot-startups-february-2017-2/

As we finish up the month of February, Tina Barr is back with some awesome startups.


This month we are bringing you five innovative hot startups:

  • GumGum – Creating and popularizing the field of in-image advertising.
  • Jiobit – Smart tags to help parents keep track of kids.
  • Parsec – Offers flexibility in hardware and location for PC gamers.
  • Peloton – Revolutionizing indoor cycling and fitness classes at home.
  • Tendril – Reducing energy consumption for homeowners.

If you missed any of our January startups, make sure to check them out here.

GumGum (Santa Monica, CA)
GumGum is best known for inventing and popularizing the field of in-image advertising. Founded in 2008 by Ophir Tanz, the company is on a mission to unlock the value held within the vast content produced daily via social media, editorials, and broadcasts in a variety of industries. GumGum powers campaigns across more than 2,000 premium publishers, which are seen by over 400 million users.

In-image advertising was pioneered by GumGum and has given companies a platform to deliver highly visible ads to a place where the consumer’s attention is already focused. Using image recognition technology, GumGum delivers targeted placements as contextual overlays on related pictures, as banners that fit on all screen sizes, or as In-Feed placements that blend seamlessly into the surrounding content. Using Visual Intelligence, GumGum can scour social media and broadcast TV for all images and videos related to a brand, allowing companies to gain a stronger understanding of their audience and how they are relating to that brand on social media.

GumGum relies on AWS for its image processing and ad serving operations. Using AWS infrastructure, GumGum currently processes 13 million requests per minute across the globe and generates 30 TB of new data every day. The company uses a suite of services including, but not limited to, Amazon EC2, Amazon S3, Amazon Kinesis, Amazon EMR, AWS Data Pipeline, and Amazon SNS. AWS edge locations allow GumGum to serve its customers in the US, Europe, Australia, and Japan, and the company plans to expand its infrastructure to additional APAC regions in the future.

For a look inside GumGum’s startup culture, check out their first Hackathon!

Jiobit (Chicago, IL)
Jiobit was inspired by a real event that took place in a crowded Chicago park. A couple of summers ago, John Renaldi experienced every parent’s worst nightmare – he lost track of his then 6-year-old son in a public park for almost 30 minutes. John knew he wasn’t the only parent with this problem. After months of research, he determined that over 50% of parents have had a similar experience and an even greater percentage are actively looking for a way to prevent it.

Jiobit is the world’s smallest and longest-lasting smart tag that helps parents keep track of their kids in every location – indoors and outdoors. The small device is kid-proof: lightweight, durable, and waterproof. It acts as a virtual “safety harness,” using a combination of Bluetooth, Wi-Fi, multiple cellular networks, GPS, and sensors to provide accurate locations in real time. Jiobit can automatically learn routes and locations, and will send parents an alert if their child does not arrive at their destination on time. The talented team of experienced engineers, designers, marketers, and parents has over 150 patents and has shipped dozens of hardware and software products worldwide.

The Jiobit team is utilizing a number of AWS services in the development of their product. Security is critical to the overall product experience, and they are over-engineering security on both the hardware and software side with the help of AWS. Jiobit is also working toward being the first child-monitoring device to implement an Alexa skill via the Amazon Echo device (see here for a demo!). The devices use AWS IoT to send and receive data from the Jio Cloud over the MQTT protocol. Once data is received, they use AWS Lambda to parse it and take appropriate actions, including storing relevant data using Amazon DynamoDB and sending location data to Amazon Machine Learning processing jobs.
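The MQTT-to-Lambda flow described above can be sketched roughly as follows. This is a hypothetical illustration, not Jiobit's actual code; the payload fields, table name, and accuracy threshold are all invented:

```javascript
'use strict';

// Decide what to do with one incoming device location message
// delivered by AWS IoT over MQTT. Returns a plan of actions that a
// Lambda handler would then execute against DynamoDB and the ML jobs.
function routeLocationEvent(event) {
    var actions = [];
    // Always persist the raw location fix.
    actions.push({
        action: 'dynamodb.put',
        table: 'DeviceLocations',        // assumed table name
        item: {
            deviceId: event.deviceId,
            timestamp: event.timestamp,
            lat: event.lat,
            lon: event.lon
        }
    });
    // Only forward accurate fixes to the ML scoring pipeline.
    if (event.accuracyMeters !== undefined && event.accuracyMeters <= 50) {
        actions.push({ action: 'ml.score', deviceId: event.deviceId });
    }
    return actions;
}

// A real Lambda handler would wrap this as:
// exports.handler = (event, context, callback) => { ...execute plan... };
```

Separating the routing decision from the AWS SDK calls, as above, keeps the interesting logic easy to test without touching live services.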

Visit the Jiobit blog for more information.

Parsec (New York, NY)
Parsec operates under the notion that everyone should have access to the best computing in the world because access to technology creates endless opportunities. Founded in 2016 by Benjy Boxer and Chris Dickson, Parsec aims to eliminate the burden of hardware upgrades that users frequently experience by building the technology to make a computer in the cloud available anywhere, at any time. Today, they are using their technology to enable greater flexibility in the hardware and location that PC gamers choose to play their favorite games on. Check out this interview with Benjy and our Startups team for a look at how Parsec works.

Parsec built their first product to improve the gaming experience; gamers no longer have to purchase consoles or expensive PCs to access the entertainment they love. Their low latency video streaming and networking technologies allow gamers to remotely access their gaming rig and play on any Windows, Mac, Android, or Raspberry Pi device. With the global reach of AWS, Parsec is able to deliver cloud gaming to the median user in the US and Europe with less than 30 milliseconds of network latency.

Parsec users currently have two options available to start gaming with cloud resources. They can either set up their own machines with the Parsec AMI in their region or rely on Parsec to manage everything for a seamless experience. In either case, Parsec uses the g2.2xlarge EC2 instance type. Parsec is using Amazon Elastic Block Store to store games, Amazon DynamoDB for scalability, and Amazon EC2 for its web servers and various APIs. They also deal with a high volume of logs and take advantage of the Amazon Elasticsearch Service to analyze the data.

Be sure to check out Parsec’s blog to keep up with the latest news.

Peloton (New York, NY)
The idea for Peloton was born in 2012 when John Foley, Founder and CEO, and his wife Jill started realizing the challenge of balancing work, raising young children, and keeping up with personal fitness. This is a common challenge people face – they want to work out, but there are a lot of obstacles that stand in their way. Peloton offers a solution that enables people to join indoor cycling and fitness classes anywhere, anytime.

Peloton has created a cutting-edge indoor bike that streams up to 14 hours of live classes daily and has over 4,000 on-demand classes. Users can access live classes from world-class instructors from the convenience of their home or gym. The bike tracks progress with in-depth ride metrics and allows people to compete in real-time with other users who have taken a specific ride. The live classes even feature top DJs that play current playlists to keep users motivated.

With an aggressive marketing campaign, which has included high-visibility TV advertising, Peloton made the decision to run its entire platform in the cloud. Most recently, they ran an ad during an NFL playoff game and the request rate to their site increased from ~2k/min to ~32.2k/min within 60 seconds. As they continue to grow and diversify, they are utilizing services such as Amazon S3 for thousands of hours of archived on-demand video content, Amazon Redshift for data warehousing, and Application Load Balancer for intelligent request routing.

Learn more about Peloton’s engineering team here.

Tendril (Denver, CO)
Tendril was founded in 2004 with the goal of helping homeowners better manage and reduce their energy consumption. Today, electric and gas utilities use Tendril’s data analytics platform on more than 140 million homes to deliver a personalized energy experience for consumers around the world. Using the latest technology in decision science and analytics, Tendril can gain access to real-time, ever evolving data about energy consumers and their homes so they can improve customer acquisition, increase engagement, and orchestrate home energy experiences. In turn, Tendril helps its customers unlock the true value of energy interactions.

AWS helps Tendril run its services globally, while scaling capacity up and down as needed, and in real-time. This has been especially important in support of Tendril’s newest solution, Orchestrated Energy, a continuous demand management platform that calculates a home’s thermal mass, predicts consumer behavior, and integrates with smart thermostats and other connected home devices. This solution allows millions of consumers to create a personalized energy plan for their home based on their individual needs.

Tendril builds and maintains most of its infrastructure services with open source tools running on Amazon EC2 instances, while also making use of AWS services such as Elastic Load Balancing, Amazon API Gateway, Amazon CloudFront, Amazon Route 53, Amazon Simple Queue Service, and Amazon RDS for PostgreSQL.

Visit the Tendril Blog for more information!

— Tina Barr

Introducing the AWS IoT Button Enterprise Program

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/introducing-the-aws-iot-button-enterprise-program/

The AWS IoT Button first made its appearance on the IoT scene in October 2015 at AWS re:Invent, with the introduction of the AWS IoT service. That year, all re:Invent attendees received the AWS IoT Button, giving them the opportunity to get hands-on with AWS IoT. Since that time, the AWS IoT Button has been made broadly available to anyone interested in the clickable IoT device.

During this past AWS re:Invent 2016 conference, the AWS IoT Button was launched into the enterprise with the AWS IoT Button Enterprise Program. This program is intended to help businesses offer new services or improve existing products at the click of a physical button. With the AWS IoT Button Enterprise Program, enterprises can use a programmable AWS IoT Button to increase customer engagement, expand applications, and offer new innovations to customers by simplifying the user experience. By harnessing the power of IoT, businesses can respond to customer demand for their products and services in real time while providing a direct line of communication for customers, all via a simple device.



AWS IoT Button Enterprise Program

Let’s discuss how the new AWS IoT Button Enterprise Program works. Businesses start by placing a bulk order of AWS IoT Buttons and providing a custom label for the branding of the buttons. Amazon manufactures the buttons and pre-provisions the IoT Button devices, giving each a certificate and unique private key to grant access to AWS IoT and ensure secure communication with the AWS cloud. This allows for easier configuration and helps customers get started with programming the IoT Button device more easily.

Businesses then design and build their IoT solution with the button devices and create companion applications for them. The AWS IoT Button Enterprise Program provides businesses some complimentary assistance directly from AWS to ensure a successful deployment. The deployed devices then only need to be configured with Wi-Fi at user locations in order to function.



For enterprises, there are several use cases that would benefit from the implementation of an IoT button solution. Here are some ideas:

  • Reordering services or custom products such as pizza or medical supplies
  • Requesting a callback from a customer service agent
  • Retail operations such as a call for assistance button in stores or restaurants
  • Inventory systems for capturing product counts for inventory
  • Healthcare applications such as alert or notification systems for the disabled or elderly
  • Interface with Smart Home systems to turn devices on and off such as turning off outside lights or opening the garage door
  • Guest check-in/check-out systems


AWS IoT Button

At the heart of the AWS IoT Button Enterprise Program is the AWS IoT Button. The AWS IoT Button is a 2.4 GHz Wi-Fi device with WPA2-PSK support that has three click types: single click, double click, and long press. Note that a long press click type is sent if the button is pressed for 1.5 seconds or longer. The IoT Button has a small LED light whose color patterns indicate the status of the button. A blinking white light signifies that the IoT Button is connecting to Wi-Fi and getting an IP address, while a blinking blue light signifies that the button is in wireless access point (AP) mode. The data payload that is sent from the device when pressed contains the device serial number, the battery voltage, and the click type.
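For illustration, a payload of the shape described above, and some minimal handling of it, might look like the following. The serial number is a placeholder and the action names are our own invention, so verify the field names against your own device's messages:

```javascript
'use strict';

// A payload of the shape the AWS IoT Button is commonly documented to
// send: serial number, battery voltage (a string with units), and the
// click type. The serial number here is illustrative only.
var samplePayload = {
    serialNumber: 'G030JF051234ABCD',
    batteryVoltage: '2322mV',
    clickType: 'SINGLE'          // SINGLE | DOUBLE | LONG
};

// Parse the battery voltage string into a number of millivolts,
// e.g. for low-battery monitoring.
function batteryMillivolts(payload) {
    return parseInt(payload.batteryVoltage, 10);
}

// Map a click type to an application-level action (names are ours).
function actionForClick(clickType) {
    switch (clickType) {
        case 'SINGLE': return 'request-agent';
        case 'DOUBLE': return 'send-account-status';
        case 'LONG':   return 'unassigned';
        default:       return 'unknown';
    }
}
```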

Currently, there are three ways to get started building your AWS IoT Button solution. The first option is to use the AWS IoT Button companion mobile app. The mobile app will create the required AWS IoT resources, including the TLS 1.2 certificates, and create an AWS IoT rule tied to AWS Lambda. Additionally, it will enable the IoT Button device, via AWS IoT, to be an event source that invokes a new AWS Lambda function of your choosing from the Lambda blueprints. You can download the mobile app for Android and iOS below.


The second option is to use the AWS Lambda Blueprint Wizard as an easy way to start using your AWS IoT Button. Like the mobile app, the wizard will create the required AWS IoT resources for you and add an event source to your button that invokes a new Lambda function.

The third option is to follow the step by step tutorial in the AWS IoT getting started guide and leverage the AWS IoT console to create these resources manually.

Once you have configured your IoT Button successfully and created a simple one-click solution using one of the aforementioned getting started guides, you should be ready to start building your own custom IoT Button solution. With the click of a button, your business will be able to build new services for customers, offer new features for existing services, and automate business processes to operate more efficiently.

The basic technical flow of an AWS IoT button solution is as follows:

  • A button is clicked and a secure connection is established with AWS IoT over TLS 1.2
  • The button data payload is sent to the AWS IoT Device Gateway
  • The rules engine evaluates received messages (JSON) published into AWS IoT and performs actions or triggers AWS services based on defined business rules
  • The triggered AWS service executes the defined action
  • The device state can be read, stored, and set with Device Shadows
  • Mobile and web apps can receive and update data based upon the action

Now that you have general knowledge about the AWS IoT button, we should jump into a technical walk-through of building an AWS IoT button solution.


AWS IoT Button Solution Walkthrough

We will dive more deeply into building an AWS IoT Button solution with a quick example of a use case for providing one-click customer service options for a business.

To get started, I will go to the AWS IoT console, register my IoT Button as a thing, and create a thing type. In the console, I select the Registry and then Things options in the console menu.

The name of my IoT thing in this example will be TEW-AWSIoTButton. If you want to categorize your IoT things, you can create a thing type and assign it to similar IoT ‘things’. I will categorize my IoT thing, TEW-AWSIoTButton, as an IoTButton thing type with a One-click-device attribute key and select the Create thing button.

After my AWS IoT Button device, TEW-AWSIoTButton, is registered in the Thing Registry, the next step is to acquire the required X.509 certificate and keys. I will have AWS IoT generate the certificate for this device, but the service also allows you to use your own certificates. Authenticating the connection with X.509 certificates helps protect the data exchange between your device and the AWS IoT service.

When the certificates are generated with AWS IoT, it is important that you download and save all of the files created since the public and private keys will not be available after you leave the download page. Additionally, do not forget to download the root CA for AWS IoT from the link provided on the page with your generated certificates.

The newly created certificate will be inactive; therefore, it is vital that you activate the certificate prior to use. AWS IoT authenticates certificates using the TLS protocol’s client authentication mode. The certificates enable asymmetric keys to be used with devices, and the AWS IoT service will validate the certificate’s status and the AWS account against a registry of certificates. The service will challenge for proof of ownership of the private key corresponding to the public key contained in the certificate. The final step in securing the AWS IoT connection to my IoT Button is to create and/or attach an IAM policy for authorization.

I will choose the Attach a policy button and then select the Create a Policy option in order to build a specific policy for my IoT Button. In the Name field of the new IoT policy, I will enter IoTButtonPolicy. Since the AWS IoT Button device only supports button presses, our AWS IoT Button policy will only need publish permissions. For this reason, this policy will only allow the iot:Publish action.


For the Resource ARN of the IoT policy, AWS IoT Buttons typically follow the format pattern arn:aws:iot:TheRegion:AWSAccountNumber:topic/iotbutton/ButtonSerialNumber, which determines the Resource ARN for this IoT Button policy.

I should note that if you are creating an IAM policy for an IoT device that is not an AWS IoT Button, the Resource ARN format pattern would instead be: arn:aws:iot:TheRegion:AWSAccountNumber:topic/YourTopic/OptionalSubTopic/

The created policy for our AWS IoT Button, IoTButtonPolicy, looks as follows:
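The screenshot of the finished policy is not reproduced here, but a policy scoped as described, allowing only the iot:Publish action on the button's topic, would look roughly like this (the region, account ID, and serial number shown are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Publish",
      "Resource": "arn:aws:iot:us-east-1:123456789012:topic/iotbutton/G030JF051234ABCD"
    }
  ]
}
```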

The next step is to return to the AWS IoT console dashboard, select Security and then Certificates menu options.  I will choose the certificate created in the aforementioned steps.

Then on the selected certificate page, I will select the Actions dropdown on the far right top corner.  In order to add the IoTButtonPolicy IAM policy to the certificate, I will click the Attach policy option.


I will repeat all of the steps mentioned above but this time I will add the TEW-AWSIoTButton thing by selecting the Attach thing option.

All that is left is to add the certificate and private key to the physical AWS IoT button and connect the AWS IoT Button to Wi-Fi in order to have the IoT button be fully functional.

Important to note: for businesses that have signed up to participate in the AWS IoT Button Enterprise Program, all of the aforementioned steps (button logo branding, AWS IoT thing creation, certificate and key creation, and adding certificates to buttons) are completed for them by Amazon and AWS. Again, this is to help make it easier for enterprises to hit the ground running in the development of their desired AWS IoT Button solution.

Now, going back to the AWS IoT button used in our example, I will connect the button to Wi-Fi by holding the button until the LED blinks blue; this means that the device has gone into wireless access point (AP) mode.

In order to provide internet connectivity to the IoT button and start configuring the device’s connection to AWS IoT, I will connect to the button’s Wi-Fi network which should start with Button ConfigureMe. The first time the connection is made to the button’s Wi-Fi, a password will be required.  Enter the last 8 characters of the device serial number shown on the back of the physical AWS IoT button device.

The AWS IoT Button is now configured and ready to build a system around. The next step will be to add the actions that will be performed when the IoT Button is pressed. This brings us to the AWS IoT Rules engine, which is used to analyze the IoT device data payload coming from the MQTT topic stream and/or Device Shadow, and trigger AWS service actions. We will set up rules to perform varying actions when different types of button presses are detected.

Our AWS IoT Button solution will be a simple one: we will set up two AWS IoT rules to respond to the IoT Button being clicked and the button’s payload being sent to AWS IoT. In our scenario, a single button click will represent that a request is being sent by a customer to a fictional organization’s customer service agent. A double click, however, will represent that a text will be sent containing a customer’s fictional current account status.

The first AWS IoT rule will receive the IoT Button payload and connect directly to Amazon SNS to send an email, but only if the button click type is SINGLE. The second AWS IoT rule will invoke a Lambda function that sends a text message containing the customer account status, but only if the click type is DOUBLE.

In order to create the AWS IoT rule that will send an email to subscribers of an SNS topic requesting a customer service agent’s help, we will go to Amazon SNS and create an SNS topic.

I will create an email subscription to the topic with the fictional subscribed customer service email, which in this case is just my email address.  Of course, this could be several customer service representatives that are subscribed to the topic in order to receive emails for customer assistance requests.

Now returning to the AWS IoT console, I will select the Rules menu and choose the Create rule option. I first provide a name and description for the rule.

Next, I select the SQL version to be used for the AWS IoT rules engine. I select the latest SQL version; if I did not choose a version, the default version of 2015-10-08 would be used. The rules engine uses a SQL-like syntax with statements containing SELECT, FROM, and WHERE clauses. I want to return a literal string for the message, which is not a part of the IoT Button data payload, and I also want to return the button serial number as the accountnum, which likewise is not a part of the payload. Since the latest version, 2016-03-23, supports literal objects, I will be able to send a custom payload to Amazon SNS.
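A rule query along these lines might look like the following; the literal message text is illustrative, and the topic filter assumes the button's standard iotbutton/<serialNumber> topic:

```sql
SELECT 'Customer is requesting assistance from a service agent.' AS message,
       serialNumber AS accountnum
FROM 'iotbutton/+'
WHERE clickType = 'SINGLE'
```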

I have created the rule; all that is left is to add a rule action to perform when the rule is triggered. As I mentioned above, an email should be sent to customer service representatives when this rule is triggered by a single IoT Button press. Therefore, my rule action will be Send a message as an SNS push notification, targeting the SNS topic that I created in order to email our fictional customer service reps (aka me). Remember that an IAM role is required to provide access to SNS resources; if you are using the console, you have the option to create a new role or update an existing role to provide the correct permissions. Also, since I am sending a custom message to SNS, I select the RAW message format.

Our rule has been created; now all that is left is for us to test that an email is successfully sent when the AWS IoT Button is pressed once, and therefore the data payload has a click type of SINGLE.

A single press of our AWS IoT Button, and the custom message is published to the SNS topic, and the email shown below was sent to the subscribed customer service agents’ email addresses; in this example, to my email address.


To create the AWS IoT rule that will send a text via Lambda and an SNS topic when the IoT Button is pressed twice (the scenario in which customers request their account status), we will start by creating an AWS IoT rule with an AWS Lambda action. To create this IoT rule, we first need to create a Lambda function and the SNS topic with an SNS text-based subscription.

First, I will go to the Amazon SNS console and create an SNS topic. After the topic is created, I will create an SNS text subscription for our SNS topic and add a number that will receive the text messages. I will then copy the SNS topic ARN for use in my Lambda function. Please note that I am creating this SNS topic in a different region than the previously created SNS topic, in order to use a region with support for sending SMS via SNS. In the Lambda function, I will need to ensure the correct region for the SNS topic is used by including the region as a parameter of the constructor of the SNS object. The created SNS topic, aws-iot-button-topic-text, is shown below.


We will now go to the AWS Lambda console and create a Lambda function with an AWS IoT trigger, an IoT type of IoT Button, and a device serial number matching the serial number on the back of our AWS IoT Button. There is no need to generate the certificate and keys in this step because the AWS IoT Button is already configured with certificates and keys for secure communication with AWS IoT.

The next step is to create the Lambda function, IoTNotifyByText, with the following code, which will receive the IoT Button data payload and create a message to publish to Amazon SNS.

'use strict';

console.log('Loading function');
var AWS = require("aws-sdk");
var sns = new AWS.SNS({region: 'us-east-1'});

exports.handler = (event, context, callback) => {
    // Serialize the incoming IoT event for logging
    var iotPayload = JSON.stringify(event, null, 2);
    // Create a text message from the IoT payload
    var snsMessage = "Attention: Customer Info for Account #: " + event.accountnum +
        " Account Status: In Good Standing " +
        "Balance is: 1234.56";
    // Log the payload and SNS message string to the console and CloudWatch Logs
    console.log("Received AWS IoT payload:", iotPayload);
    console.log("Message to send: " + snsMessage);
    // Populate the parameters for the publish operation using the SNS topic
    // created for the AWS IoT Button
    // - Message : message text
    // - TopicArn : the ARN of the Amazon SNS topic
    var params = {
        Message: snsMessage,
        TopicArn: "arn:aws:sns:us-east-1:xxxxxxxxxxxx:aws-iot-button-topic-text"
    };
    sns.publish(params, context.done);
};

All that is left for us to do is to alter the AWS IoT rule automatically created when we created the Lambda function with an AWS IoT trigger. We will go to the AWS IoT console and select the Rules menu option, then find and select the IoT Button rule created by Lambda, which usually has a name suffixed with the IoT Button device serial number.


Once the rule is selected, we will choose the Edit option beside the Rule query statement section.

We change the SELECT statement to return the serial number as the accountnum and click the Update button to save changes to the AWS IoT rule.
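The updated rule query statement might look something like the following; the topic filter assumes the button's standard iotbutton/<serialNumber> topic, and the WHERE clause restricts the rule to double clicks as described above:

```sql
SELECT serialNumber AS accountnum
FROM 'iotbutton/+'
WHERE clickType = 'DOUBLE'
```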

Time to Test. I click the IoT button twice and wait for the green LED light to appear, confirming a successful connection was made and a message was published to AWS IoT. After a few seconds, a text message is received on my phone with the fictitious customer account information.


This was a simple example of how a business could leverage the AWS IoT Button to build solutions for their customers. With the new AWS IoT Button Enterprise Program, which helps businesses obtain the quantities of AWS IoT Buttons needed and provides AWS IoT service pre-provisioning and deployment support, businesses can now easily get started building their own customized IoT solutions.

Available Now

The original 1st generation AWS IoT Button is currently available on Amazon.com, and the 2nd generation AWS IoT Button will be generally available in February. The main difference between the IoT Buttons is the amount of battery life and number of clicks available for the button. Please note that right now, if you purchase the original AWS IoT Button, you will receive $20 in AWS credits when you register.

Businesses can sign up today for the AWS IoT Button Enterprise Program currently in Limited Preview. This program is designed to enable businesses to expand their existing applications or build new IoT capabilities with the cloud and a click of an IoT button device.  You can read more about the AWS IoT button and learn more about building solutions with a programmable IoT button on the AWS IoT Button product page.  You can also dive deeper into the AWS IoT service by visiting the AWS IoT developer guide, the AWS IoT Device SDK documentation, and/or the AWS Internet of Things Blog.



Harry Potter and the Real-life Weasley Clock

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/harry-potter-real-life-weasley-clock/

Pat Peters (such a wonderful Marvel-sounding name) recently shared his take on the Weasley Clock, a device that hangs on the wall of The Burrow, the rickety home inhabited by the Weasley family in the Harry Potter series.

Mrs. Weasley glanced at the grandfather clock in the corner. Harry liked this clock. It was completely useless if you wanted to know the time, but otherwise very informative. It had nine golden hands, and each of them was engraved with one of the Weasley family’s names. There were no numerals around the face, but descriptions of where each family member might be. “Home,” “school,” and “work” were there, but there was also “traveling,” “lost,” “hospital,” “prison,” and, in the position where the number twelve would be on a normal clock, “mortal peril.”

The clock in the movie has misplaced “mortal peril”, but aside from that it looks a lot like what we’d imagined from the books.

There’s a reason why more and more Harry Potter-themed builds are appearing online. The small size of devices such as the Raspberry Pi and Arduino allows a digital ‘brain’ to live within an ordinary object, giving you control over it that you could easily confuse with magic…if you allow yourself to believe in such things.

So with last week’s Real-life Daily Prophet doing so well, it’s only right to share another Harry Potter-inspired project.

Harry Potter Weasley Clock

The clock serves not to tell the time but, rather, to indicate the location of Molly, Arthur and the horde of Weasley children. And using the OwnTracks GPS app for smartphones, Pat’s clock does exactly the same thing.

Pat Peters Weasley Clock Raspberry Pi

Pat has posted the entire build on Instructables, allowing every budding witch and wizard (and possibly a curious Muggle or two) the chance to build their own Weasley Clock.

This location clock works through a Raspberry Pi that subscribes to an MQTT broker that our phones publish events to. Our phones (running the OwnTracks GPS app) send a message to the broker anytime we cross into or out of one of the waypoints that we have set up in OwnTracks, which then triggers the Raspberry Pi to run a servo that moves the clock hand to show our location.
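As a rough sketch of the logic Pat describes (not his actual code), the Pi's job boils down to mapping OwnTracks transition messages onto clock-hand angles. OwnTracks "transition" messages carry an event ("enter"/"leave") and the waypoint description ("desc"); the waypoint names and servo angles below are invented for illustration:

```javascript
'use strict';

// Clock-hand positions for one family member, in servo degrees.
// These waypoint names and angles are placeholders.
var LOCATIONS = {
    Home:   0,
    Work:   90,
    School: 180
};
var TRAVELING_ANGLE = 270;  // hand position when between waypoints

// Given one parsed OwnTracks MQTT message, return the angle to move
// the hand to, or null if the message is not a waypoint transition.
function handAngle(message) {
    if (message._type !== 'transition') return null;
    if (message.event === 'enter' && message.desc in LOCATIONS) {
        return LOCATIONS[message.desc];
    }
    // Leaving a waypoint (or entering an unknown one) reads as
    // "traveling" until the next enter event arrives.
    return TRAVELING_ANGLE;
}
```

The real build would feed this from an MQTT client subscription and drive the servo with the returned angle.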

There are no words for how much we love this. Here at Pi Towers we definitely have a soft spot for Harry Potter-themed builds, so make sure to share your own with us in the comments below, or across our social media channels on Facebook, Twitter, Instagram, YouTube and G+.

The post Harry Potter and the Real-life Weasley Clock appeared first on Raspberry Pi.

Derive Insights from IoT in Minutes using AWS IoT, Amazon Kinesis Firehose, Amazon Athena, and Amazon QuickSight

Post Syndicated from Ben Snively original https://aws.amazon.com/blogs/big-data/derive-insights-from-iot-in-minutes-using-aws-iot-amazon-kinesis-firehose-amazon-athena-and-amazon-quicksight/

Ben Snively is a Solutions Architect with AWS

Speed and agility are essential with today’s analytics tools. The quicker you can get from idea to first results, the more you can experiment and innovate with your data, perform ad-hoc analysis, and drive answers to new business questions.

Serverless architectures help in this respect by taking care of the non-differentiated heavy lifting in your environment―tasks such as managing servers, clusters, and device endpoints – allowing you to focus on assembling your IoT system, analyzing data, and building meaningful reports quickly and efficiently.

In this post, I show how you can build a business intelligence capability for streaming IoT device data using AWS serverless and managed services. You can be up and running in minutes―starting small, but able to easily grow to millions of devices and billions of messages.

AWS serverless services

AWS services offer a quicker time to intelligence. The following is a high-level diagram showing how the services in this post are configured:


AWS IoT easily and securely connects devices through the MQTT and HTTPS protocols. The IoT Rules Engine continuously processes incoming messages, enabling your devices to interact with other AWS services.

Amazon S3 is object storage that provides highly reliable, secure, and scalable storage for all your data, big or small. It is designed to deliver 99.999999999% durability, and to scale past trillions of objects.

Amazon Kinesis Firehose allows you to capture and automatically load streaming data into Amazon Kinesis Analytics, Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service, enabling near-real-time business intelligence and the use of dashboards.

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using SQL.  Simply point to your data in Amazon S3, define the schema, and start querying using standard SQL, with most results delivered in seconds.

Amazon QuickSight is a fast, cloud-powered business analytics service that makes it easy to build visualizations, perform ad-hoc analysis, and quickly get business insights from your data. You can easily run SQL queries using Athena on data stored in S3, and build business dashboards within QuickSight.

Walkthrough: Heartrate sensors

In this walkthrough, you run a script to mimic multiple sensors publishing messages on an IoT MQTT topic, with one message published every second. The events get sent to AWS IoT, where an IoT rule is configured.  The IoT rule captures all messages and sends them to Firehose.  From there, Firehose writes the messages in batches to objects stored in S3.  In S3, you set up a table in Athena and use QuickSight to analyze the IoT data.

Configuring Firehose to write to S3

Create a Firehose delivery stream using the following field values.

S3 Bucket: <Create one>
S3 Prefix: iot_to_bi_example/
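If you prefer to script the setup instead of using the console, the same delivery stream can be described for boto3's `create_delivery_stream` call. This is only a sketch: the stream name, bucket, and role ARNs are placeholders you would fill in.

```python
# Parameters for boto3.client("firehose").create_delivery_stream(**params).
# Stream name, bucket, and role ARNs below are illustrative placeholders.
params = {
    "DeliveryStreamName": "iot_to_bi_example",
    "S3DestinationConfiguration": {
        "RoleARN": "arn:aws:iam::<account>:role/firehose_delivery_role",
        "BucketARN": "arn:aws:s3:::<CREATED-BUCKET>",
        "Prefix": "iot_to_bi_example/",
        # Buffer up to 5 MB or 300 s before writing a batch object to S3.
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
    },
}
```

The buffering hints explain why objects appear in S3 in batches rather than one per message.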

Configuring the AWS IoT rule

Create a new AWS IoT rule with the following field values.

Topic Filter: /health/#
Add Action: Send messages to an Amazon Kinesis Firehose stream (from previous section).
Select Separator: "\n (newline)"

Generating sample data

Running the heartrate.py Python script generates fictitious IoT messages from multiple userId values.  The IoT rule sends each message to Firehose, which writes the data out to S3.

The script assumes that it has access to AWS CLI credentials and that boto3 is installed on the machine running the script.

Download and run the following Python script:
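The script itself isn't reproduced in this archive. As a rough stand-in, a minimal generator that produces messages of the same shape might look like the following; the name list and the publish call shown in the comment are assumptions, not the original script:

```python
import datetime
import random

NAMES = ["Brady", "Bailey", "Beatrice", "Ben"]  # illustrative; the real script uses more


def make_message(user_id):
    """Build one fictitious heart-rate reading in the shape shown below."""
    rate = random.randint(60, 120)
    return {
        "heartRate": rate,
        "userId": user_id,
        "rateType": "HIGH" if rate > 100 else "NORMAL",
        "dateTime": datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
    }

# Publishing one message per second to the health topic would use boto3, e.g.:
#   boto3.client("iot-data").publish(topic="/health/" + user_id, qos=0,
#                                    payload=json.dumps(make_message(user_id)))
```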


The script generates random data that looks like the following:

{"heartRate": 67, "userId": "Brady", "rateType": "NORMAL", "dateTime": "2016-12-18 14:01:39"}
{"heartRate": 78, "userId": "Bailey", "rateType": "NORMAL", "dateTime": "2016-12-18 14:01:41"}
{"heartRate": 94, "userId": "Beatrice", "rateType": "NORMAL", "dateTime": "2016-12-18 14:01:42"}
{"heartRate": 73, "userId": "Ben", "rateType": "NORMAL", "dateTime": "2016-12-18 14:01:54"}

Run the script for a few hours and then end the task or process.

Configuring Athena

Log into the Athena console. In the Query Editor, create a table using the following query:

CREATE EXTERNAL TABLE heartrate_iot_data (
    heartRate int,
    userId string,
    rateType string,
    dateTime timestamp)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
WITH SERDEPROPERTIES ( 'ignore.malformed.json' = 'true' )
LOCATION 's3://<CREATED-BUCKET>/iot_to_bi_example/';

After the query completes, you see the new table in the left-hand pane:


Run the following command to see the different userid values that were generated by the test script:

SELECT userid, COUNT(userid) FROM heartrate_iot_data GROUP BY userid

Your counts depend on how long you left the script to run, but you should have the following names listed:


Analyzing the data

Now, analyze this data using Athena via QuickSight.

Set up a data source

Log into QuickSight and choose Manage data, New data set. Choose Athena as a new data source.


Name your data source “AthenaDataSource”. Select the default schema and the heartrate_iot_data table.


On the left, choose Visualize.

NOTE: The Edit/Preview data button allows you to prepare the dataset differently prior to visualizing it.  Examples include being able to format and transform your data, create alias data fields, change data types, and filter or join your data.

Build an analysis

On the left, choose userid; you’ll see the same names and counts that you saw when you ran the Athena SQL query earlier.


Choose ratetype to query over the IoT data for the number of userids that have had HIGH versus NORMAL heartrates.


Focus on “HIGH” heart rates by interacting with the graph. Open one of the bars under the HIGH section and choose Focus only on HIGH.


This configures a filter to include only “HIGH” records.


On the left, choose Visualize. Clear ratetype (as you are now filtering by it), choose heartrate, and switch to AVERAGE instead of SUM.


Selecting the datetime X axis title allows you to change the time units easily. Change it to an hour.

You can easily change the parameters and see the users who haven’t had any HIGH heartrate data (in this dataset: Bonny, Brady, and Branden). 



With these few easy steps, you’ve assembled a fully managed set of services for collecting sensor data from devices, batching the data into S3, and building intelligence from that data using QuickSight and Athena.

This was all done without launching a single server or cluster; throughout, you’ve been able to focus on the data and analytics, rather than the management of the infrastructure.

In this example, you kept the data in JSON.  Converting it to Parquet would make queries much more performant, and setting up partitions for the data would also restrict the amount of data scanned.

If you have questions or suggestions, please comment below.

About the Author


Ben Snively is a Public Sector Specialist Solutions Architect. He works with government, non-profit, and education customers on big data and analytical projects, helping them build solutions using AWS. In his spare time he adds IoT sensors throughout his house and runs analytics on them.






AWS Greengrass – Ubiquitous, Real-World Computing

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-greengrass-ubiquitous-real-world-computing/

Computing and data processing within the confines of a data center or office is easy. You can generally count on good connectivity and a steady supply of electricity, and you have access to as much on-premises or cloud-based storage and compute power as you need. The situation is much different out in the real world. Connectivity can be intermittent, unreliable, and limited in speed and scale. Power is at a premium, putting limits on how much storage and compute power can be brought in to play.

Lots of interesting and potentially valuable data is present out in the field, if only it could be collected, processed, and turned into actionable intelligence. This data could be located miles below the surface of the Earth in a mine or an oil well, in a sensitive & safety-critical location like a hospital or a factory, or even on another planet (hello, Curiosity).

Our customers are asking for a way to use the scale and power of the AWS Cloud to help them to do local processing under these trying conditions. First, they want to build systems that measure, sense, and act upon the data locally. Then they want to bring cloud-like, local intelligence to bear on the data, implementing local actions that are interdependent and coordinated. To make this even more challenging, they want to make use of any available local processing and storage resources, while also connecting to specialized sensors and peripherals.

Introducing AWS Greengrass
I’d like to tell you about AWS Greengrass. This new service is designed to allow you to address the challenges that I outlined above by extending the AWS programming model to small, simple, field-based devices.

Greengrass builds on AWS IoT and AWS Lambda, and can also access other AWS services. It is built for offline operation and greatly simplifies the implementation of local processing. Code running in the field can collect, filter, and aggregate freshly collected data and then push it up to the cloud for long-term storage and further aggregation. Further, code running in the field can also take action very quickly, even in cases where connectivity to the cloud is temporarily unavailable.
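For example, a field device might buffer readings while offline, collapse them into a small summary, and forward only the summary when connectivity returns. A minimal Python sketch of that pattern (the record fields are hypothetical, not a Greengrass API):

```python
def summarize(readings):
    """Collapse raw local readings into one small record to push upstream."""
    values = [r["value"] for r in readings]
    return {
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "avg": sum(values) / len(values),
    }


def drain(buffer, connected):
    """Send the summary if the cloud is reachable; otherwise keep buffering."""
    if connected and buffer:
        summary = summarize(buffer)
        buffer.clear()
        return summary
    return None  # offline or nothing buffered: hold the data locally
```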

If you are already developing embedded systems for small devices, you will now be able to make use of modern, cloud-aware development tools and workflows. You can write and test your code in the cloud and then deploy it locally. You can write Python code that responds to device events and you can make use of MQTT-based pub/sub messaging for communication.

Greengrass has two constituent parts, the Greengrass Core (GGC) and the IoT Device SDK. Both of these components run on your own hardware, out in the field.

Greengrass Core is designed to run on devices that have at least 128 MB of memory and an x86 or ARM CPU running at 1 GHz or better, and can take advantage of additional resources if available. It runs Lambda functions locally, interacts with the AWS Cloud, manages security & authentication, and communicates with the other devices under its purview.

The IoT Device SDK is used to build the applications that run on the devices that connect to the device that hosts the core (generally via a LAN or other local connection). These applications will capture data from sensors, subscribe to MQTT topics, and use AWS IoT device shadows to store and retrieve state information.
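Device shadow updates are plain JSON documents published to the thing's shadow topic. A sketch of how an application might build the reported-state payload (the field names in the example state are illustrative):

```python
import json


def shadow_update(reported):
    """Wrap reported state in the AWS IoT device shadow document format."""
    return json.dumps({"state": {"reported": reported}})

# The document would be published to: $aws/things/<thingName>/shadow/update
```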

Using AWS Greengrass
You will be able to set up and manage many aspects of Greengrass through the AWS Management Console, the AWS APIs, and the AWS Command Line Interface (CLI).

You will be able to register new hub devices, configure the desired set of Lambda functions, and create a deployment package for delivery to the device. From there, you will associate the lightweight devices with the hub.

Now in Preview
We are launching a limited preview of AWS Greengrass today and you can sign up now if you would like to participate.

Each AWS customer will be able to use up to 3 devices for one year at no charge. Beyond that, the monthly cost for each active Greengrass Core is $0.16 ($1.49 per year) for up to 10,000 devices.



Internet of Things Pregnancy Test

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/iot-pregnancy-test/

“The project idea came from this tongue-in-cheek Twitter post. But hey, why not try to make one? I read somewhere that the world needs more ridiculous smart devices”. These are the words of Eric Tsai, a maker of connected devices and home automation, on his website etsai.net. And while this website boasts many interesting and useful projects (plus a watermelon sea monster), it was his tongue-in-cheek ‘Too Much IoT’ project on hackster.io that had us chuckling. The aforementioned Twitter post?

Eric Tsai on Twitter

IoT pregnancy test: connects to phone via BLE, instantly tweets test results. Also text msg result to contacts name “mom”.

Eric did indeed create an IoT pregnancy test: it’s a Bluetooth-enabled digital pregnancy test that sends data to a Raspberry Pi which, in turn, sends a tweet of the result (as well as a special text message to your Mum).

To start his project, Eric turned to an unsuspecting customer service representative, quizzing her on the composition of a digital pregnancy test:

Does your test have an ON button? What kind of battery does it use? Do the different characters on the LCD overlap, or are they separated? Well, I mean, is “Pregnant” the same as “Not Pregnant”, but just without the “Not”? You need a serial number? No, I don’t have a serial number, I haven’t purchased it yet. Oh, well… I like to be prepared.

And after that, a search of YouTube provided the information he needed in order to hack the test.

IoT Pregnancy Test

After working his way through the internals of the test, the LCD pins, and the energy consumption, Eric was able to hijack the correct components and solder new wiring to gain control of the ‘not’ and ‘pregnant’ portions of the display, along with the clock function and, obviously, the ground.

IoT Pregnancy Test

Eric added header pins to the test, allowing him to connect a Light Blue Bean, which in turn provided Bluetooth and I/O functionality. He explains:

The Light Blue Bean provides the real world I/O and Bluetooth connectivity. The digital I/O monitors the “Not” and “Pregnant” pins on the LCD and compares them to the “clock”. When they don’t match, it means the corresponding icon is being displayed.
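On a multiplexed LCD like this, a segment is lit when its pin is driven out of phase with the common "clock" signal, which is why the Bean compares each segment pin to the clock. A hedged Python sketch of that decode logic (pin names and return strings are illustrative):

```python
def icon_shown(segment_level, clock_level):
    # A segment is visible when its drive signal is out of phase
    # with the LCD's common/backplane ("clock") signal.
    return segment_level != clock_level


def read_result(not_level, pregnant_level, clock_level):
    """Decode the three sampled pin levels into a test result."""
    if not icon_shown(pregnant_level, clock_level):
        return "no result yet"
    # "Pregnant" is lit; "Not" in front of it flips the meaning.
    return "not pregnant" if icon_shown(not_level, clock_level) else "pregnant"
```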

The Raspberry Pi runs Node-RED to receive data from the Bean. Using Twilio, the Pi can tweet, text, or email the information to any predetermined recipient, advising them of the test results.

IoT Pregnancy Test

As a home automation pro, Eric already had several lights in his house set up via MQTT and OpenHAB, so it wasn’t hard for him to incorporate them into the project, triggering them to light up to indicate a positive result.

Though the project was originally started as a joke, it’s clear to see that Eric enjoyed the process and learnt from the experience. And if that wasn’t enough, he also used the hackster.io project page as a means of announcing the news of his own impending bundle of joy. So congratulations to Eric and his family, and thank you for this brilliant project!

Too Much IoT on Twitter

Pregnant! Yay!

Connected Pregnancy Test – Long Version

A pregnancy test that tweets and text messages test results. https://www.hackster.io/projects/15076/



The post Internet of Things Pregnancy Test appeared first on Raspberry Pi.

Implementing a Serverless AWS IoT Backend with AWS Lambda and Amazon DynamoDB

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/implementing-a-serverless-aws-iot-backend-with-aws-lambda-and-amazon-dynamodb/

Ed Lima
Cloud Support Engineer

Does your IoT device fleet scale to hundreds or thousands of devices? Do you find it somewhat challenging to retrieve the details for multiple devices? AWS IoT provides a platform to connect those devices and build a scalable solution for your Internet of Things workloads.

Out of the box, the AWS IoT console gives you your own searchable device registry with access to the device state and information about device shadows. You can enhance and customize the service using AWS Lambda and Amazon DynamoDB to build a serverless backend with a customizable device database that can be used to store useful information about the devices as well as helping to track what devices are activated with an activation code, if required.

You can use DynamoDB to extend the AWS IoT internal device registry to help manage the device fleet, as well as storing specific additional data about each device. Lambda provides the link between AWS IoT and DynamoDB allowing you to add, update, and query your new device database backend.

In this post, you learn how to use AWS IoT rules to trigger specific device registration logic using Lambda in order to populate a DynamoDB table. You then use a second Lambda function to search the database for a specific device serial number and a randomly generated activation code to activate the device and register the email of the device owner in the same table. After you’re done, you’ll have a fully functional serverless IoT backend, allowing you to focus on your own IoT solution and logic instead of managing the infrastructure to do so.


You must have the following before you can create and deploy this framework:

  • An AWS account
  • An IAM user with permissions to create AWS resources (AWS IoT things and rules, Lambda functions, DynamoDB tables, IAM policies and roles, etc.)
  • Node.js and the AWS SDK for JavaScript installed locally to test the deployment

Building a backend

In this post, I assume that you have some basic knowledge about the services involved. If not, you can review the documentation:

For this use case, imagine that you have a fleet of devices called “myThing”. These devices can be anything: a smart lightbulb, smart hub, Internet-connected robot, music player, smart thermostat, or anything with specific sensors that can be managed using AWS IoT.

When you create a myThing device, there is some specific information that you want to be available in your database, namely:

  • Client ID
  • Serial number
  • Activation code
  • Activation status
  • Device name
  • Device type
  • Owner email
  • AWS IoT endpoint

The following is a sample payload with details of a single myThing device to be sent to a specific MQTT topic, which triggers an IoT rule. The data is in a format that AWS IoT can understand, good old JSON. For example:

{
  "clientId": "ID-91B2F06B3F05",
  "serialNumber": "SN-D7F3C8947867",
  "activationCode": "AC-9BE75CD0F1543D44C9AB",
  "activated": "false",
  "device": "myThing1",
  "type": "MySmartIoTDevice",
  "email": "[email protected]",
  "endpoint": "<endpoint prefix>.iot.<region>.amazonaws.com"
}

The rule then invokes the first Lambda function, which you create now. Open the Lambda console, choose Create a Lambda function, and follow the steps. Here’s the code:

console.log('Loading function');
var AWS = require('aws-sdk');
var dynamo = new AWS.DynamoDB.DocumentClient();
var table = "iotCatalog";

exports.handler = function(event, context) {
    //console.log('Received event:', JSON.stringify(event, null, 2));
    var params = {
        TableName: table,
        Item: {
            "serialNumber": event.serialNumber,
            "clientId": event.clientId,
            "device": event.device,
            "endpoint": event.endpoint,
            "type": event.type,
            "certificateId": event.certificateId,
            "activationCode": event.activationCode,
            "activated": event.activated,
            "email": event.email
        }
    };

    console.log("Adding a new IoT device...");
    dynamo.put(params, function(err, data) {
        if (err) {
            console.error("Unable to add device. Error JSON:", JSON.stringify(err, null, 2));
            context.fail();
        } else {
            console.log("Added device:", JSON.stringify(data, null, 2));
            context.succeed();
        }
    });
};
The function adds an item to a DynamoDB table called iotCatalog based on events like the JSON data provided earlier. You now need to create the table, as well as make sure the Lambda function has permissions to add items to it, by configuring the function with the appropriate execution role.

Open the DynamoDB console, choose Create table and follow the steps. For this table, use the following details.

The serial number uniquely identifies your device; if, for instance, it is a smart hub that has different client devices connecting to it, use the client ID as the sort key.
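Equivalently, the key schema can be described in code; a boto3-style sketch (the throughput values are placeholders, and the attribute names follow the post):

```python
# Parameters for boto3.client("dynamodb").create_table(**table_spec).
table_spec = {
    "TableName": "iotCatalog",
    "KeySchema": [
        {"AttributeName": "serialNumber", "KeyType": "HASH"},  # partition key
        {"AttributeName": "clientId", "KeyType": "RANGE"},     # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "serialNumber", "AttributeType": "S"},
        {"AttributeName": "clientId", "AttributeType": "S"},
    ],
    # Placeholder capacity values; size these for your own workload.
    "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
}
```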

The backend is good to go! You just need to make the new resources work together; for that, you configure an IoT rule to do so.

On the AWS IoT console, choose Create a resource and Create a rule, and use the following settings to point the rule to your newly-created Lambda function, also called iotCatalog.

After creating the rule, AWS IoT adds permissions in the background to allow it to trigger the Lambda function whenever a message is published to the MQTT topic called registration. You can use the following Node.js code to test:

var AWS = require('aws-sdk');
AWS.config.region = 'ap-northeast-1';

var crypto = require('crypto');
var endpoint = "<endpoint prefix>.iot.<region>.amazonaws.com";
var iot = new AWS.Iot();
var iotdata = new AWS.IotData({endpoint: endpoint});
var topic = "registration";
var type = "MySmartIoTDevice";

//Create 50 AWS IoT Things
for(var i = 1; i < 51; i++) {
  var serialNumber = "SN-"+crypto.randomBytes(Math.ceil(12/2)).toString('hex').slice(0,15).toUpperCase();
  var clientId = "ID-"+crypto.randomBytes(Math.ceil(12/2)).toString('hex').slice(0,12).toUpperCase();
  var activationCode = "AC-"+crypto.randomBytes(Math.ceil(20/2)).toString('hex').slice(0,20).toUpperCase();
  var thing = "myThing"+i.toString();
  var thingParams = {
    thingName: thing
  };

  iot.createThing(thingParams).on('success', function(response) {
    //Thing Created!
  }).on('error', function(response) {
    console.log(response);
  }).send();

  //Publish JSON to Registration Topic
  var registrationData = '{\n \"serialNumber\": \"'+serialNumber+'\",\n \"clientId\": \"'+clientId+'\",\n \"device\": \"'+thing+'\",\n \"endpoint\": \"'+endpoint+'\",\n \"type\": \"'+type+'\",\n \"activationCode\": \"'+activationCode+'\",\n \"activated\": \"false\",\n \"email\": \"[email protected]\" \n}';

  var registrationParams = {
    topic: topic,
    payload: registrationData,
    qos: 0
  };

  iotdata.publish(registrationParams, function(err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    // else Published Successfully!
  });
}

//Checking all devices were created
iot.listThings().on('success', function(response) {
  var things = response.data.things;
  var myThings = [];
  for(var i = 0; i < things.length; i++) {
    if (things[i].thingName.includes("myThing")){
      myThings.push(things[i].thingName);
    }
  }

  if (myThings.length == 50){
    console.log("myThing1 to 50 created and registered!");
  }
}).on('error', function(response) {
  console.log(response);
}).send();

console.log("Registration data on the way to Lambda and DynamoDB");

The code above creates 50 IoT things in AWS IoT and generates random client IDs, serial numbers, and activation codes for each device. It then publishes the device data as a JSON payload to the IoT topic accordingly, which in turn triggers the Lambda function:

And here it is! The function was triggered successfully by your IoT rule and created your database of IoT devices with all the custom information you need. You can query the database to find your things and any other details related to them.

In the AWS IoT console, the newly-created things are also available in the thing registry.

Now you can create certificates and policies, attach them to each “myThing” AWS IoT thing, and then install each certificate as you provision the physical devices.

Activation and registration logic

However, you’re not done yet. What if you want to activate a device in the field with the pre-generated activation code, as well as register the email details of whoever activated the device?

You need a second Lambda function for that, with the same execution role from the first function (Basic with DynamoDB). Here’s the code:

console.log('Loading function');

var AWS = require('aws-sdk');
var dynamo = new AWS.DynamoDB.DocumentClient();
var table = "iotCatalog";

exports.handler = function(event, context) {
    //console.log('Received event:', JSON.stringify(event, null, 2));

    var params = {
        TableName: table,
        Key: {
            "serialNumber": event.serialNumber,
            "clientId": event.clientId
        }
    };

    console.log("Getting IoT device details...");
    dynamo.get(params, function(err, data) {
        if (err) {
            console.error("Unable to get device details. Error JSON:", JSON.stringify(err, null, 2));
            context.fail();
        } else {
            console.log("Device data:", JSON.stringify(data, null, 2));
            if (data.Item.activationCode == event.activationCode){
                console.log("Valid Activation Code! Proceed to register owner e-mail and update activation status");
                var params = {
                    TableName: table,
                    Key: {
                        "serialNumber": event.serialNumber,
                        "clientId": event.clientId
                    },
                    UpdateExpression: "set email = :val1, activated = :val2",
                    ExpressionAttributeValues: {
                        ":val1": event.email,
                        ":val2": "true"
                    }
                };
                dynamo.update(params, function(err, data) {
                    if (err) {
                        console.error("Unable to update item. Error JSON:", JSON.stringify(err, null, 2));
                        context.fail();
                    } else {
                        console.log("Device now active!", JSON.stringify(data, null, 2));
                        context.succeed("Device now active! Your e-mail is now registered as device owner, thank you for activating your Smart IoT Device!");
                    }
                });
            } else {
                context.fail("Activation Code Invalid");
            }
        }
    });
};

The function needs just a small subset of the data used earlier:

{
  "clientId": "ID-91B2F06B3F05",
  "serialNumber": "SN-D7F3C8947867",
  "activationCode": "AC-9BE75CD0F1543D44C9AB",
  "email": "[email protected]"
}

Lambda uses the hash and range keys (serialNumber and clientId) to query the database and compare the database current pre-generated activation code to a code that is supplied by the device owner along with their email address. If the activation code matches the one from the database, the activation status and email details are updated in DynamoDB accordingly. If not, the user gets an error message stating that the code is invalid.

You can turn it into an API with Amazon API Gateway. In order to do so, go to the Lambda function and add an API endpoint, as follows.

Now test the access to the newly-created API endpoint, using a tool such as Postman.

If an invalid code is provided, the requester gets an error message accordingly.

Back in the database, you can confirm the record was updated as required.


After you finish the tutorial, delete all the newly created resources (IoT things, Lambda functions, and DynamoDB table). Alternatively, you can keep the Lambda function code for future reference, as you won’t incur charges unless the functions are invoked.


As you can see, by leveraging the power of the AWS IoT Rules Engine, you can take advantage of the seamless integration with AWS Lambda to create a flexible and scalable IoT backend powered by Amazon DynamoDB that can be used to manage your growing Internet of Things fleet.

You can also configure an activation API to make use of the newly-created backend and activate devices as well as register email contact details from the device owner; this information could be used to get in touch with your users regarding marketing campaigns or newsletters about new products or new versions of your IoT products.

If you have questions or suggestions, please comment below.

New – Just-in-Time Certificate Registration for AWS IoT

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-just-in-time-certificate-registration-for-aws-iot/

We launched AWS IoT at re:Invent (read AWS IoT – Cloud Services for Connected Devices for an introduction) and made it generally available last December.

Earlier this year my colleague Olawale Oladehin showed you how to Use Your Own Certificate with AWS IoT. Before that, John Renshaw talked about Predictive Maintenance with AWS IoT and Amazon Machine Learning.

Just-in-Time Registration
Today we are making AWS IoT even more flexible by giving you the ability to do Just-in-Time registration of device certificates. This expands on the feature described by Olawale, and simplifies the process of building systems that make use of millions of connected devices. Instead of having to build a separate database to track the certificates and the associated devices, you can now arrange to automatically register new certificates as part of the initial communication between the device and AWS IoT.

In order to do this, you start with a CA (Certificate Authority) certificate that you later use to sign the per-device certificates (this is a great example of the chain of trust model that is fundamental to the use of digital certificates).

Putting this new feature to use is pretty easy, but you do have to take care of some important details. Here are the principal steps:

  1. Register & activate the CA certificate that will sign the other certificates.
  2. Use the certificate to generate and sign certificates for each device.
  3. Have the device present the certificate to AWS IoT and then activate it.

The final step can be implemented using an AWS Lambda function. The function simply listens on a designated MQTT topic using an AWS IoT Rules Engine action. A notification will be sent to the topic each time a new certificate is presented to AWS IoT. The function can then activate the device certificate and take care of any other initialization or registration required by your application.
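A sketch of such a function in Python follows. The event field name and response shape are assumptions about the registration notification's payload, and the `iot_client` parameter exists only to make the sketch testable; the linked blog post has the authoritative version.

```python
def handler(event, context, iot_client=None):
    """Activate a just-registered device certificate.

    Assumes the registration notification carries the new certificate's ID
    in a 'certificateId' field (an assumption for this sketch).
    """
    if iot_client is None:
        import boto3  # only needed when actually running in Lambda
        iot_client = boto3.client("iot")

    certificate_id = event["certificateId"]
    # Flip the certificate from PENDING_ACTIVATION to ACTIVE.
    iot_client.update_certificate(certificateId=certificate_id,
                                  newStatus="ACTIVE")
    # ...any application-specific initialization would happen here...
    return {"certificateId": certificate_id, "status": "ACTIVE"}
```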

Learn More
To learn more about this important new feature and to review all of the necessary steps in detail, read about Just in Time Registration of Device Certificates on AWS IoT on The Internet of Things on AWS Blog.