Tag Archives: Internet of Things

IoT gets a machine learning boost, from edge to cloud

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/iot-gets-a-machine-learning-boost-from-edge-to-cloud/

Today, it’s easy to run Edge Impulse machine learning on any operating system, like Raspberry Pi OS, and on every cloud, like Microsoft’s Azure IoT. Evan Rust, Technology Ambassador for Edge Impulse, walks us through it.

Building enterprise-grade IoT solutions takes a lot of practical effort and a healthy dose of imagination. As a foundation, you start with a highly secure and reliable communication channel between your IoT application and the devices it manages. We picked our favorite integration, the Microsoft Azure IoT Hub, which provides us with a cloud-hosted solution backend to connect virtually any device. For our hardware, we selected the ubiquitous Raspberry Pi 4, and of course Edge Impulse, which connects to both platforms and extends our showcased solution from cloud to edge, including device authentication, out-of-the-box device management, and model provisioning.

From edge to cloud – getting started 

Edge machine learning devices fall into two categories: some are only able to run very simple models locally, while others are more capable, with the processing power and cloud connectivity to do much more. The second group is often expensive to develop and maintain, as training and deploying models can be an arduous process. That’s where Edge Impulse comes in to simplify the pipeline: data can be gathered remotely, used effortlessly to train models, downloaded to the devices directly from the Azure IoT Hub, and then run – fast.

This reference project will serve as a guide for quickly getting started with Edge Impulse on Raspberry Pi 4 and Azure IoT, to train a model that detects lug nuts on a wheel and sends alerts to the cloud.

Setting up the hardware

Raspberry Pi 4 forms the base for the Edge Impulse machine learning setup

To begin, you’ll need a Raspberry Pi 4 with an up-to-date Raspberry Pi OS image which can be found here. After flashing this image to an SD card and adding a file named wpa_supplicant.conf

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=<Insert 2 letter ISO 3166-1 country code here>

network={
	ssid="<Name of your wireless LAN>"
	psk="<Password for your wireless LAN>"
}

along with an empty file named ssh (both within the /boot directory), you can go ahead and power up the board. Once you’ve successfully SSH’d into the device with 

$ ssh pi@<IP_ADDRESS>

and the password raspberry, it’s time to install the dependencies for the Edge Impulse Linux SDK. Simply run the next three commands to set up the Node.js environment and everything else that’s required for the edge-impulse-linux wizard:

$ curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
$ sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps
$ npm config set user root && sudo npm install edge-impulse-linux -g --unsafe-perm

Since this project deals with images, we’ll need some way to capture them. The wizard supports both the Pi Camera modules and standard USB webcams, so make sure to enable the camera module first with 

$ sudo raspi-config

if you plan on using one. With that completed, go to the Edge Impulse Studio and create a new project, then run the wizard with 

$ edge-impulse-linux

and make sure your device appears within the Edge Impulse Studio’s device section after logging in and selecting your project.

Capturing your data

Training accurate machine learning models requires feeding in plenty of varied data, which means a lot of images are needed. For this use case, I captured around 50 images of a wheel that had lug nuts on it. After I was done, I headed to the Labeling queue on the Data Acquisition page and added bounding boxes around each lug nut within every image, along with every wheel.

To add some test data, I went back to the main Dashboard page and clicked the Rebalance dataset button, which moves 20% of the training data to the test data bin. 

Training your models

Now that we have plenty of training data, it’s time to do something with it, namely train a model. The first block in the impulse is an Image Data block, which scales each image to a size of 320 by 320 pixels. Next, the image data is fed to the Image processing block, which takes the raw RGB data and derives features from it.

Finally, these features are sent to the Transfer Learning Object Detection model, which learns to recognize the objects. I set my model to train for 30 cycles at a learning rate of 0.15, but this can be adjusted to fine-tune the accuracy.

As you can see from the screenshot below, the model I trained was able to achieve an initial accuracy of 35.4%, but after some fine-tuning, it was able to correctly recognize objects at an accuracy of 73.5%.

Testing and deploying your models

To verify that the model works correctly in the real world, we’ll need to deploy it to our Raspberry Pi 4. This is a simple task thanks to the Edge Impulse CLI, as all we have to do is run 

$ edge-impulse-linux-runner

which downloads the model and creates a local webserver. From here, we can open a browser tab and visit the address listed after we run the command to see a live camera feed and any objects that are currently detected. 

Integrating your models with Microsoft Azure IoT 

With the model working locally on the device, let’s add an integration with an Azure IoT Hub that will allow our Raspberry Pi to send messages to the cloud. First, make sure you’ve installed the Azure CLI and have signed in using az login. Then get the name of the resource group you’ll be using for the project. If you don’t have one, you can follow this guide on how to create a new resource group. After that, return to the terminal and run the following commands to create a new IoT Hub and register a new device ID:

$ az iot hub create --resource-group <your resource group> --name <your IoT Hub name>
$ az extension add --name azure-iot
$ az iot hub device-identity create --hub-name <your IoT Hub name> --device-id <your device id>

Retrieve the connection string with 

$ az iot hub device-identity connection-string show --device-id <your device id> --hub-name <your IoT Hub name>

and set it as an environment variable with 

$ export IOTHUB_DEVICE_CONNECTION_STRING="<your connection string here>" 

in your Raspberry Pi’s SSH session, as well as 

$ pip install azure-iot-device

to add the necessary libraries. (Note: if you do not set the environment variable or pass it in as an argument, the program will not work!) The connection string contains the information required for the device to establish a connection with the IoT Hub service and communicate with it. You can then monitor output in the Hub with 

$ az iot hub monitor-events --hub-name <your IoT Hub name> --output table

 or in the Azure Portal.

To confirm everything works, download and run this example and check that you can see the test message. For the second half of deployment, we’ll need a way to customize how our model is used within the code. Thankfully, Edge Impulse provides a Python SDK for this purpose. Install it with 

$ sudo apt-get install libatlas-base-dev libportaudio0 libportaudio2 libportaudiocpp0 portaudio19-dev
$ pip3 install edge_impulse_linux -i https://pypi.python.org/simple

There’s some simple code that can be found here on GitHub, and it works by setting up a connection to the Azure IoT Hub and then running the model.
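
For orientation, here is a minimal Python sketch of that approach (not the exact code from the repo): the Edge Impulse Linux SDK’s image runner classifies camera frames, and the Azure IoT Hub device client sends an alert when the lug nut count falls short. The label names ("lug_nut", "wheel"), the camera ID, and the alert payload are assumptions for illustration.

# Minimal sketch: count lug nuts with the Edge Impulse image runner and alert Azure
# IoT Hub when the count falls short. Label names, camera ID 0, and the alert payload
# format are assumptions, not the exact repository code.
import json
import os
import sys

from azure.iot.device import IoTHubDeviceClient, Message
from edge_impulse_linux.image import ImageImpulseRunner

EXPECTED = int(sys.argv[2]) if len(sys.argv) > 2 else 5   # expected lug nut count
client = IoTHubDeviceClient.create_from_connection_string(
    os.environ["IOTHUB_DEVICE_CONNECTION_STRING"])

with ImageImpulseRunner(sys.argv[1]) as runner:           # e.g. ./modelfile.eim
    runner.init()
    for res, img in runner.classifier(0):                 # frames from camera 0
        boxes = res["result"].get("bounding_boxes", [])
        lug_nuts = [b for b in boxes if b["label"] == "lug_nut"]
        wheel_seen = any(b["label"] == "wheel" for b in boxes)
        if wheel_seen and len(lug_nuts) < EXPECTED:
            # Only send a message when something is wrong, to save bandwidth
            client.send_message(Message(json.dumps(
                {"alert": "missing_lug_nut", "count": len(lug_nuts)})))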

Once you’ve either downloaded the zip file or cloned the repo into a folder, get the model file by running

$ edge-impulse-linux-runner --download modelfile.eim

inside of the folder you just created from the cloning process. This will download a file called modelfile.eim. Now, run the Python program with 

$ python lug_nut_counter.py ./modelfile.eim -c <LUG_NUT_COUNT>

where <LUG_NUT_COUNT> is the correct number of lug nuts that should be attached to the wheel (you might have to use python3 if both Python 2 and 3 are installed).

Now, whenever a wheel is detected, the number of lug nuts is counted. If this number falls short of the target, a message is sent to the Azure IoT Hub.

By sending messages only when something is wrong, we avoid wasting bandwidth on empty payloads.

The possibilities are endless

Imagine using object detection for an industrial task such as quality control on an assembly line, identifying ripe fruit amongst rows of crops, detecting machinery malfunctions, or running inference on remote, battery-powered devices. Between Edge Impulse, hardware like Raspberry Pi, and the Microsoft Azure IoT Hub, you can design endless models and deploy them on every device, while authenticating each and every device with built-in security.

You can set up individual identities and credentials for each of your connected devices to help retain the confidentiality of both cloud-to-device and device-to-cloud messages, revoke access rights for specific devices, transmit code and services between the cloud and the edge, and benefit from advanced analytics on devices running offline or with intermittent connectivity. And if you’re really looking to scale your operation and enjoy a complete dashboard view of the device fleets you manage, it is also possible to receive IoT alerts in Microsoft’s Connected Field Service from Azure IoT Central – directly.

Feel free to take the code for this project hosted here on GitHub and create a fork or add to it.

The complete project is available here. Let us know your thoughts at [email protected]. There are no limits, just your imagination at work.

The post IoT gets a machine learning boost, from edge to cloud appeared first on Raspberry Pi.

Building a Data Pipeline for Tracking Sporting Events Using AWS Services

Post Syndicated from Ashwini Rudra original https://aws.amazon.com/blogs/architecture/building-a-data-pipeline-for-tracking-sporting-events-using-aws-services/

The world is becoming increasingly connected, data-centric, and fast-paced, and the sports industry is no exception. Amazon Web Services (AWS) has been helping customers in the sports industry gain real-time insights through analytics. You can re-invent and reimagine the fan experience by tracking sports actions and activities. In this blog post, we will highlight common architectural and design patterns for building a data pipeline to track sporting events in real time.

The sports industry largely comprises two subsegments: participatory and spectator sports. Participatory sports, such as fitness, golf, boating, and skiing, make up the largest share of the market. Spectator sports, such as teams/clubs/leagues, individual sports, and racing, are expected to be the fastest growing segment. Sports teams/leagues/clubs make up the largest share of the spectator sports segment and are growing most rapidly.

IoT data pipeline architecture overview

Let’s discuss the infrastructure in three parts:

  1. Infrastructure at the arena itself
  2. Processing data using AWS services
  3. Leveraging this analysis using a graphics overlay (this can be especially useful for broadcasters, OTT channels, and arena users)

Data-gathering devices

Radio-frequency identification (RFID) chips or IoT devices can be worn by players or embedded in the playing equipment. These devices emit 20–50 messages per second, which are collected and output as JSON. This information may include player coordinate positions, player speed, statistics, health information, and more. Leagues, coaches, or broadcasters can analyze this data using analytics tools and/or machine learning.

Figure 1. Data pipeline architecture using AWS Services

Processing data, feature engineering, and model training at AWS

Use serverless services from AWS when possible in order to keep your solution scalable and cost-efficient. This also reduces operational overhead for teams. You can use the Kinesis family of services for stream ingestion and processing. The streaming data from hundreds to thousands of IoT sources (from equipment and clothing) can be fed to Amazon Kinesis Data Streams (KDS). KDS and Amazon Kinesis Data Firehose provide a buffering mechanism for streaming data before it lands on Amazon Simple Storage Service (Amazon S3). With Amazon Kinesis Data Analytics, you can process and analyze Kinesis stream data using SQL, Apache Flink, or Apache Beam. Kinesis Data Analytics also supports building applications in SQL, Java, Scala, and Python. With this service, you can quickly author and run powerful SQL code against Amazon Kinesis Streams as your source. This way you can perform time series analytics, feed real-time dashboards, and create real-time metrics. Read more about Amazon Kinesis Data Analytics for SQL Applications.

You might want to transform or enhance the streaming data before it is delivered to Amazon S3. Amazon Kinesis Data Firehose can be used with an AWS Lambda function to do the transformation. Let’s say you have a player prediction timestamp that you want to represent in a different time format for different ML algorithms. Lambda can process and transform this data. Kinesis Data Firehose then delivers the transformed and raw data to the destination (Amazon S3). This occurs once the specified buffering size or buffering interval is reached, whichever happens first.
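
As a rough illustration, a Firehose transformation Lambda in Python follows this contract: decode each record, modify the JSON, and return it base64-encoded with a result status. The field name deviceTimestamp and the ISO-8601 target format are only examples, not part of the reference architecture.

# Sketch of a Kinesis Data Firehose transformation Lambda. The incoming field name
# "deviceTimestamp" and the ISO-8601 target format are illustrative assumptions.
import base64
import json
from datetime import datetime, timezone

def handler(event, context):
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # Example transformation: add an ISO-8601 copy of an epoch-millis timestamp
        ts = payload.get("deviceTimestamp")
        if ts is not None:
            payload["deviceTimestampIso"] = datetime.fromtimestamp(
                ts / 1000, tz=timezone.utc).isoformat()
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {"records": output}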

For more complex transformations, AWS Glue can be used. For example, once the data lands in Amazon S3, you can start preparing and aggregating the training dataset using Amazon SageMaker Data Wrangler. As part of the feature engineering process, you can do the following:

  • Transform the data
  • Delete unneeded columns
  • Impute missing values
  • Perform label encoding
  • Use the quick model option to get a sense of which features are adding predictive power as you progress with your data preparation

All the data preparation and feature engineering tasks can be performed from Data Wrangler’s single visual interface.

Once data is prepared in Amazon S3, Amazon SageMaker can be used for model training. In soccer, for example, you can predict a goal percentage based on a player’s position, acceleration, and past performance. SageMaker provides several built-in algorithms that can be trained. For real-time predictions, Amazon API Gateway provides an API layer to clients such as an OTT service, a broadcasting service, or a web browser. API Gateway can invoke a Lambda function with logic to call a SageMaker endpoint and persist the output to a database. This data can be used later for further analysis or to fine-tune your models.
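
A minimal sketch of that Lambda might look like the following (Python with boto3). The endpoint name, the DynamoDB table, and the payload fields are assumptions for illustration; the architecture above only specifies "a database".

# Sketch of the Lambda behind API Gateway: call a SageMaker endpoint and persist the
# prediction. Endpoint name, table name, and payload fields are assumptions.
import json
import boto3

sm = boto3.client("sagemaker-runtime")
table = boto3.resource("dynamodb").Table("GoalPredictions")      # hypothetical table

def handler(event, context):
    features = json.loads(event["body"])      # e.g. position, acceleration, history
    resp = sm.invoke_endpoint(
        EndpointName="goal-probability-endpoint",                # hypothetical endpoint
        ContentType="application/json",
        Body=json.dumps(features))
    prediction = json.loads(resp["Body"].read())
    table.put_item(Item={"playerId": features["playerId"],
                         "prediction": str(prediction)})
    return {"statusCode": 200, "body": json.dumps(prediction)}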

Figure 2. Deliver real-time prediction using Amazon SageMaker

Computer vision-based object detection techniques can be very useful in sports. These techniques use deep learning algorithms to predict pass probability, game player face-offs, or game outcomes. Object detection technologies like these are crucial for the sports industry because they obviate the need for sensors. Real-time object identification can be used to:

  • Generate new advanced analytics regarding player and team performance
  • Aid game officials in making correct calls
  • Provide fans an improved and more data-rich viewing experience

Read Football tracking in the NFL with Amazon SageMaker for more information on how to track using broadcast video data. Using SageMaker, you can train object detection models that analyze thousands of images. You can then locate and classify the football itself, and distinguish it from background objects.

Creating a graphics overlay

When you have the ML inference data and video ingestion ready, you may want to represent this data on your broadcasted video. The graphic overlay feature lets you insert an image (a BMP, PNG, or TGA file) at a specified time. It is displayed as a static overlay on the underlying video for a specified duration. The motion graphic overlay feature lets you insert an animation (a MOV or SWF file, or a series of PNG files) on the underlying video. This can be displayed at a specified time for a specified duration.

For example, a player’s motion prediction can be inserted into video during a game through a RESTful API call of ML inferences. You can use AWS Elemental Live to achieve this. Read about the AWS Elemental Live graphic overlay feature in the AWS documentation.

Reducing latency

You may want to reduce latency for analytics related to player health and safety. You can run video, data, or machine learning processing at the arena using AWS Outposts. You can also use AWS Wavelength along with 5G infrastructure. For more information, read Catch Important Moments in Sports with 5G and AWS Wavelength.

Summary

In this blog, we’ve highlighted how customers in the sports industry are using AWS to increase the quality of the game, and enhance the sports fan’s experience. The following benefits can be achieved by building a data pipeline for tracking sporting events using AWS services:

  • Amazon Kinesis collects, processes, and analyzes in-game streaming data in real time. This way both teams and fans get timely insights and can react quickly to new information.
  • The serverless nature of this architecture enables a cost-effective, scalable, and operationally efficient environment for customers.
  • AWS machine learning services like Amazon SageMaker can be used to enrich the fan viewing experience by presenting in-game predictions such as who will score next, or which team will win the game.

Visit our AWS Sports Partnerships page for more information on how AWS is changing the game.

Audit Your Supply Chain with Amazon Managed Blockchain

Post Syndicated from Edouard Kachelmann original https://aws.amazon.com/blogs/architecture/audit-your-supply-chain-with-amazon-managed-blockchain/

For manufacturing companies, visibility into complex supply chain processes is critical to establishing resilient supply chain management. Being able to trace events within a supply chain is key to verifying the origins of parts for regulatory requirements, tracing parts back to suppliers if issues arise, and for contacting buyers if there is a product/part recall.

Traditionally, companies will create their own ledger that can be reviewed and shared with third parties for future audits. However, this process takes time and requires verifying the data’s authenticity. In this blog, we offer a solution to audit your supply chain. Our solution allows supply chain participants to safeguard product authenticity and prevent fraud, increase profitability by driving operational efficiencies, and enhance visibility to minimize disputes across parties.

Benefits of blockchain

Blockchain technology offers a new approach for tracking supply chain events. Blockchains are immutable ledgers that allow you to cryptographically prove that, since being written, each transaction remains unchanged. For a supply chain, this immutability is beneficial from a process standpoint. Auditing a supply chain becomes much simpler when you are certain that no one has altered the manufacturing, transportation, storage, or usage history of a given part or product in the time since a failure occurred.

In addition to providing an immutable system of record, many blockchain protocols can run programmable logic written as code in a decentralized manner. This code is often referred to as a “smart contract,” which enables multi-party business logic to run on the blockchain. This means that implementing your supply chain on a blockchain allows members of the network (like retailers, suppliers, etc.) to process transactions that only they are authorized to process.

Benefits of Amazon Managed Blockchain

Amazon Managed Blockchain allows customers to join either private Hyperledger Fabric networks or the Public Ethereum network. On Managed Blockchain, you are relieved of the undifferentiated heavy lifting associated with creating, configuring, and managing the underlying infrastructure for a Hyperledger Fabric network. Instead, you can focus your efforts on mission-critical value drivers like building consortia or developing use case specific components. This allows you to create and manage a scalable Hyperledger Fabric network that multiple organizations can join from their AWS account.

IoT-enabled supply chain architecture

Organizations within the Industrial Internet of Things (IIoT) space want solutions that allow them to monitor and audit their supply chain for strict quality control and accurate product tracking. Using AWS IoT allows you to realize operational efficiency at scale. IoT-enabled equipment on the production plant floor records data such as load, pressure, temperature, humidity, and assembly metrics through multiple sensors. Data can be transmitted in real time directly to the cloud, or through an on-premises AWS IoT gateway (such as any AWS IoT Greengrass compatible hardware), into AWS IoT for storage and analytics. These devices or the IoT gateway then send MQTT messages to the AWS IoT Core endpoint.

This solution provides a pipeline to ingest data provided by IoT. It stores this data in a private blockchain network that is only accessible within member organizations. This is your immutable single source of truth for future audits. In this solution, the Hyperledger Fabric network on Managed Blockchain includes two members, but it can be extended to additional organizations that are part of the supply chain as needed.

Figure 1. Reference architecture for an IoT-enabled supply chain consisting of a retailer and a manufacturer

The components of this solution are:

  • IoT enabled sensors – These sensors are directly mounted on each piece of factory equipment throughout the supply chain. They publish data to the IoT gateway. For testing purposes, you can start with the IoT Device Simulator solution to create and simulate hundreds of connected devices.
  • AWS IoT Greengrass (optional) – This gateway provides a secure way to seamlessly connect your edge devices to any AWS service. It also enables local processing, messaging, data management, machine learning (ML) inference, and offers pre-built components such as protocol conversion to MQTT if your sensors only have an OPCUA or Modbus interface.
  • AWS IoT Core – AWS IoT Core subscribes to IoT topics published by the IoT devices or gateway and ingests data into the AWS Cloud for analysis and storage.
  • AWS IoT rule – Rules give your devices the ability to interact with AWS services. Rules are analyzed and actions are performed based on the MQTT topic stream. Here, we initiate a serverless Lambda function to extract, transform, and publish data to the Fabric Client (a minimal sketch follows this list). We could use another rule with an HTTPS endpoint action to send requests directly to a private API Gateway.
  • Amazon API Gateway – The API Gateway provides a REST interface to invoke the AWS Lambda function for each of the API routes deployed. API Gateway allows you to handle request authorization and authentication, before passing the request on to Lambda.
  • AWS Lambda for the Fabric Client – Using AWS Lambda with the Hyperledger Fabric SDK installed as a dependency, you can communicate with your Hyperledger Fabric Peer Node(s) to write and read data from the blockchain. The peer nodes run smart contracts (referred to as chaincode in Hyperledger Fabric), endorse transactions, and store a local copy of the ledger.
  • Managed Blockchain – Managed Blockchain is a fully managed service for creating and managing blockchain networks and network resources using open-source frameworks. In our solution, the Fabric Client uses an endpoint within the customer virtual private cloud (VPC) to interact with your Hyperledger Fabric network components, which run within a VPC for your Managed Blockchain network.
    • Peer node – A peer node endorses blockchain transactions and stores the blockchain ledger. In production, we recommend creating a second peer node in another Availability Zone to serve as a fallback if the first peer becomes unavailable.
    • Certificate Authority – Every user who interacts with the blockchain must first register and enroll with their certificate authority.
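
As referenced in the AWS IoT rule item above, here is a minimal Python sketch of the rule-triggered Lambda. It extracts and normalizes a sensor reading and forwards it asynchronously to the Fabric Client Lambda, which performs the actual chaincode invocation via the Hyperledger Fabric SDK. The function name "fabric-client", the chaincode function name, and the payload fields are assumptions for illustration.

# Sketch of the Lambda triggered by the AWS IoT rule: extract and normalize the sensor
# reading, then hand it to the Fabric Client Lambda for writing to the ledger.
# Function name, chaincode function, and field names are assumptions.
import json
import boto3

lam = boto3.client("lambda")

def handler(event, context):
    # The IoT rule delivers the MQTT payload as the Lambda event
    record = {
        "deviceId": event.get("deviceId"),
        "timestamp": event.get("timestamp"),
        "temperature": event.get("temperature"),
        "pressure": event.get("pressure"),
    }
    lam.invoke(
        FunctionName="fabric-client",             # hypothetical Fabric Client function
        InvocationType="Event",                   # asynchronous, fire-and-forget
        Payload=json.dumps({"fcn": "createRecord", "args": [json.dumps(record)]}),
    )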

Choosing a Hyperledger Fabric edition

Edition  | Network size             | Max. # of members | Max. # of peer nodes per member | Max. # of channels per network | Transaction throughput and availability
Starter  | Test or small production | 5                 | 2                               | 3                              | Lower
Standard | Large production         | 14                | 3                               | 8                              | Higher

Our solution allows multiple parties to write and query data on a private Hyperledger Fabric blockchain managed by Amazon Managed Blockchain. This enhances the consumer experience by reducing the overall effort and complexity of getting insight into supply chain transactions.

Conclusion

In this post, we showed you how Managed Blockchain, as well as other AWS services such as AWS IoT, can provide value to your business. The IoT-enabled supply chain architecture gives you a blueprint to realize that value. The value not only stems from the benefits of having a trustworthy and transparent supply chain, but also from the reliable, secure and scalable services that AWS provides.

Enhancing Existing Building Systems with AWS IoT Services

Post Syndicated from Lewis Taylor original https://aws.amazon.com/blogs/architecture/enhancing-existing-building-systems-with-aws-iot-services/

With the introduction of cloud technology and by extension the rapid emergence of Internet of Things (IoT), the barrier to entry for creating smart building solutions has never been lower. These solutions offer commercial real estate customers potential cost savings and the ability to enhance their tenants’ experience. You can differentiate your business from competitors by offering new amenities and add new sources of revenue by understanding more about your buildings’ operations.

There are several building management systems to consider in commercial buildings, such as air conditioning, fire, elevator, security, and grey/white water. Each system continues to add more features and become more automated, meaning that control mechanisms use all kinds of standards and protocols. This has led to fragmented building systems and inefficiency.

In this blog, we’ll show you how to use AWS for the Edge to bring these systems into one data path for cloud processing. You’ll learn how to use AWS IoT services to review and use this data to build smart building functions. Some common use cases include:

  • Provide building facility teams a holistic view of building status and performance, alerting them to problems sooner and helping them solve problems faster.
  • Provide a detailed record of the efficiency and usage of the building over time.
  • Use historical building data to help optimize building operations and predict maintenance needs.
  • Offer enriched tenant engagement through services like building control and personalized experiences.
  • Allow building owners to gather granular usage data from multiple buildings so they can react to changing usage patterns in a single platform.

Securely connecting building devices to AWS IoT Core

AWS IoT Core supports connections with building devices, wireless gateways, applications, and services. Devices connect to AWS IoT Core to send and receive data from AWS IoT Core services and other devices. Buildings often use different device types, and AWS IoT Core has multiple options to ingest data and enable connectivity within your building. AWS IoT Core is made up of the following components:

  • Device Gateway is the entry point for all devices. It manages your device connections and supports HTTPS and MQTT (3.1.1) protocols.
  • Message Broker is an elastic and fully managed pub/sub message broker that securely transmits messages (for example, device telemetry data) to and from all your building devices.
  • Registry is a database of all your devices and associated attributes and metadata. It allows you to group devices and services based upon attributes such as building, software version, vendor, class, floor, etc.

The architecture in Figure 1 shows how building devices can connect into AWS IoT Core. AWS IoT Core supports multiple connectivity options:

  • Native MQTT – Many building management systems and device controllers support MQTT out of the box.
  • AWS IoT Device SDK – This option supports MQTT protocol and multiple programming languages.
  • AWS IoT Greengrass – The previous options assume that devices are connected to the internet, but this isn’t always possible. AWS IoT Greengrass extends the cloud to the building’s edge. Devices can connect directly to AWS IoT Greengrass and send telemetry to AWS IoT Core.
  • AWS for the Edge partner products – There are several partner solutions, such as Ignition Edge from Inductive Automation, that offer protocol translation software to normalize in-building sensor data.

Figure 1. Data ingestion options from on-premises devices to AWS

Challenges when connecting buildings to the cloud

There are two common challenges when connecting building devices to the cloud:

  • You need a flexible platform to aggregate building device communication data
  • You need to transform the building data to a standard protocol, such as MQTT

Building data comes in a variety of protocols and formats, many of which are system-specific or legacy protocols. To overcome this, we suggest processing building device data at the edge, extracting important data points/values before transforming them to MQTT, and then sending the data to the cloud.

Transforming protocols can be complex because they can abstract naming and operation types. AWS IoT Greengrass and partner products such as Ignition Edge make it possible to read that data, normalize the naming, and extract useful information for device operation. Combined with AWS IoT Greengrass, this gives you a single way to validate the building device data and standardize its processing.

Using building data to develop smart building solutions

The architecture in Figure 2 shows an in-building lighting system. It is connected to AWS IoT Core and reports on devices’ status and gives users control over connected lights.

The architecture in Figure 2 has two data paths, which we’ll provide details on in the following sections, but here’s a summary:

  1. The “cold” path gathers all incoming data for batch data analysis and historical dashboarding.
  2. The “warm” bidirectional path is for faster, real-time data. It gathers devices’ current state data. This path is used by end-user applications for sending control messages, real-time reporting, or initiating alarms.

Figure 2. Architecture diagram of a building lighting system connected to AWS IoT Core

Cold data path

The cold data path gathers all lighting device telemetry data, such as power consumption, operating temperature, health data, etc. to help you understand how the lighting system is functioning.

Building devices can often deliver unstructured, inconsistent, and large volumes of data. AWS IoT Analytics helps clean up this data by applying filters, transformations, and enrichment from other data sources before storing it. With the processed data in Amazon Simple Storage Service (Amazon S3), you can analyze it in different ways. Here we use Amazon Athena and Amazon QuickSight for building operational dashboard visualizations.

Let’s discuss a real-world example. For building lighting systems, understanding your energy consumption is important for evaluating energy and cost efficiency. Data ingested into AWS IoT Core can be stored long term in Amazon S3, making it available for historical reporting. Athena and QuickSight can quickly query this data and build visualizations that show lighting state (on or off) and energy consumption over a set period of time. You can also overlay this data with sunrise and sunset data to provide insight into whether you are using your lighting systems efficiently. For example, you could adjust the lighting schedule to account for the darker winter months versus the brighter summer months.
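
As a rough sketch of such a cold-path query, the following uses Athena via boto3 to aggregate monthly lighting energy use. The database, table, and column names ("building_iot", "lighting_telemetry", "ts", "energy_wh", "state") and the results bucket are hypothetical; adapt them to your own data layout in Amazon S3.

# Sketch: query lighting telemetry in S3 with Athena. All names are hypothetical.
import boto3

athena = boto3.client("athena")

query = """
SELECT date_trunc('month', from_unixtime(ts / 1000)) AS month,
       sum(energy_wh) / 1000.0 AS kwh
FROM building_iot.lighting_telemetry
WHERE state = 'on'
GROUP BY 1
ORDER BY 1
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "building_iot"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical bucket
)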

Warm data path

In the warm data path, the AWS IoT Device Shadow service makes the device state available. Shadow updates are forwarded by an AWS IoT rule to downstream services such as AWS IoT Events, which tracks and monitors multiple devices and data points, and then initiates actions based on specific events. Further, you could build APIs that interact with AWS IoT Device Shadow. In this architecture, we have used AWS AppSync and AWS Lambda to enable building controls via a tenant smartphone application.
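
For illustration, a minimal Lambda used as an AppSync data source could set a light’s desired state in its Device Shadow like this. The thing name and the shadow document fields are assumptions for the sketch.

# Sketch of a Lambda that sets a light's desired state in its Device Shadow.
# Thing name and shadow document fields are assumptions.
import json
import boto3

iot_data = boto3.client("iot-data")

def handler(event, context):
    thing_name = event["thingName"]             # e.g. "meeting-room-3-light"
    desired = {"power": event["power"], "brightness": event.get("brightness", 80)}
    iot_data.update_thing_shadow(
        thingName=thing_name,
        payload=json.dumps({"state": {"desired": desired}}),
    )
    return {"ok": True}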

Let’s discuss a real-world example. In an office meeting room lighting system, maintaining a certain brightness level is important for health and safety. If the space is unoccupied, you can save money by turning the lighting down or off. AWS IoT Events can take inputs from lumen sensors, lighting systems, and motorized blinds and put them into a detector model. This model calculates and prompts the best action to maintain the room’s brightness throughout the day. If the lumen level drops below a specific threshold, AWS IoT Events could prompt an action to restore an optimal brightness level. If an occupancy sensor is added to the room, the model can tell whether someone is in the room and maintain the lighting state accordingly. If that person leaves, it will turn off the lighting. The ongoing calculation of state can also evaluate the time of day or weather conditions, and then select the most economical option for the room, such as opening the window blinds rather than turning on the lighting system.

Conclusion

In this blog, we demonstrated how to collect and aggregate the data produced by on-premises building management platforms. We discussed how augmenting this data with the AWS IoT Core platform allows for the development of smart building solutions such as building automation and operational dashboarding. AWS products and services can enable your buildings to be more efficient while also providing engaging tenant experiences. For more information on how to get started, please check out our getting started with AWS IoT Core developer guide.

Building serverless applications with streaming data: Part 2

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-serverless-applications-with-streaming-data-part-2/

Part 1 introduces the Alleycat application that allows bike racers to compete with each other virtually on home exercise bikes. I explain the application’s functionality, how to deploy to your AWS account, and provide an architectural review.

This series is about building serverless solutions in streaming data workloads. These are traditionally challenging to build, since data can be streamed from thousands or even millions of devices continuously.

In the example scenario, there are 40,000 users and up to 1,000 competitors may race at any given time. The workload must continuously ingest and buffer this data, then process and analyze the information to provide analytics and leaderboard content for the frontend application.

In this post, I focus on data ingestion. I compare the two different methods used in Alleycat, and discuss other approaches available. This post refers to Amazon Kinesis Data Streams, the AWS SDK, and AWS IoT Core in the solutions.

To set up the example, visit the GitHub repo and follow the instructions in the README.md file. Note that this walkthrough uses services that are not covered by the AWS Free Tier and incur cost.

Using AWS IoT Core to ingest streaming data

AWS IoT Core enables publish-subscribe capabilities for large numbers of client applications. Clients can send data to the backend using the AWS IoT Device SDK, which uses the MQTT standard for IoT messaging. After processing, the backend can publish aggregation and status messages back to the frontend via AWS IoT Core. This service fans out the messages to clients using topics.

When using this approach, note the Quality of Service (QoS) options available. By default, the SDK uses QoS level 0, which means the device does not confirm that the message is received. This is intended for workloads that can lose messages occasionally without impacting performance. In Alleycat, if performance metrics are sometimes lost, this is unlikely to impact the overall end-user experience.

For workloads requiring higher reliability, use QoS level 1, which causes the SDK to resend the message until an acknowledgement is received. While there is no additional charge for using QoS level 1, it generally increases the number of messages, which increases the overall cost. You are not charged for the PUBACK acknowledgement message – for more details, read more about AWS IoT Core pricing.
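
Alleycat’s frontend uses the JavaScript AWS IoT Device SDK (shown in the next section), but as an illustration of the QoS choice, here is a minimal Python sketch using the AWS IoT Device SDK v2. The endpoint, certificate paths, client ID, and topic are placeholders, not values from the Alleycat project.

# Sketch: publishing a reading with QoS 1 using the AWS IoT Device SDK v2 for Python.
# Endpoint, certificate paths, and topic are placeholders.
import json
from awscrt import mqtt
from awsiot import mqtt_connection_builder

connection = mqtt_connection_builder.mtls_from_path(
    endpoint="xxxxxxxx-ats.iot.us-east-1.amazonaws.com",   # your AWS IoT endpoint
    cert_filepath="device.pem.crt",
    pri_key_filepath="private.pem.key",
    ca_filepath="AmazonRootCA1.pem",
    client_id="bike-001",
)
connection.connect().result()

message = {"racerId": 0, "second": 3, "cadence": 79.8, "resistance": 79}
connection.publish(
    topic="alleycat-publish",
    payload=json.dumps(message),
    qos=mqtt.QoS.AT_LEAST_ONCE,    # QoS 1: resend until a PUBACK is received
)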

Frontend

In this scenario, the Alleycat frontend application is running on a physical exercise bike. The user selects a racer ID and exercise class and chooses Start Race to join the current virtual race for that class.

Start race UI

Every second, the frontend sends a message containing the cadence and resistance metrics and the current second in the race for the local racer. This message is created as a JSON object in the Home.vue component and sent to the ‘alleycat-publish’ topic:

      const message = {
        uuid: uuidv4(),
        event: this.event,
        deviceTimestamp: Date.now(),
        second: this.currentSecond,
        raceId: RACE_ID,
        name: this.racer.name,
        racerId: this.racer.id,
        classId: this.selectedClassId,
        cadence: this.racer.getCurrentCadence(),
        resistance: this.racer.getCurrentResistance()
      }

The IoT.vue component contains the logic for this integration and uses the AWS IoT Device SDK to send and receive messages. On startup, the frontend connects to AWS IoT Core and publishes the messages using an MQTT client:

    bus.$on('publish', (data) => {
      console.log('Publish: ', data)
      mqttClient.publish(topics.publish, JSON.stringify(data))
    })

The SDK automatically attempts to retry in the event of a network disconnection and exposes an error handler to allow custom logic if other errors occur.

Backend

The resources used in the backend are defined using the AWS Serverless Application Model (AWS SAM) and configured in the core setup templates:

Reference architecture

Messages are published to topics in AWS IoT Core, which act as channels of interest. The message broker uses topic names and topic filters to route messages between publishers and subscribers. Incoming messages are routed using rules. Alleycat’s IoT rule routes all incoming messages to a Kinesis stream:

  IotTopicRule:
    Type: AWS::IoT::TopicRule
    Properties:
      RuleName: 'alleycatIngest'
      TopicRulePayload:
        RuleDisabled: 'false'
        Sql: "SELECT * FROM 'alleycat-publish'"
        Actions:
        - Kinesis:
            StreamName: 'alleycat'
            PartitionKey: "${timestamp()}"
            RoleArn: !GetAtt IoTKinesisRole.Arn

  IoTKinesisRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - iot.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Path: /
      Policies:
        - PolicyName: IoTKinesisPutPolicy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: 'kinesis:PutRecord'
                Resource: !GetAtt KinesisStream.Arn

Using the AWS::IoT::TopicRule resource, you can optionally define an error action. This allows you to store messages in a durable location, such as an Amazon S3 bucket, if an error occurs. Errors can occur if a rule does not have permission to access a destination or throttling occurs in a target.

Rules can route matching messages to up to 10 targets. For debugging purposes, you can also enable Amazon CloudWatch Logs, which can help in troubleshooting failed message deliveries. The AWS IoT Core message broker allows up to 20,000 publish requests per second – if you need a higher limit for your workload, submit a request to AWS Support.

Using the AWS SDK to ingest streaming data

The Alleycat frontend creates traffic for a single user but there is also a simulator application that can generate messages for up to 1,000 riders. Instead of routing messages using an MQTT client, the simulator uses the AWS SDK to put messages directly into the Kinesis data stream.

The SDK provides a service interface object for Kinesis and two API methods for putting messages into streams: putRecord and putRecords. The first option accepts only a single message but the second enables batching of up to 500 messages per request. This is the preferred option for adding multiple messages, compared with calling putRecord multiple times.

The putRecords API takes parameters as a JSON array of messages:

const params = {
   StreamName: 'alley-cat',
   Records: [{
      "Data":"{\"event\":\"update\",\"deviceTimestamp\":1620824038331,\"second\":3,\"raceId\":5402746,\"name\":\"Hayden\",\"racerId\":0,\"classId\":1,\"cadence\":79.8,\"resistance\":79}",
      "PartitionKey":"1620824038331"
   },
   {
      "Data":"{\"event\":\"update\",\"deviceTimestamp\":1620824038331,\"second\":3,\"raceId\":5402746,\"name\":\"Hubert\",\"racerId\":1,\"classId\":1,\"cadence\":60.4,\"resistance\":60.6}",
      "PartitionKey":"1620824038331"
   }
]}

The SDK automatically base64 encodes the Data attribute, which in this case is the JSON string output from JSON.stringify. In the JavaScript SDK, the putRecords API can return a promise, allowing the code to await the operation:

const result = await kinesis.putRecords(params).promise()

Shards and partition keys

Kinesis data streams consist of one or more shards, which are sequences of data records with a fixed capacity. Each shard can support up to 1,000 records per second for writes, up to a maximum total data write rate of 1 MB per second. The total capacity of a stream is the sum of the capacities of its shards.

When you send messages to a stream, the partitionKey attribute determines which shard it is routed to. The example application configures a Kinesis data stream with a single shard so the partitionKey attribute has no effect – all messages are routed to the same shard. However, many production applications have more than one shard and use the partitionKey to assign messages to shards.

The partitionKey is hashed by the Kinesis service to route to a shard. This diagram shows how partitionKey values from data producers are hashed by an MD5 function and mapped to individual shards:

MD5 hash process

While you cannot designate a specific shard ID in a message, you can influence the assignment depending on your choice of partitionKey:

  • Random: Using a randomized value results in a random hash, so messages are randomly sent to different shards. This effectively load balances messages across all available shards.
  • Time-based: A timestamp value may cause groups of messages to be sent to a single shard if the messages arrive at the same time, since identical timestamps result in identical hashes.
  • Application-specific: If Alleycat used the classId as a partitionKey, racers in each class would always be routed to the same shard. This could be useful for downstream aggregation logic but would limit the capacity of messages per classId.

Optimizing capacity in a shard

Each shard can ingest data at a rate of 1 MB per second or 1,000 records per second, whichever limit is reached first. Since the maximum payload size is 1 MB, this could equate to one 1 MB message per second. If a payload is larger, you must divide it into smaller pieces to avoid an error. For 1,000 messages, each payload must be under 1 KB on average to fit within the allowed capacity.

The combination of the two payload limits can result in different capacity profiles for a shard:

Capacity profiles in a shard

  1. The data payloads are evenly sized and use the full 1 MB per second capacity.
  2. Data payload sizes vary, so the number of messages that can be packed into 1 MB varies per second.
  3. There are a large number of very small messages, consuming all 1,000 records per second. However, the total data capacity used is significantly less than 1 MB.

In the Alleycat application, the average payload size is around 170 bytes. When producing 1,000 messages a second, the workload is only using about 20% of the 1 MB per second limit. Since PUT payload size is a factor in Kinesis pricing, messages that are much smaller than 25 KB are less cost-efficient. Compare these two messaging patterns for the Alleycat application:

Producer message patterns

  1. In this default mode, a smaller message is published once per second. This reduces overall latency but results in higher overall messaging cost.
  2. The client application batches outgoing messages and sends to Kinesis every 5 seconds. This results in lower cost and better packing of messages, but introduces additional latency.

There is a tradeoff between cost and latency when optimizing a shard’s capacity, and the decision depends upon the needs of your workload. If the client buffers messages, this adds latency on the client side. This is acceptable in many workloads that collect metrics for archival or asynchronous reporting purposes. However, for low-latency applications like Alleycat, it provides a better experience for the application user to send messages as soon as they are available.
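
As a sketch of the second pattern, a producer could buffer readings locally and flush them to Kinesis with putRecords every five seconds. This example uses Python and boto3 for illustration (the Alleycat simulator uses the JavaScript SDK); the stream name and message fields are placeholders.

# Sketch of the batching pattern: buffer readings and flush them to Kinesis with
# put_records roughly every 5 seconds. Stream name and fields are placeholders.
import json
import time
import boto3

kinesis = boto3.client("kinesis")
buffer = []

def enqueue(reading):
    buffer.append({
        "Data": json.dumps(reading),
        "PartitionKey": str(reading["deviceTimestamp"]),
    })

def flush():
    global buffer
    if buffer:
        kinesis.put_records(StreamName="alleycat", Records=buffer[:500])  # 500-record API limit
        buffer = buffer[500:]

while True:
    for _ in range(5):                       # collect one reading per second
        enqueue({"deviceTimestamp": int(time.time() * 1000),
                 "cadence": 80, "resistance": 60})
        time.sleep(1)
    flush()                                  # one putRecords call every ~5 seconds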

Conclusion

This post focuses on ingesting data into Kinesis Data Streams. I explain the two approaches used by the Alleycat frontend and the simulator application and highlight other approaches that you can use. I show how messages are routed to shards using partition keys. Finally, I explore additional factors to consider when ingesting data, to improve efficiency and reduce cost.

Part 3 covers using Amazon Kinesis Data Firehose for transforming, aggregating, and loading streaming data into data stores. This is used to provide the historical, second-by-second leaderboard for the frontend application.

For more serverless learning resources, visit Serverless Land.

Router Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/02/router-security.html

This report is six months old, and I don’t know anything about the organization that produced it, but it has some alarming data about router security.

Conclusion: Our analysis showed that Linux is the most used OS running on more than 90% of the devices. However, many routers are powered by very old versions of Linux. Most devices are still powered with a 2.6 Linux kernel, which is no longer maintained for many years. This leads to a high number of critical and high severity CVEs affecting these devices.

Since Linux is the most used OS, exploit mitigation techniques could be enabled very easily. Anyhow, they are used quite rarely by most vendors except the NX feature.

A published private key provides no security at all. Nonetheless, all but one vendor spread several private keys in almost all firmware images.

Mirai used hard-coded login credentials to infect thousands of embedded devices in the last years. However, hard-coded credentials can be found in many of the devices and some of them are well known or at least easy crackable.

However, we can tell for sure that the vendors prioritize security differently. AVM does better job than the other vendors regarding most aspects. ASUS and Netgear do a better job in some aspects than D-Link, Linksys, TP-Link and Zyxel.

Additionally, our evaluation showed that large scale automated security analysis of embedded devices is possible today utilizing just open source software. To sum it up, our analysis shows that there is no router without flaws and there is no vendor who does a perfect job regarding all security aspects. Much more effort is needed to make home routers as secure as current desktop of server systems.

One comment on the report:

One-third ship with Linux kernel version 2.6.36, which was released in October 2010. You can walk into a store today and buy a brand new router powered by software that’s almost 10 years out of date! This outdated version of the Linux kernel has 233 known security vulnerabilities registered in the Common Vulnerability and Exposures (CVE) database. The average router contains 26 critically-rated security vulnerabilities, according to the study.

We know the reasons for this. Most routers are designed offshore, by third parties, and then private labeled and sold by the vendors you’ve heard of. Engineering teams come together, design and build the router, and then disperse. There’s often no one around to write patches, and most of the time router firmware isn’t even patchable. The way to update your home router is to throw it away and buy a new one.

And this paper demonstrates that even the new ones aren’t likely to be secure.

Chinese Supply-Chain Attack on Computer Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/02/chinese-supply-chain-attack-on-computer-systems.html

Bloomberg News has a major story about the Chinese hacking of computer motherboards made by Supermicro, Lenovo, and others. It’s been going on since at least 2008. The US government has known about it for almost as long, and has tried to keep the attack secret:

China’s exploitation of products made by Supermicro, as the U.S. company is known, has been under federal scrutiny for much of the past decade, according to 14 former law enforcement and intelligence officials familiar with the matter. That included an FBI counterintelligence investigation that began around 2012, when agents started monitoring the communications of a small group of Supermicro workers, using warrants obtained under the Foreign Intelligence Surveillance Act, or FISA, according to five of the officials.

There’s lots of detail in the article, and I recommend that you read it through.

This is a follow on, with a lot more detail, to a story Bloomberg reported on in fall 2018. I didn’t believe the story back then, writing:

I don’t think it’s real. Yes, it’s plausible. But first of all, if someone actually surreptitiously put malicious chips onto motherboards en masse, we would have seen a photo of the alleged chip already. And second, there are easier, more effective, and less obvious ways of adding backdoors to networking equipment.

I seem to have been wrong. From the current Bloomberg story:

Mike Quinn, a cybersecurity executive who served in senior roles at Cisco Systems Inc. and Microsoft Corp., said he was briefed about added chips on Supermicro motherboards by officials from the U.S. Air Force. Quinn was working for a company that was a potential bidder for Air Force contracts, and the officials wanted to ensure that any work would not include Supermicro equipment, he said. Bloomberg agreed not to specify when Quinn received the briefing or identify the company he was working for at the time.

“This wasn’t a case of a guy stealing a board and soldering a chip on in his hotel room; it was architected onto the final device,” Quinn said, recalling details provided by Air Force officials. The chip “was blended into the trace on a multilayered board,” he said.

“The attackers knew how that board was designed so it would pass” quality assurance tests, Quinn said.

Supply-chain attacks are the flavor of the moment, it seems. But they’re serious, and very hard to defend against in our deeply international IT industry. (I have repeatedly called this an “insurmountable problem.”) Here’s me in 2018:

Supply-chain security is an incredibly complex problem. US-only design and manufacturing isn’t an option; the tech world is far too internationally interdependent for that. We can’t trust anyone, yet we have no choice but to trust everyone. Our phones, computers, software and cloud systems are touched by citizens of dozens of different countries, any one of whom could subvert them at the demand of their government.

We need some fundamental security research here. I wrote this in 2019:

The other solution is to build a secure system, even though any of its parts can be subverted. This is what the former Deputy Director of National Intelligence Sue Gordon meant in April when she said about 5G, “You have to presume a dirty network.” Or more precisely, can we solve this by building trustworthy systems out of untrustworthy parts?

It sounds ridiculous on its face, but the Internet itself was a solution to a similar problem: a reliable network built out of unreliable parts. This was the result of decades of research. That research continues today, and it’s how we can have highly resilient distributed systems like Google’s network even though none of the individual components are particularly good. It’s also the philosophy behind much of the cybersecurity industry today: systems watching one another, looking for vulnerabilities and signs of attack.

It seems that supply-chain attacks are constantly in the news right now. That’s good. They’ve been a serious problem for a long time, and we need to take the threat seriously. For further reading, I strongly recommend this Atlantic Council report from last summer: “Breaking trust: Shades of crisis across an insecure software supply chain.”

Presidential Cybersecurity and Pelotons

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/02/presidential-cybersecurity-and-pelotons.html

President Biden wants his Peloton in the White House. For those who have missed the hype, it’s an Internet-connected stationary bicycle. It has a screen, a camera, and a microphone. You can take live classes online, work out with your friends, or join the exercise social network. And all of that is a security risk, especially if you are the president of the United States.

Any computer brings with it the risk of hacking. This is true of our computers and phones, and it’s also true about all of the Internet-of-Things devices that are increasingly part of our lives. These large and small appliances, cars, medical devices, toys and — yes — exercise machines are all computers at their core, and they’re all just as vulnerable. Presidents face special risks when it comes to the IoT, but Biden has the NSA to help him handle them.

Not everyone is so lucky, and the rest of us need something more structural.

US presidents have long tussled with their security advisers over tech. The NSA often customizes devices, but that means eliminating features. In 2010, President Barack Obama complained that his presidential BlackBerry device was “no fun” because only ten people were allowed to contact him on it. In 2013, security prevented him from getting an iPhone. When he finally got an upgrade to his BlackBerry in 2016, he complained that his new “secure” phone couldn’t take pictures, send texts, or play music. His “hardened” iPad to read daily intelligence briefings was presumably similarly handicapped. We don’t know what the NSA did to these devices, but they certainly modified the software and physically removed the cameras and microphones — and possibly the wireless Internet connection.

President Donald Trump resisted efforts to secure his phones. We don’t know the details, only that they were regularly replaced, with the government effectively treating them as burner phones.

The risks are serious. We know that the Russians and the Chinese were eavesdropping on Trump’s phones. Hackers can remotely turn on microphones and cameras, listening in on conversations. They can grab copies of any documents on the device. They can also use those devices to further infiltrate government networks, maybe even jumping onto classified networks that the devices connect to. If the devices have physical capabilities, those can be hacked as well. In 2007, the wireless features of Vice President Richard B. Cheney’s pacemaker were disabled out of fears that it could be hacked to assassinate him. In 1999, the NSA banned Furbies from its offices, mistakenly believing that they could listen and learn.

Physically removing features and components works, but the results are increasingly unacceptable. The NSA could take Biden’s Peloton and rip out the camera, microphone, and Internet connection, and that would make it secure — but then it would just be a normal (albeit expensive) stationary bike. Maybe Biden wouldn’t accept that, and he’d demand that the NSA do even more work to customize and secure the Peloton part of the bicycle. Maybe Biden’s security agents could isolate his Peloton in a specially shielded room where it couldn’t infect other computers, and warn him not to discuss national security in its presence.

This might work, but it certainly doesn’t scale. As president, Biden can direct substantial resources to solving his cybersecurity problems. The real issue is what everyone else should do. The president of the United States is a singular espionage target, but so are members of his staff and other administration officials.

Members of Congress are targets, as are governors and mayors, police officers and judges, CEOs and directors of human rights organizations, nuclear power plant operators, and election officials. All of these people have smartphones, tablets, and laptops. Many have Internet-connected cars and appliances, vacuums, bikes, and doorbells. Every one of those devices is a potential security risk, and all of those people are potential national security targets. But none of those people will get their Internet-connected devices customized by the NSA.

That is the real cybersecurity issue. Internet connectivity brings with it features we like. In our cars, it means real-time navigation, entertainment options, automatic diagnostics, and more. In a Peloton, it means everything that makes it more than a stationary bike. In a pacemaker, it means continuous monitoring by your doctor — and possibly your life saved as a result. In an iPhone or iPad, it means…well, everything. We can search for older, non-networked versions of some of these devices, or the NSA can disable connectivity for the privileged few of us. But the result is the same: in Obama’s words, “no fun.”

And unconnected options are increasingly hard to find. In 2016, I tried to find a new car that didn’t come with Internet connectivity, but I had to give up: there were no options to omit that in the class of car I wanted. Similarly, it’s getting harder to find major appliances without a wireless connection. As the price of connectivity continues to drop, more and more things will only be available Internet-enabled.

Internet security is national security — not because the president is personally vulnerable but because we are all part of a single network. Depending on who we are and what we do, we will make different trade-offs between security and fun. But we all deserve better options.

Regulations that force manufacturers to provide better security for all of us are the only way to do that. We need minimum security standards for computers of all kinds. We need transparency laws that give all of us, from the president on down, sufficient information to make our own security trade-offs. And we need liability laws that hold companies liable when they misrepresent the security of their products and services.

I’m not worried about Biden. He and his staff will figure out how to balance his exercise needs with the national security needs of the country. Sometimes the solutions are weirdly customized, such as the anti-eavesdropping tent that Obama used while traveling. I am much more worried about the political activists, journalists, human rights workers, and oppressed minorities around the world who don’t have the money or expertise to secure their technology, or the information that would give them the ability to make informed decisions on which technologies to choose.

This essay previously appeared in the Washington Post.

Building a Controlled Environment Agriculture Platform

Post Syndicated from Ashu Joshi original https://aws.amazon.com/blogs/architecture/building-a-controlled-environment-agriculture-platform/

This post was co-written by Michael Wirig, Software Engineering Manager at Grōv Technologies.

A substantial percentage of the world’s habitable land is used for livestock farming for dairy and meat production. The dairy industry has leveraged technology to gain insights that have led to drastic improvements, and those improvements continue to accelerate. A gallon of milk in 2017 involved 30% less water, 21% less land, a 19% smaller carbon footprint, and 20% less manure than it did in 2007 (US Dairy, 2019). By focusing on smarter water usage and sustainable land usage, livestock farming can grow to provide sustainable and nutrient-dense food for consumers and livestock alike.

Grōv Technologies (Grōv) has pioneered the Olympus Tower Farm, a fully automated Controlled Environment Agriculture (CEA) system. Unique amongst vertical farming startups, Grōv is growing cattle feed to improve the sustainable use of land for livestock farming while increasing the economic margins for dairy and beef producers.

The challenges of CEA

The set of growing conditions for a CEA is called a “recipe,” which is a combination of ingredients like temperature, humidity, light, carbon dioxide levels, and water. The optimal recipe is dynamic and is sensitive to its ingredients. Crops must be monitored in near-real time, and CEAs should be able to self-correct in order to maintain the recipe. Building a system with these capabilities requires answers to the following questions:

  • What parameters need to be measured for indoor cattle feed production?
  • Which sensors offer the right accuracy and price trade-offs at scale?
  • Where do you place the sensors to ensure a consistent crop?
  • How do you correlate the data from sensors to the nutrient value?

To progress from a passively monitored system to a self-correcting, autonomous one, the CEA platform also needs to address:

  • How to maintain optimum crop conditions
  • How the system can learn and adapt to new seed varieties
  • How to communicate key business drivers such as yield and dry matter percentage

Grōv partnered with AWS Professional Services (AWS ProServe) to build a digital CEA platform addressing the challenges posed above.

Olympus Tower - Grov Technologies

Tower automation and edge platform

The Olympus Tower is instrumented for measuring recipe ingredients by combining the mechanical, electrical, and domain expertise of the Grōv team with the IoT edge and sensor expertise of the AWS ProServe team. The teams identified a primary set of features such as height, weight, and evenness of the growth to be measured at multiple stages within the Tower. Sensors were also added to measure secondary features such as water level, water pH, temperature, humidity, and carbon dioxide.

The teams designed and developed a purpose-built modular and industrial sensor station. Each sensor station has sensors for direct measurement of the features identified. The sensor stations are extended to support indirect measurement of features using a combination of Computer Vision and Machine Learning (CV/ML).

The trays with the growing cattle feed circulate through the Olympus Tower. A growth cycle starts on a tray with seeding, circulates through the tower over the cycle, and returns to the starting position to be harvested. The sensor station at the seeding location on the Olympus Tower tags each new growth cycle in a tray with a unique “Grow ID.” As trays pass by, each sensor station in the Tower collects the feature data. The firmware, jointly developed for the sensor station, uses AWS IoT SDK to stream the sensor data along with the Grow ID and metadata that’s specific to the sensor station. This information is sent every five minutes to an on-site edge gateway powered by AWS IoT Greengrass. Dedicated AWS Lambda functions manage the lifecycle of the Grow IDs and the sensor data processing on the edge.
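
To make that data flow concrete, here is a minimal sketch of how a sensor station could publish a reading tagged with its Grow ID using the AWS IoT Device SDK for Python. The endpoint, topic, certificate paths, and field names are illustrative assumptions, not the production firmware.

import json
import time

from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

# Illustrative identifiers – the real station IDs, endpoint, topic, and certificate paths differ
client = AWSIoTMQTTClient("sensor-station-07")
client.configureEndpoint("<your-iot-endpoint>-ats.iot.us-west-2.amazonaws.com", 8883)
client.configureCredentials("root-ca.pem", "station.private.key", "station.cert.pem")
client.connect()

reading = {
    "growId": "GROW-000123",          # assigned when the tray is seeded
    "stationId": "sensor-station-07",
    "timestamp": int(time.time()),
    "height_mm": 84.2,
    "weight_g": 1510.0,
    "water_ph": 6.1,
}

# Publish one reading; the firmware repeats this on its five-minute cycle
client.publish("tower/sensor-data", json.dumps(reading), 1)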

The Grōv team developed AWS Greengrass Lambda functions running at the edge to ingest critical metrics from the operation automation software running the Olympus Towers. This information not only enables monitoring of operational efficiency, but also provides the hooks to control the feedback loop.

The two sources of data were augmented with site-level data by installing sensor stations at the building level or site level to capture environmental data such as weather and energy consumption of the Towers.

All three sources of data are streamed to AWS IoT Greengrass and are processed by AWS Lambda functions. The edge software also fuses the data and correlates all categories of data together. This enables two major actions for the Grōv team – operational capability in real-time at the edge and enhanced data streamed into the cloud.

Grov Technologies - Architecture

Cloud pipeline/platform: analytics and visualization

As the data is streamed to AWS IoT Core via AWS IoT Greengrass, AWS IoT rules are used to route the ingested data to Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB for storage. The data pipeline also includes Amazon Kinesis Data Streams for batching and additional processing of the incoming data.

A ReactJS-based dashboard application is powered using Amazon API Gateway and AWS Lambda functions to report relevant metrics such as daily yield and machine uptime.

A data pipeline is deployed to analyze data using Amazon QuickSight. AWS Glue is used to create a dataset from the data stored in Amazon S3. Amazon Athena is used to query the dataset to make it available to Amazon QuickSight. This gives the extended Grōv tech team of research scientists the ability to perform a series of what-if analyses on the data coming in from the Tower Systems, beyond what is available in the ReactJS-based dashboard.

Data pipeline - Grov Technologies
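
As a rough illustration of the kind of ad hoc what-if analysis this enables, a query can be issued against the curated dataset through Athena with a few lines of Python. The database, table, column, and bucket names below are placeholders, not the actual Grōv resources.

import boto3

athena = boto3.client("athena")

# Hypothetical database and table names – the AWS Glue catalog defines the real schema
query = """
SELECT grow_id,
       avg(water_ph)    AS avg_ph,
       avg(temperature) AS avg_temp,
       max(height_mm)   AS final_height
FROM cea_lake.tower_sensor_data
WHERE grow_id = 'GROW-000123'
GROUP BY grow_id
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "cea_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])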

Completing the data-driven loop

Now that the data has been collected from all sources and stored in a data lake architecture, the Grōv CEA platform has established a strong foundation for harnessing insights and delivering customer outcomes using machine learning.

The integrated and fused data from the edge (sourced from the Olympus Tower instrumentation, Olympus automation software data, and site-level data) is correlated with the lab analysis performed by the Grōv Research Center (GRC). Harvest samples are routinely collected and sent to the lab, which performs wet chemistry and microbiological analysis. Trays sent as samples to the lab are matched to their sensor data through the corresponding Grow IDs, and the lab results are associated with that data. This serves as a mechanism for labeling and correlating the recipe data with the parameters used by dairy and beef producers – dry matter percentage, micro and macronutrients, and the presence of mycotoxins.

Grōv has chosen Amazon SageMaker to build a machine learning pipeline on its comprehensive data set, which will enable fine-tuning of the growing protocols in near-real time. Historical data collection also unlocks machine learning use cases such as future detection of anomalous sensor readings and sensor health monitoring.

Because the solution is flexible, the Grōv team plans to integrate data from animal studies on their health and feed efficiency into the CEA platform. Machine learning on the data from animal studies will enhance the tuning of recipe ingredients that impact the animals’ health. This will give the farmer an unprecedented view of the impact of feed nutrition on the end product and consumer.

Conclusion

Grōv Technologies and AWS ProServe have built a strong foundation for an extensible and scalable CEA platform architecture that will nourish animals for better health and yield, produce healthier foods, and enable continued research into dairy production, rumination, and animal health to empower sustainable farming practices.

Announcing AWS IoT Greengrass 2.0 – With an Open Source Edge Runtime and New Developer Capabilities

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/announcing-aws-iot-greengrass-2-0-with-an-open-source-edge-runtime-and-new-developer-capabilities/

I am happy to announce AWS IoT Greengrass 2.0, a new version of AWS IoT Greengrass that makes it easy for device builders to build, deploy, and manage intelligent device software. AWS IoT Greengrass 2.0 provides an open source edge runtime, a rich set of pre-built software components, tools for local software development, and new features for managing software on large fleets of devices.

 

The AWS IoT Greengrass 2.0 edge runtime is now open source under an Apache 2.0 license, and available on GitHub. Access to the source code allows you to more easily integrate your applications, troubleshoot problems, and build more reliable and performant applications that use AWS IoT Greengrass.

You can add or remove pre-built software components based on your IoT use case and your device’s CPU and memory resources. For example, you can choose to include pre-built AWS IoT Greengrass components such as stream manager only when you need to process data streams with your application, or machine learning components only when you want to perform machine learning inference locally on your devices.

AWS IoT Greengrass 2.0 includes a new command-line interface (CLI) that allows you to locally develop and debug applications on your device. In addition, there is a new local debug console that helps you visually debug applications on your device. With these new capabilities, you can rapidly develop and debug code on a test device before using the cloud to deploy to your production devices.

AWS IoT Greengrass 2.0 is also integrated with AWS IoT thing groups, enabling you to easily organize your devices in groups and manage application deployments across your devices with features to control rollout rates, timeouts, and rollbacks.

AWS IoT Greengrass 2.0 – Getting Started
Device builders can use AWS IoT Greengrass 2.0 by going to the AWS IoT Greengrass console where you can find a download and install command that you run on your device. Once the installer is downloaded to the device, you can use it to install Greengrass software with all essential features, register the device as an AWS IoT Thing, and create a simple “hello world” software component in less than 10 minutes.

To get started in the AWS IoT Greengrass console, you first register a test device by clicking Set up core device. You assign the name and group of your core device. To deploy to only the core device, select No group. In the next step, install the AWS IoT Greengrass Core software in your device.

When the installer completes, you can find your device in the list of AWS IoT Greengrass Core devices on the Core devices page.

AWS IoT Greengrass components enable you to develop and deploy software to your AWS IoT Greengrass Core devices. You can write your application functionality and bundle it as a private component for deployment. AWS IoT Greengrass also provides public components, which provide pre-built software for common use cases that you can deploy to your devices as you develop your device software. When you finish developing the software for your component, you can register it with AWS IoT Greengrass. Then, you can deploy and run the component on your AWS IoT Greengrass Core devices.

 

To create a component, click the Create component button on the Components page. You can use a recipe or import an AWS Lambda function. The component recipe is a YAML or JSON file that defines the component’s details, dependencies, compatibility, and lifecycle. To learn about the specifications, visit the recipe reference guide.

Here is an example of a YAML recipe.
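
The original post shows the recipe as an image; as a stand-in, the snippet below is a representative minimal recipe for a hypothetical hello-world component, with names, versions, and the artifact path chosen purely for illustration.

RecipeFormatVersion: '2020-01-25'
ComponentName: com.example.HelloWorld
ComponentVersion: '1.0.0'
ComponentDescription: My first AWS IoT Greengrass component.
ComponentPublisher: Example Corp
ComponentConfiguration:
  DefaultConfiguration:
    Message: world
Manifests:
  - Platform:
      os: linux
    Lifecycle:
      Run: |
        python3 -u {artifacts:path}/hello_world.py "{configuration:/Message}"
    Artifacts:
      - URI: s3://DOC-EXAMPLE-BUCKET/artifacts/com.example.HelloWorld/1.0.0/hello_world.py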

When you finish developing your component, you can add it to a deployment configuration to deploy to one or more core devices. To create a new deployment or configure the components to deploy to core devices, click the Create button on the Deployments page. You can deploy to a core device or a thing group as a target, and select the components to deploy. The deployment includes the dependencies for each component that you select.

 

You can edit the version and parameters of selected components and advanced settings such as the rollout configuration, which defines the rate at which the configuration deploys to the target devices; timeout configuration, which defines the duration that each device has to apply the deployment; or cancel configuration, which defines when to automatically stop the deployment.

Moving to AWS IoT Greengrass 2.0
Existing devices running AWS IoT Greengrass 1.x will continue to run without any changes. If you want to take advantage of new AWS IoT Greengrass 2.0 features, you will need to move your existing AWS IoT Greengrass 1.x devices and workloads to AWS IoT Greengrass 2.0. To learn how to do this, visit the migration guide.

After you move your 1.x applications over, you can start adding components to your applications using new version 2 features, while leaving your version 1 code as-is until you decide to update them.

AWS IoT Greengrass 2.0 Partners
At launch, industry-leading partners NVIDIA and NXP have qualified a number of their devices for AWS IoT Greengrass 2.0.

See all partner device listings in the AWS Partner Device Catalog. To learn about getting your device qualified, visit the AWS Device Qualification Program.

Available Now
AWS IoT Greengrass 2.0 is available today. Please see the AWS Region table for all the regions where AWS IoT Greengrass is available. For more information, see the developer guide.

Starting today, to help you evaluate, test, and develop with this new release of AWS IoT Greengrass, the first 1,000 devices in your account will not incur any AWS IoT Greengrass charges until December 31, 2021. For pricing information, check out the AWS IoT Greengrass pricing page.

Give it a try, and please send us feedback through your usual AWS Support contacts or the AWS forum for AWS IoT Greengrass.

Learn all the details about AWS IoT Greengrass 2.0 and get started with the new version today.

Channy

New – AWS IoT Core for LoRaWAN to Connect, Manage, and Secure LoRaWAN Devices at Scale

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-aws-iot-core-for-lorawan-to-connect-manage-and-secure-lorawan-devices-at-scale/

Today, I am happy to announce AWS IoT Core for LoRaWAN, a new fully-managed feature that allows AWS IoT Core customers to connect and manage wireless devices that use low-power long-range wide area network (LoRaWAN) connectivity with the AWS Cloud.

Using AWS IoT Core for LoRaWAN, customers can now set up a private LoRaWAN network by connecting their own LoRaWAN devices and gateways to the AWS Cloud – without developing or operating a LoRaWAN Network Server (LNS) by themselves. The LNS is required to manage LoRaWAN devices and gateways’ connection to the cloud; gateways serve as a bridge and carry device data to and from the LNS, usually over Wi-Fi or Ethernet.

This allows customers to eliminate the undifferentiated work and operational burden of managing an LNS, and enables them to easily and quickly connect and secure LoRaWAN device fleets at scale.

Combined with the long range and deep in-building coverage provided by LoRa technology, AWS IoT Core now enables customers to accelerate IoT application development using AWS services and to act easily on the data generated by connected LoRaWAN devices.

Customers – mostly enterprises – need to develop IoT applications using devices that transmit data over long range (1-3 miles of urban coverage or up to 10 miles for line-of-sight) or through the walls and floors of buildings, for example for real-time asset tracking at airports, remote temperature monitoring in buildings, or predictive maintenance of industrial equipment. Such applications also require devices to be optimized for low-power consumption, so that batteries can last several years without replacement, thus making the implementation cost-effective. Given the extended coverage of LoRaWAN connectivity, it is attractive to enterprises for these use cases, but setting up LoRaWAN connectivity in a privately managed site requires customers to operate an LNS.

With AWS IoT Core for LoRaWAN, you can connect LoRaWAN devices and gateways to the cloud with a few simple steps in the AWS IoT Management Console, which speeds up network setup, and you can connect off-the-shelf LoRaWAN devices, with no need to modify their embedded software, for a plug-and-play experience.

AWS IoT Core for LoRaWAN – Getting Started
Getting started with a LoRaWAN network setup is easy. You can find AWS IoT Core for LoRaWAN qualified gateways and developer kits from the AWS Partner Device Catalog. AWS qualified gateways and developer kits are pre-tested and come with a step by step guide from the manufacturer on how to connect it with AWS IoT Core for LoRaWAN.

In the AWS IoT Core console, you can register the gateways by providing a gateway’s unique identifier (provided by the gateway vendor) and selecting a LoRa frequency band. To register devices, you can enter the device credentials (identifiers and security keys provided by the device vendor) in the console.

Each device has a Device Profile that specifies the device capabilities and boot parameters the LNS requires to set up LoRaWAN radio access service. Using the console, you can select a pre-populated Device Profile or create a new one.

A destination automatically routes messages from LoRaWAN devices to the AWS IoT Rules Engine. Once a destination is created, you can use it to map multiple LoRaWAN devices to the same IoT rule. You can write rules using simple SQL queries to transform and act on the device data, like converting data from a proprietary binary format to JSON, raising alerts, or routing it to other AWS services like Amazon Simple Storage Service (Amazon S3). From the console, you can also query metrics for connected devices and gateways to troubleshoot connectivity issues.
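
For example, a small AWS Lambda function invoked by such a rule could decode a binary uplink into JSON before it is routed onwards. The payload layout below (a 16-bit temperature and a 16-bit humidity value) and the PayloadData field name are assumptions for illustration only.

import base64
import json

def lambda_handler(event, context):
    # LoRaWAN uplinks arrive with the device payload base64-encoded;
    # the exact event shape depends on how your rule forwards the message
    raw = base64.b64decode(event.get("PayloadData", ""))

    # Hypothetical sensor encoding: signed temperature x10, unsigned humidity x10
    temperature = int.from_bytes(raw[0:2], "big", signed=True) / 10.0
    humidity = int.from_bytes(raw[2:4], "big") / 10.0

    decoded = {"temperature_c": temperature, "humidity_pct": humidity}
    print(json.dumps(decoded))
    return decoded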

Available Now
AWS IoT Core for LoRaWAN is available today in US East (N. Virginia) and Europe (Ireland) Regions. With pay-as-you-go pricing and no monthly commitments, you can connect and scale LoRaWAN device fleets reliably, and build applications with AWS services quickly and efficiently. For more information, see the pricing page.

To get started, buy an AWS qualified LoRaWAN developer kit and launch the Getting Started experience in the AWS Management Console. To learn more, visit the developer guide. Give this a try, and please send us feedback either through your usual AWS Support contacts or the AWS forum for AWS IoT.

Learn all the details about AWS IoT Core for LoRaWAN and get started with the new feature today.

Channy

Amazon SageMaker Edge Manager Simplifies Operating Machine Learning Models on Edge Devices

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/amazon-sagemaker-edge-manager-simplifies-operating-machine-learning-models-on-edge-devices/

Today, I’m extremely happy to announce Amazon SageMaker Edge Manager, a new capability of Amazon SageMaker that makes it easier to optimize, secure, monitor, and maintain machine learning models on a fleet of edge devices.

Edge computing is certainly one of the most exciting developments in information technology. Indeed, thanks to continued advances in compute, storage, networking, and battery technology, organizations routinely deploy large numbers of embedded devices anywhere on the planet for a wide range of industry applications: manufacturing, energy, agriculture, healthcare, and more. Ranging from simple sensors to large industrial machines, the devices have a common purpose: capture data, analyze it, and act on it, for example send an alert if an unwanted condition is detected.

As machine learning (ML) demonstrated its ability to solve a wide range of business problems, customers tried to apply it to edge applications, training models in the cloud and deploying them at the edge in an effort to extract deeper insights from local data. However, given the remote and constrained nature of edge devices, deploying and managing models at the edge is often quite difficult.

For example, a complex model can be too large to fit, forcing customers to settle for a smaller and less accurate model. Also, predicting with several models on the same device (say, to detect different types of anomalies) may require additional code to load and unload models on demand, in order to conserve hardware resources. Finally, monitoring prediction quality is a major concern, as the real world will always be more complex and unpredictable than any training set can anticipate.
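
As a rough illustration of the bookkeeping involved, loading and unloading models on demand often boils down to a small cache like the hypothetical sketch below, where the load and unload callbacks stand in for whatever runtime the device actually uses.

from collections import OrderedDict

class ModelCache:
    """Keep at most `capacity` models in memory, evicting the least recently used."""

    def __init__(self, capacity, load_fn, unload_fn):
        self.capacity = capacity
        self.load_fn = load_fn      # e.g. reads a model file into memory
        self.unload_fn = unload_fn  # e.g. frees the runtime's resources
        self.models = OrderedDict()

    def get(self, name):
        if name in self.models:
            self.models.move_to_end(name)      # mark as most recently used
            return self.models[name]
        if len(self.models) >= self.capacity:
            _, old_model = self.models.popitem(last=False)
            self.unload_fn(old_model)          # free memory before loading more
        model = self.load_fn(name)
        self.models[name] = model
        return model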

Customers asked us to help them solve these challenges, and we got to work.

Announcing Amazon SageMaker Edge Manager
Amazon SageMaker Edge Manager makes it easy for ML edge developers to use the same familiar tools in the cloud or on edge devices. It reduces the time and effort required to get models to production, while continuously monitoring and improving model quality across your device fleet.

Starting from a model that you trained or imported in Amazon SageMaker, SageMaker Edge Manager first optimizes it for your hardware platform using Amazon SageMaker Neo. Launched two years ago, Neo converts models into an efficient common format which is executed on the device by a low footprint runtime. Neo currently supports devices based on chips manufactured by Ambarella, ARM, Intel, NVIDIA, NXP, Qualcomm, TI, and Xilinx.

Then, SageMaker Edge Manager packages the model, and stores it in Amazon Simple Storage Service (S3), where it can be deployed to your devices. In fact, you can deploy multiple models, loading and predicting with a runtime optimized for your hardware of choice.

On-device models are managed by the SageMaker Edge Manager agent, which communicates with the AWS Cloud for model deployment, and with your application for model management. Indeed, you can integrate this agent with your application, so that it may automatically load and unload models according to your prediction requests. This enables a variety of scenarios, such as freeing all resources for a large model whenever needed, or working with a collection of smaller models that cohabit in memory.

Lenovo, the #1 global PC maker, recently incorporated Amazon SageMaker into its latest predictive maintenance offering. Igor Bergman, Lenovo Vice President, Cloud & Software of PCs and Smart Devices, told us: “At Lenovo, we’re more than a hardware provider and are committed to being a trusted partner in transforming customers’ device experience and delivering on their business goals. Lenovo Device Intelligence is a great example of how we’re doing this with the power of machine learning, enhanced by Amazon SageMaker. With Lenovo Device Intelligence, IT administrators can proactively diagnose PC issues and help predict potential system failures before they occur, helping to decrease downtime and increase employee productivity. By incorporating Amazon SageMaker Neo, we’ve already seen a substantial improvement in the execution of our on-device predictive models – an encouraging sign for the new Amazon SageMaker Edge Manager that will be added in the coming weeks. SageMaker Edge Manager will help eliminate the manual effort required to optimize, monitor, and continuously improve the models after deployment. With it, we expect our models will run faster and consume less memory than with other comparable machine learning platforms. As we extend AI to new applications across the Lenovo services portfolio, we will continue to require a high-performance pipeline that is flexible and scalable both in the cloud and on millions of edge devices. That’s why we selected the Amazon SageMaker platform. With its rich edge-to-cloud and CI/CD workflow capabilities, we can effectively bring our machine learning models to any device workflow for much higher productivity.

Getting Started
As you can see, SageMaker Edge Manager makes it easier to work with ML models deployed on edge devices. It’s available today in the US East (N. Virginia), US West (Oregon), US East (Ohio), Europe (Ireland), Europe (Frankfurt), and Asia Pacific (Tokyo) regions.

Sample notebooks are available to get you started right away. Give them a try, and let us know what you think.

We’re always looking forward to your feedback, either through your usual AWS support contacts, or on the AWS Forum for SageMaker.

– Julien

New – Amazon Lookout for Equipment Analyzes Sensor Data to Help Detect Equipment Failure

Post Syndicated from Harunobu Kameda original https://aws.amazon.com/blogs/aws/new-amazon-lookout-for-equipment-analyzes-sensor-data-to-help-detect-equipment-failure/

Companies that operate industrial equipment are constantly working to improve operational efficiency and avoid unplanned downtime due to component failure. They invest heavily and repeatedly in physical sensors (tags), data connectivity, data storage, and building dashboards over the years to monitor the condition of their equipment and get real-time alerts. The primary data analysis methods are single-variable threshold and physics-based modeling approaches, and while these methods are effective in detecting specific failure types and operating conditions, they can often miss important information that could be detected by deriving multivariate relationships for each piece of equipment.

With machine learning, more powerful technologies have become available that can provide data-driven models that learn from an equipment’s historical data. However, implementing such machine learning solutions is time-consuming and expensive owing to capital investment and training of engineers.

Today, we are happy to announce Amazon Lookout for Equipment, an API-based machine learning (ML) service that detects abnormal equipment behavior. With Lookout for Equipment, customers can bring in historical time series data and past maintenance events generated from industrial equipment that can have up to 300 data tags from components such as sensors and actuators per model. Lookout for Equipment automatically tests the possible combinations and builds an optimal machine learning model to learn the normal behavior of the equipment. Engineers don’t need machine learning expertise and can easily deploy models for real-time processing in the cloud.

Customers can then easily perform ML inference to detect abnormal behavior of the equipment. The results can be integrated into existing monitoring software or AWS IoT SiteWise Monitor to visualize the real-time output or to receive alerts if an asset tends toward anomalous conditions.

How Lookout for Equipment Works
Lookout for Equipment reads directly from Amazon S3 buckets. Customers can publish their industrial data in S3 and leverage Lookout for Equipment for model development. A user determines the value or time period to be used for training and assigns an appropriate label. Given this information, Lookout for Equipment launches a task to learn and creates the best ML model for each customer.

Because Lookout for Equipment is an automated machine learning tool, it gets smarter over time as users retrain their models with new data. This is useful for re-creating models when new, previously unseen failures occur, or when the model drifts over time. Once the model is trained and ready for inference, Lookout for Equipment provides real-time analysis.

With the equipment data being published to S3, the user can schedule inference at a frequency ranging from every 5 minutes to every hour. When the user data arrives in S3, Lookout for Equipment fetches the new data on the desired schedule, performs data inference, and stores the results in another S3 bucket.

Set up Lookout for Equipment with these simple steps:

  1. Upload data to S3 buckets
  2. Create datasets
  3. Ingest data
  4. Create a model
  5. Schedule inference (if you need real-time analysis)

1. Upload data
You need to upload tag data from equipment to any S3 bucket.

2. Create Datasets

Select Create dataset, set the Dataset name, and set the Data Schema. The data schema is like a data design document that defines the data to be fed in later. Then select Create.

creating datasets console

3. Ingest data
After a dataset is created, the next step is to ingest data. If you are familiar with Amazon Personalize or Amazon Forecast, doesn’t this screen feel familiar? Yes, Lookout for Equipment is as easy to use as those are.

Select Ingest data.

Ingesting data console

Specify the S3 bucket location where you uploaded your data, and an IAM role. The IAM role has to have a trust relationship to “lookoutequipment.amazonaws.com”. You can use the following policy file for the test.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lookoutequipment.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

The data format in the S3 bucket has to match the Data Schema you set up in step 2. Please check our technical documents for more detail. Ingesting data takes a few minutes to tens of minutes depending on your data volume.

4. Create a model
After data ingestion is complete, you can now train your own ML model. Select Create new model. Fields shows a list of fields in the ingested data. By default, no field is selected. You can select the fields you want Lookout for Equipment to learn from. Lookout for Equipment automatically finds and trains on correlations across the specified fields and creates a model.

Image illustrates setting up fields.

If you know that your data includes some unusual periods, you can optionally set windows to exclude that data.

setting up maintenance window

Optionally, you can divide the ingested data into a training period and an evaluation period. The data in the evaluation period is checked against the trained model.

setting up evaluation window

Once you select Create, Lookout for Equipment starts to train your model. This process takes minutes to hours depending on your data volume. After training is finished, you can evaluate your model with the evaluation period data.

model performance console

5. Schedule Inference
Now it is time to analyze your real-time data. Select Schedule Inference, and set up your S3 buckets for input.

setting up input S3 bucket

You can also set the Data upload frequency, which is effectively the inference frequency, and the Offset delay time. Then, you need to set up Output data, which is where Lookout for Equipment writes the results of inference.

setting up inferenced output S3 bucket
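
The same scheduling step can also be scripted. Below is a minimal boto3 sketch; the model name, bucket names, and role ARN are placeholders, and the parameter names should be checked against the current API reference.

import boto3

lookout = boto3.client("lookoutequipment")

# Placeholder resource names – replace with your own model, buckets, and role
response = lookout.create_inference_scheduler(
    ModelName="my-pump-model",
    InferenceSchedulerName="my-pump-scheduler",
    DataUploadFrequency="PT5M",  # fetch and score new data every 5 minutes
    DataInputConfiguration={
        "S3InputConfiguration": {"Bucket": "example-input-bucket", "Prefix": "pump/"}
    },
    DataOutputConfiguration={
        "S3OutputConfiguration": {"Bucket": "example-output-bucket", "Prefix": "results/"}
    },
    RoleArn="arn:aws:iam::123456789012:role/LookoutEquipmentRole",
)
print(response["InferenceSchedulerArn"])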

Amazon Lookout for Equipment is In Preview Today
Amazon Lookout for Equipment is in preview today in the US East (N. Virginia), Asia Pacific (Seoul), and Europe (Ireland) Regions, and you can see the documentation here.

– Kame

Amazon Monitron, a Simple and Cost-Effective Service Enabling Predictive Maintenance

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/amazon-monitron-a-simple-cost-effective-service-enabling-predictive-maintenance/

Today, I’m extremely happy to announce Amazon Monitron, a condition monitoring service that detects potential failures and allows users to track developing faults, enabling you to implement predictive maintenance and reduce unplanned downtime.

True story: A few months ago, I bought a new washing machine. As the delivery man was installing it in my basement, we were chatting about how unreliable these things seemed to be nowadays; never lasting more than a few years. As the gentleman made his way out, I pointed to my aging and poorly maintained water heater, telling him that I had decided to replace it in the coming weeks and that he’d be back soon to install a new one. Believe it or not, it broke down the next day. You can laugh at me, it’s OK. I deserve it for not planning ahead.

As annoying as this minor domestic episode was, it’s absolutely nothing compared to the tremendous loss of time and money caused by the unexpected failure of machines located in industrial environments, such as manufacturing production lines and warehouses. Any proverbial grain of sand can cause unplanned outages, and Murphy’s Law has taught us that they’re likely to happen in the worst possible configuration and at the worst possible time, resulting in severe business impacts.

To avoid breakdowns, reliability managers and maintenance technicians often combine four strategies:

  1. Run to failure: where equipment is operated without maintenance until it no longer operates reliably. When the repair is completed, equipment is returned to service; however, the condition of the equipment is unknown and failure is uncontrolled.
  2. Planned maintenance: where predefined maintenance activities are performed on a periodic or metered basis, regardless of condition. The effectiveness of planned maintenance activities is dependent on the quality of the maintenance instructions and the planned cycle. It risks equipment being both over- and under-maintained, incurring unnecessary cost or still experiencing breakdowns.
  3. Condition-based maintenance: where maintenance is completed when the condition of a monitored component breaches a defined threshold. Monitoring physical characteristics such as tolerance, vibration, or temperature is a more effective strategy, requiring less maintenance and reducing maintenance costs.
  4. Predictive maintenance: where the condition of components is monitored, potential failures detected and developing faults tracked. Maintenance is planned at a time in the future prior to expected failure and when the total cost of maintenance is most cost-effective.

Condition-based maintenance and predictive maintenance require sensors to be installed on critical equipment. These sensors measure and capture physical quantities such as temperature and vibration, whose change is a leading indicator of a potential failure or a deteriorating condition.

As you can guess, building and deploying such maintenance systems can be a long, complex, and costly project involving bespoke hardware, software, infrastructure, and processes. Our customers asked us for help, and we got to work.

Introducing Amazon Monitron
Amazon Monitron is an easy and cost-effective condition monitoring service that allows you to monitor the condition of equipment in your facilities, enabling the implementation of a predictive maintenance program.

Illustration

Setting up Amazon Monitron is extremely simple. You first install Monitron sensors that capture vibration and temperature data from rotating machines, such as bearings, gearboxes, motors, pumps, compressors, and fans. Sensors send vibration and temperature measurements hourly to a nearby Monitron gateway, using Bluetooth Low Energy (BLE) technology allowing the sensors to run for at least three years. The Monitron gateway is itself connected to your WiFi network, and sends sensor data to AWS, where it is stored and analyzed using machine learning and ISO 20816 vibration standards.

As communication is infrequent, up to 20 sensors can be connected to a single gateway, which can be located up to 30 meters away (depending on potential interference). Thanks to the scalability and cost efficiency of Amazon Monitron, you can deploy as many sensors as you need, including on pieces of equipment that until now weren’t deemed critical enough to justify the cost of traditional sensors. As with any data-driven application, security is our No. 1 priority. The Monitron service authenticates the gateway and the sensors to make sure that they’re legitimate. Data is also encrypted end-to-end, without any decryption taking place on the gateway.

Setting up your gateways and sensors only requires installing the Monitron mobile application on an Android mobile device with Bluetooth support for gateway setup, and NFC support for sensor setup. This is an extremely simple process, and you’ll be monitoring in minutes. Technicians will also use the mobile application to receive alerts indicating abnormal machine conditions. They can acknowledge these alerts and provide feedback to improve their accuracy (say, to minimize false alerts and missed anomalies).

Customers are already using Amazon Monitron today, and here are a couple of examples.

Fender Musical Instruments Corporation is an iconic brand and a leading manufacturer of stringed instruments and amplifiers. Here’s what Bill Holmes, Global Director of Facilities at Fender, told us: “Over the past year we have partnered with AWS to help develop a critical but sometimes overlooked part of running a successful manufacturing business which is knowing the condition of your equipment. For manufacturers worldwide, uptime of equipment is the only way we can remain competitive with a global market. Ensuring equipment is up and running and not being surprised by sudden breakdowns helps get the most out of our equipment. Unplanned downtime is costly both in loss of production and labor due to the firefighting nature of the breakdown. The Amazon Monitron condition monitoring system has the potential of giving both large industry as well as small ‘mom and pop shops’ the ability to predict failures of their equipment before a catastrophic breakdown shuts them down. This will allow for a scheduled repair of failing equipment before it breaks down.

GE Gas Power is a leading provider of power generation equipment, solutions and services. It operates many manufacturing sites around the world, in which much of the manufacturing equipment is not connected nor monitored for health. Magnus Akesson, CIO at GE Gas Power Manufacturing says: “Naturally, we can reduce both maintenance costs and downtime, if we can easily and cheaply connect and monitor these assets at scale. Additionally, we want to take advantage of advanced algorithms to look forward, to know not just the current state but also predict future health and to detect abnormal behaviors. This will allow us to transition from time-based to predictive and prescriptive maintenance practices. Using Amazon Monitron, we are now able to quickly retrofit our assets with sensors and connect them to real-time analytics in the AWS cloud. We can do this without having to require deep technical skills or having to configure our own IT and OT networks. From our initial work on vibration-prone tumblers, we are seeing this vision come to life at an amazing speed: the ease-of-use for the operators and maintenance team, the simplicity, and the ability to implement at scale is extremely attractive to GE. During our pilot, we were also delighted to see one-click capabilities for updating the sensors via remote Over the Air (OTA) firmware upgrades, without having to physically touch the sensors. As we grow in scale, this is a critical capability in order to be able to support and maintain the fleet of sensors.”

Now, let me show you how to get started with Amazon Monitron.

Setting up Amazon Monitron
First, I open the Monitron console. In just a few clicks, I create a project, and an administrative user allowed to manage it. Using a link provided in the console, I download and install the Monitron mobile application on my Android phone. Opening the app, I log in using my administrative credentials.

The first step is to create a site describing assets, sensors, and gateways. I name it “my-thor-project.”

Application screenshot

Let’s add a gateway. Enabling Bluetooth on my phone, I press the pairing button on the gateway.

Application screenshot

The name of the gateway appears immediately.

Application screenshot

I select the gateway, and I configure it with my WiFi credentials to let it connect to AWS. A few seconds later, the gateway is online.

Application screenshot

My next step is to create an asset that I’d like to monitor, say a process water pump set consisting of a motor and a pump. I first create the asset itself, simply defining its name and the appropriate ISO 20816 class (a standard for measurement and evaluation of machine vibration).

Application screenshot

Then, I add a sensor for the motor.

Application screenshot

I start by physically attaching the sensor to the motor using the suggested adhesive. Next, I specify a sensor position, enable the NFC on my smartphone, and tap the Monitron sensor that I attached to the motor with my phone. Within seconds, the sensor is commissioned.

Application screenshot

I repeat the same operation for the pump. Looking at my asset, I see that both sensors are operational.

Application screenshot

They are now capturing temperature and vibration information. Although there isn’t much to see for the moment, graphs are available in the mobile app.

Application screenshot

Over time, the gateway will keep sending this data securely to AWS, where it will be analyzed for early signs of failure. Should either of my assets exhibit these, I would receive an alert in the mobile application, where I could visualize historical data, and decide what the best course of action would be.

Getting Started
As you can see, Monitron makes it easy to deploy sensors enabling predictive maintenance applications. The service is available today in the US East (N. Virginia) region, and using it costs $50 per sensor per year.

If you’d like to evaluate the service, the Monitron Starter Kit includes everything you need (a gateway with a mounting kit, five sensors, and a power supply), and it’s available for $715. Then, you can scale your deployment with additional sensors, which you can buy in 5-packs for $575.

Starter kit picture

Give Amazon Monitron a try, and let us know what you think. We’re always looking forward to your feedback, either through your usual AWS support contacts, or on the AWS Forum for Monitron.

– Julien

Special thanks to my colleague Dave Manley for taking the time to educate me on industrial maintenance operations.

Field Notes: Integrating IoT and ITSM using AWS IoT Greengrass and AWS Secrets Manager – Part 2

Post Syndicated from Gary Emmerton original https://aws.amazon.com/blogs/architecture/field-notes-integrating-iot-and-itsm-using-aws-iot-greengrass-and-aws-secrets-manager-part-2/

In part 1 of this blog I introduced the need for organizations to securely connect thousands of IoT devices with many different systems in the hyperconnected world that exists today, and how that can be addressed using AWS IoT Greengrass and AWS Secrets Manager.  We walked through the creation of ServiceNow credentials in AWS Secrets Manager, the creation of IAM roles and the Lambda functions that will run on our edge device (a Raspberry Pi).

In this second part of the blog, we will set up AWS IoT Greengrass on our Raspberry Pi, and AWS IoT Core, so that we can run the AWS Lambda functions and access our ServiceNow credentials, retrieved securely from AWS Secrets Manager.

Setting up AWS IoT Core and AWS IoT Greengrass

The overall sequence for configuring AWS IoT Core and AWS IoT Greengrass is:

  • Create a certificate and an IoT Thing, and link them
  • Create AWS IoT Greengrass group
  • Associate IAM role to the AWS IoT Greengrass group
  • Create and attach a policy to the certificate
  • Create an AWS IoT Greengrass Resource Definition for our ‘Secret’
  • Create an AWS IoT Greengrass Function Definition for our Lambda functions
  • Create an AWS IoT Greengrass Subscription Definition for IoT Topics to be used
  • Finally associate our Resource, Function and Subscription Definitions with our AWS IoT Greengrass Core

Steps

For this walkthrough, I have selected the AWS region “eu-west-1”, however, feel free to use other Regions where AWS IoT Core and AWS IoT Greengrass are available.

First, let’s install Greengrass on the Raspberry Pi:

  • Follow the instructions to configure the prerequisites on the Raspberry Pi
  • Then we download the AWS IoT Greengrass software
  • And then we extract the AWS IoT Greengrass software using the following command (note, this command is for version 1.10.0 of Greengrass and will change as later versions are released):

sudo tar -xzvf greengrass-linux-armv6l-1.10.0.tar.gz -C /

Note that the version of AWS IoT Greengrass must be compatible with the version of the AWS IoT Greengrass Core SDK that is installed. Check the documentation to identify which versions are compatible, and use sudo pip3 install greengrasssdk==<version_number> to install the SDK version that matches the version of AWS IoT Greengrass that we installed.

Our AWS IoT Greengrass core will authenticate with AWS IoT Core in AWS using certificates, so we need to generate these first using the following command:

aws iot create-keys-and-certificate --set-as-active --certificate-pem-outfile "iot-ge.cert.pem" --public-key-outfile "iot-ge.public.key" --private-key-outfile "iot-ge.private.key"

This command will generate three files containing the private key, public key and certificate.  All of these files need to be copied to the /greengrass/certs folder on the Raspberry Pi.  Also, the output of the preceding command will give the ARN of the certificate – we need to make a note of this ARN as we will use it in the next steps.

We also need to download a copy of the Amazon Root CA into the /greengrass/certs folder using the command below:

sudo wget -O root.ca.pem https://www.amazontrust.com/repository/AmazonRootCA1.pem

For the next step we need our AWS account number and IoT Host address unique to our account – we get the IoT Host address using the command:

aws iot describe-endpoint --endpoint-type iot:Data-ATS

Now we need to create a config.json file on the Raspberry Pi in the /greengrass/config folder, with the account number and IoT Host address obtained in the previous step:

{
  "coreThing" : {
    "caPath" : "root.ca.pem",
    "certPath" : "iot-ge.cert.pem",
    "keyPath" : "iot-ge.private.key",
    "thingArn" : "arn:aws:iot:eu-west-1:<aws_account_number>:thing/IoT-blog_Core",
    "iotHost" : "<endpoint_address>",
    "ggHost" : "greengrass-ats.iot.eu-west-1.amazonaws.com",
    "keepAlive" : 600
  },
  "runtime" : {
    "cgroup" : {
      "useSystemd" : "yes"
    },
    "allowFunctionsToRunAsRoot" : "yes"
  },
  "managedRespawn" : false,
  "crypto" : {
    "principals" : {
      "SecretsManager" : {
        "privateKeyPath" : "file:///greengrass/certs/iot-ge.private.key"
      },
      "IoTCertificate" : {
        "privateKeyPath" : "file:///greengrass/certs/iot-ge.private.key",
        "certificatePath" : "file:///greengrass/certs/iot-ge.cert.pem"
      }
    },
    "caPath" : "file:///greengrass/certs/root.ca.pem"
  }
}

Note that the line "allowFunctionsToRunAsRoot" : "yes" allows the Lambda functions to easily access the SenseHat on the Raspberry Pi. This configuration should normally be avoided in Production environments for security reasons but has been used here for simplicity.

Next we create the IoT Thing to represent our Raspberry Pi to match the entry we added into the config.json file previously:

aws iot create-thing --thing-name IoT-blog_Core

Now that our config.json file is in place and our IoT ‘thing’ created we can start the AWS IoT Greengrass software using the following commands:

cd /greengrass/ggc/core/
sudo ./greengrassd start

Then we attach the certificate to our new Thing – we need the ARN of the certificate that was noted in the earlier steps when we created the certificates:

aws iot attach-thing-principal --thing-name "IoT-blog_Core" --principal "<certificate_arn>"

Now we create the AWS IoT Greengrass group – make a note of the Group ID in the output of this command as we use it later:

aws greengrass create-group --name IoT-blog-group

Next we create the AWS IoT Greengrass Core definition file – create this using a text editor and save as core-def.json

{
  "Cores": [
    {
      "CertificateArn": "<certificate_arn>",
      "Id": "<IoT Thing Name>",
      "SyncShadow": true,
      "ThingArn": "<thing_arn>"
    }
  ]
}

Then, using the file we just created, we create the core definition using the following command:

aws greengrass create-core-definition --name "IoT-blog_Core" --initial-version file://core-def.json

Now we associate the AWS IoT Greengrass core with the AWS IoT Greengrass group – we need the LatestVersionARN from the output of the command above and the group ID of your existing AWS IoT Greengrass group (in the output from the command for creation of the group in previous steps):

aws greengrass create-group-version --group-id "<greengrass_group_id>" --core-definition-version-arn "<core_definition_version_arn>"

Then we associate the IAM role (created earlier) with the AWS IoT Greengrass group:

aws greengrass associate-role-to-group --group-id "<greengrass_group_id>" --role-arn "arn:aws:iam::<aws_account_number>:role/IoTGGRole"

We need to create a policy to associate with the certificate so that our AWS IoT Greengrass Core (authenticated/authorized by our certificates) has rights to interact with AWS IoT Core.  To do this we create the policy.json file:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iot:Publish",
        "iot:Subscribe",
        "iot:Connect",
        "iot:Receive"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "iot:GetThingShadow",
        "iot:UpdateThingShadow",
        "iot:DeleteThingShadow"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "greengrass:*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Then create the policy from the policy file using the command below:

aws iot create-policy --policy-name myGGPolicy --policy-document file://policy.json

And finally attach our new policy to the certificate – as the certificate is attached to our AWS IoT Greengrass Core, this gives the rights defined in the policy to our AWS IoT Greengrass Core;

aws iot attach-policy --target "<certificate_arn>" --policy-name "myGGPolicy"

Now we have the AWS IoT Greengrass Core and permissions in place, it’s time to add our Secret as a resource for AWS IoT Greengrass.

First, we need to create a resource definition that refers to the ARN of the secret we created earlier.  Get the ARN of the secret using the following command:

aws secretsmanager describe-secret --secret-id "greengrass-snow-creds"

And then we create a text file containing the following and save it as resource.json:

{
"Resources": [
    {
      "Id": "SNOW-Credentials",
      "Name": "SNOW-Credentials",
      "ResourceDataContainer": {
        "SecretsManagerSecretResourceData": {
          "ARN": "<secret_arn>"
        }
      }
    }
  ]
}

Now we run the command to create the resource reference to the Secret in IoT:

aws greengrass create-resource-definition --name "MySNOWSecret" --initial-version file://resource.json

Note the Resource ID from the output, as it has to be added to the Lambda definition JSON file in the next steps.  The function definition file contains the details of the Lambda function(s) that we will attach to our AWS IoT Greengrass group.  We create a text file with the content below and save it as lambda-def.json.

We also specify a couple of variables in the definition file; these are the same as the environment variables that can be specified for Lambda, but they make the variables available in AWS IoT Greengrass.

Note, if we specify environment variables for the functions in the Lambda console then these will NOT be available when the function is running under AWS IoT Greengrass.  We will need our ServiceNow API URL to add to the configuration below, and this will be in the form of https://devXXXXX.service-now.com/api/now/table/incident, where XXXXX is the developer instance number assigned by ServiceNow when our instance is created.

We need the ARNs of the Lambda functions that we created in part 1 of the blog – these appear in the output after successfully creating the functions from the command line, or can be obtained using the aws lambda list-functions command – we need to have the ‘:1’ at the end of the ARN as AWS IoT Greengrass needs to reference published function versions.

{
  "DefaultConfig": {
    "Execution": {
      "IsolationMode": "NoContainer",
      "RunAs": {
        "Gid": 0,
        "Uid": 0
      }
    }
  },
  "Functions": [
    {
      "FunctionArn": "<lambda_function1_arn>:1",
      "FunctionConfiguration": {
        "EncodingType": "json",
        "Environment": {
          "Execution": {
            "IsolationMode": "NoContainer"
          },
          "Variables": { 
            "tempLimit": "30",
            "humidLimit": "50"
          }
        },
        "ExecArgs": "string",
        "Executable": "lambda_function.lambda_handler",
        "Pinned": true,
        "Timeout": 10
      },
    "Id": "sensorLambda"
    },
    {
      "FunctionArn": "<lambda_function2_arn>:1",
      "FunctionConfiguration": {
        "EncodingType": "json",
        "Environment": {
          "Execution": {
            "IsolationMode": "NoContainer"
          },
          "ResourceAccessPolicies": [
            {
              "Permission": "ro",
              "ResourceId": "SNOW-Credentials"
            }
          ],
          "Variables": { 
            "snowUrl": "<service_now_api_url>"
          }
        },
        "ExecArgs": "string",
        "Executable": "lambda_function.lambda_handler",
        "Pinned": false,
        "Timeout": 10
      },
    "Id": "anomalyLambda"
    }
  ]
}

The Lambda functions now need to be registered within our AWS IoT Greengrass core, using the definition file just created and the following command:

aws greengrass create-function-definition --name "IoT-blog-lambda" --initial-version file://lambda-def.json

Create Subscriptions

We now need to create some IoT topics to pass data between the two Lambda functions and also to submit all sensor data to AWS IoT Core, which gives us visibility of the successful collection of sensor data.

First, let’s create a subscription configuration file (subscriptions.json) for sensor data and anomaly data:

{
  "Subscriptions": [
    {
      "Id": "SensorData",
      "Source": "<lambda_function1_arn>:1",
      "Subject": "IoTBlog/sensorData",
      "Target": "cloud"
    },
    {
      "Id": "AnomalyData",
      "Source": "<lambda_function1_arn>:1",
      "Subject": "IoTBlog/anomaly",
      "Target": "<lambda_function2_arn>:1"
    },
    {
      "Id": "AnomalyDataB",
      "Source": "<lambda_function1_arn>:1",
      "Subject": "IoTBlog/anomaly",
      "Target": "cloud"
    }
  ]
}

And next, we run the command to create the subscription from this configuration:

aws greengrass create-subscription-definition --name "IoT-sensor-subs" --initial-version file://subscriptions.json

Update AWS IoT Greengrass Group Associations and Deploy

Now that the functions, subscriptions and resources have been defined, we run the following command to update our AWS IoT Greengrass group to the new version with those components included:

aws greengrass create-group-version --group-id <gg_group_id> --core-definition-version-arn "<core_def_version_arn>" --function-definition-version-arn "<function_def_version_arn>" --resource-definition-version-arn "<resource_def_version_arn>" --subscription-definition-version-arn "<subscription_def_version_arn>"

And finally, we can deploy our configuration.  Use the following command to deploy the Greengrass group to our device, using the group-version-id from the output of the previous command and also the group-id:

aws greengrass create-deployment --deployment-type NewDeployment --group-id <gg_group_id> --group-version-id <gg_group_version_id>
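
To confirm that the deployment reached the device, you can poll the deployment status. Below is a sketch using boto3; the group ID and the DeploymentId returned by the previous command are placeholders:

import time
import boto3

gg = boto3.client("greengrass")
group_id = "<gg_group_id>"
deployment_id = "<deployment_id>"  # returned by create-deployment

# Poll until the deployment finishes
while True:
    status = gg.get_deployment_status(GroupId=group_id, DeploymentId=deployment_id)
    print(status["DeploymentStatus"])  # Building | InProgress | Success | Failure
    if status["DeploymentStatus"] in ("Success", "Failure"):
        break
    time.sleep(5)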

Summarized below is the integration between the different functions and components that we have now deployed to get from our sensor data through to an incident being raised in ServiceNow:

[Diagram: integration of the deployed components, from sensor data on the Raspberry Pi to an incident in ServiceNow]

Create an Incident

Everything is now set up from an IoT perspective, so we can attempt to breach a sensor threshold and trigger the creation of an incident in ServiceNow. To trigger the incident creation, let’s raise the humidity around the sensor so that it breaches the threshold defined in the environment variables of the Lambda function.

Under normal conditions we will just see the data published by the first Lambda function in the IoTBlog/sensorData topic:

[Screenshot: sensor data published to the IoTBlog/sensorData topic]

However, when a threshold is breached (in our example, humidity above 50%), the data is published to the IoTBlog/anomaly topic as shown below:

[Screenshot: anomaly data published to the IoTBlog/anomaly topic]
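
For orientation, here is a simplified, hypothetical sketch of the routing logic in the first Lambda function (the actual code is in part 1 of the blog); it uses the topics from subscriptions.json and the environment variables from lambda-def.json, and the reading field names are assumptions:

import json
import os
import greengrasssdk

client = greengrasssdk.client("iot-data")
TEMP_LIMIT = float(os.environ.get("tempLimit", "30"))
HUMID_LIMIT = float(os.environ.get("humidLimit", "50"))

def publish_reading(reading):
    payload = json.dumps(reading)
    # Every reading goes to the sensor data topic
    client.publish(topic="IoTBlog/sensorData", payload=payload)
    # Threshold breaches are also routed to the anomaly topic
    if reading["temperature"] > TEMP_LIMIT or reading["humidity"] > HUMID_LIMIT:
        client.publish(topic="IoTBlog/anomaly", payload=payload)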

Via the AWS IoT Greengrass subscriptions created earlier, this message arriving in the anomaly topic also triggers the second Lambda function to create the ticket in ServiceNow.
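
Likewise, here is a simplified, hypothetical sketch of how the second function can retrieve the ServiceNow credentials from the attached secret resource and call the incident API (the actual code is in part 1); the secret key names and incident fields are assumptions, and the requests library must be packaged with the function:

import json
import os
import greengrasssdk
import requests

secrets = greengrasssdk.client("secretsmanager")

def handle_anomaly(event):
    # Read the local copy of the secret deployed to the Greengrass core
    secret = json.loads(
        secrets.get_secret_value(SecretId="greengrass-snow-creds")["SecretString"]
    )
    response = requests.post(
        os.environ["snowUrl"],                      # ServiceNow incident API URL
        auth=(secret["user"], secret["password"]),  # assumed key names in the secret
        headers={"Content-Type": "application/json"},
        json={"short_description": "IoT anomaly detected", "description": json.dumps(event)},
    )
    print(response.status_code)                     # 201 means the incident was created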

The log for the second Lambda function on AWS IoT Greengrass (stored in /greengrass/ggc/var/log/user/<region>/<aws_account_number>/ on the Raspberry Pi) will show a ‘201’ return code if the incident is successfully created in ServiceNow.

[Screenshot: Lambda log showing a 201 response from ServiceNow]

Now let’s log on to ServiceNow and check out our new incident.  Good news, our new incident appears correctly:

When we click on our incident, we can see the detail, including the full data from the IoT topic in the Activities section.

This is only a basic use of the ServiceNow API, and there are many other parameters that you can use to increase the richness of the incident; refer to the ServiceNow API documentation for more details.

Cleaning up

To avoid incurring future charges, delete the resources that you created in the walkthrough.

Conclusion

We have built an IoT device (a Raspberry Pi) running AWS IoT Greengrass and AWS Lambda, with ServiceNow credentials managed in AWS Secrets Manager. Using this, we triggered an anomaly event that automatically created an incident in ServiceNow, directly from the Lambda function running on our Pi. You can use this architecture as the foundation to integrate your edge devices and ITSM solution to automate ticket generation in your organization.

Look out for follow-up blogs that will extend this solution to provide a real-time dashboard for the sensor data and store the sensor data in a Data Lake for historical visualization.

Find out more about deploying Secrets to AWS IoT Greengrass Core.

Check out the AWS IoT Blog for more examples of how to use AWS to integrate your edge devices with the AWS Cloud.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

 

Field Notes: Powering the Connected Vehicle with Amazon Alexa

Post Syndicated from Amit Kumar original https://aws.amazon.com/blogs/architecture/field-notes-powering-the-connected-vehicle-with-amazon-alexa/

Alexa has improved the in-home experience and has the potential to greatly enhance the in-car experience. This blog is a continuation of my previous blog: Field Notes: Implementing a Digital Shadow of a Connected Vehicle with AWS IoT. Multiple OEMs (Original Equipment Manufacturers) showcased this capability during CES 2020. Use cases include: a person sitting in the rear seat can play a song, control HVAC (heating, ventilation, and air conditioning), or pay for gas or coffee, all while using Alexa. In this blog, I cover how you create a connected vehicle that uses Alexa to initiate a command such as “Alexa, open my trunk”.

Solution Architecture

“Alexa, open my trunk”

The preceding architecture shows a message flowing in the following example:

  1. A user of a connected vehicle wants to open their trunk using an Alexa voice command. Alexa identifies the right intent based on the utterance and invokes a Lambda function. The Lambda function updates the device shadow with (desired: {"trunk": "open"}); a minimal sketch of this shadow update follows the list.
  2. The vehicle TCU has registered the callback function shadowRegisterDeltaCallback() and listens on the delta topics for the device shadow by subscribing to them. Whenever there is a difference between the desired and reported state, the registered callback is called and the delta payload is available in the callback. The update performed in #1 is received in the delta callback.
  3. Now, the vehicle must act on the desired state. In this case, it acts on the trunk status change. After performing the required action for the trunk change, the vehicle TCU updates the device shadow with the reported state (reported: {"trunk": "open"}).
  4. The web/mobile app is subscribed to the topic $aws/things/tcu/shadow/update/accepted. Therefore, as soon as the vehicle TCU updates the shadow, the web/mobile app receives the update and synchronizes the UI state.
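
For reference, the shadow update in step 1 can be sketched in Python with boto3 as below; the skill Lambda in this post is actually written in Node.js (shown later), and the thing name tcu comes from the previous blog:

import json
import boto3

# Uses the account's AWS IoT data endpoint for the current region
iot_data = boto3.client("iot-data")

response = iot_data.update_thing_shadow(
    thingName="tcu",
    payload=json.dumps({"state": {"desired": {"trunk": "open"}}}),
)
print(response["payload"].read())  # the accepted shadow document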

As part of the previous blog, we implemented #2, #3, and #4. Let’s implement #1 and incorporate it into the solution.

The source code (vehicle-command) of this blog is available in this code repository.

The Alexa voice command required the implementation of three key areas:

  1. Configure Alexa – which will listen to utterances, identify the right intent, and invoke a Lambda function.
  2. Set up the Lambda function – which will interpret the command and invoke the AWS IoT Core Device Shadow API.
  3. Handle the command at the vehicle TCU and app – the vehicle TCU must register shadowRegisterDeltaCallback() so that any update to the device shadow triggers a callback in which the vehicle performs the actual command and synchronizes the state with the web/mobile app.

Let’s ‘open the trunk’ using an Alexa voice command. First, set up the environment:

  • Open AWS Cloud9 IDE created in an earlier lab and run the following command:

Set up permanent credentials. Note: Alexa doesn’t work with temporary credentials, so configure the ASK command line interface (CLI) with permanent credentials.

  1. Open Cloud9 Preferences by choosing AWS Cloud9 > Preferences, or by clicking the “gear” icon in the upper right corner of the Cloud9 window.
  2. Select “AWS Settings”.
  3. Disable “AWS managed temporary credentials”.
  4. Run aws configure.
  5. Enter the Access Key and Secret Access Key of a user that has the required permissions.
  6. Use us-east-1 as the region. The configuration is stored in ~/.aws/config.

Verify that everything worked by examining the file ~/.aws/credentials. It should resemble the following:

[default]
 aws_access_key_id = <access_key>
 aws_secret_access_key = <secret_key>
 aws_session_token=

Note: remove the aws_session_token line from the credentials file.

Next, install the Alexa CLI:

$ npm install ask-cli --global

Initialize the ASK CLI by issuing the following command. This initializes the ASK CLI with a profile associated with your Amazon developer credentials.

$ ask configure --no-browser

When prompted, confirm that you want to link your AWS account with Alexa:

Do you want to link your AWS account in order to host your Alexa skills? Yes

At the end, the output should look as follows:

------------------------- Initialization Complete -------------------------
Here is the summary for the profile setup:
ASK Profile: default
AWS Profile: default
Vendor ID: MXXXXXXXXXX

As part of the previous blog, you have already cloned the following git repository in the AWS Cloud9 IDE. It contains baseline code to jump-start the project.

$ git clone

Configure Alexa Skills

The Alexa Developer Console GUI can be used, but we are doing this programmatically so it can be done at scale and supports versioning.

1. Open connected-vehicle-lab/vehicle-command/skill-package/skill.json. Two locales, en-US and en-IN, are defined in the base code for the Alexa command. Let’s add the en-GB locale to the JSON file under "manifest"/"publishingInformation"/"locales". Similarly, you can add a locale for your preferred language:

"en-GB": {
"name": "vehicle-command",
"summary": "Control Vehicle using voice command",
"description": "Allow you to control vehicle using voice command",
"examplePhrases": [
    "Alexa open genie",
    "ask genie to lower window",
    "window up"
    ],
"keywords": []
}

If you are inserting this in the middle of the list, make sure the entries are separated by commas.

2. Let’s create a copy of the interaction model connected-vehicle-lab/vehicle-command/skill-package/interactionModels/custom/en-US.json, rename it en-GB.json, and add our intent.

  • We have “invocationName”: “genie”. Here, we are using “genie” as the command to invoke our Alexa skill. You can change it if needed.
  • The key elements in this JSON file are intents, slots, sample utterances, and slot types. Let’s define the slot type t_action_type with the values ‘open’, ‘close’, ‘lock’, and ‘unlock’ under "types": [].
        {
        "name": "t_action_type",
        "values": [
            {
                "name": {
                "value": "unlock"
                }
            },
            {
                "name": {
                "value": "lock"
                }
            },
            {
                "name": {
                "value": "close"
                }
            },
            {
                "name": {
                "value": "open"
                }
            }
          ]
        }
  • Let’s add an intent named ‘TrunkCommandIntent’ under "intents": [] and define sample utterances such as ‘lock my trunk’ and ‘open trunk’. We are using the slot type to simplify the utterances and identify the operation requested by the user.
        {
            "name": "TrunkCommandIntent",
            "slots": [
            {
                "name": "t_action",
                "type": "t_action_type"
            }
            ],
            "samples": [
                "{t_action} trunk",
                "trunk {t_action}",
                "{t_action} my trunk",
                "{t_action} trunk"
            ]
}
  • Now add the same intent, slots, slot type, and sample utterances to the other locale files (en-US.json and en-IN.json) as well.

3. Let’s add the response messages in languageString.js (available at /connected-vehicle-lab/vehicle-command/lambda/custom).

TRUNK_OPEN: 'Trunk Open',
TRUNK_CLOSE: 'Trunk Close' 

If you are inserting these in the middle of the list, make sure the entries are separated by commas.

Set up the Lambda function

1. Add a Lambda function which will be invoked by Alexa. This Lambda function will handle the intent, invoke the AWS IoT Core Device Shadow API, and execute the actual command of trunk open/unlock or lock/close.

  • Open /connected-vehicle-lab/vehicle-command/lambda/custom/index.js and add our TrunkCommandIntent handler:
const TrunkCommandIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'TrunkCommandIntent';
    },
    handle(handlerInput) {
        var t_action_value = handlerInput.requestEnvelope.request.intent.slots.t_action.value;
        console.log(t_action_value);
        var speakOutput;
        const obj = "trunk";
        // "open" and "unlock" open the trunk; "close" and "lock" close it
        if (t_action_value == "unlock" || t_action_value == "open")
        {
            updateDeviceShadow(obj, "open");
            speakOutput = handlerInput.t('TRUNK_OPEN');
        }
        else
        {
            updateDeviceShadow(obj, "close");
            speakOutput = handlerInput.t('TRUNK_CLOSE');
        }
        console.log(speakOutput);
        return handlerInput.responseBuilder
            .speak(speakOutput)
            //.reprompt('add a reprompt if you want to keep the session open for the user to respond')
            .getResponse();
    }
};
  • We have an updateDeviceShadow(obj, command) function which actually invokes the AWS IoT Core Device Shadow API:
function updateDeviceShadow(obj, command)
{
    shadowMessage.state.desired[obj] = command;
    var iotdata = new AWS.IotData({endpoint: ioT_EndPoint});
    var params = {
        payload: JSON.stringify(shadowMessage), /* required */
        thingName: deviceName /* required */
    };
    iotdata.updateThingShadow(params, function(err, data) {
        if (err)
            console.log(err, err.stack); // an error occurred
        else
            console.log(data);
        // reset the shadow
        shadowMessage.state.desired = {};
    });
}

2. Update the value of ioT_EndPoint from AWS IoT Core > Settings > Custom Endpoint.
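
If you would rather look up the endpoint programmatically than copy it from the console, here is a small sketch using boto3 (assuming your credentials and region are already configured):

import boto3

# Returns the account/region-specific data endpoint to paste into ioT_EndPoint
iot = boto3.client("iot")
endpoint = iot.describe_endpoint(endpointType="iot:Data-ATS")["endpointAddress"]
print(endpoint)  # e.g. xxxxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com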

3. Add TrunkCommandIntentHandler to the request handlers:

exports.handler = Alexa.SkillBuilders.custom()
    .addRequestHandlers(
        LaunchRequestHandler,
        WindowCommandIntentHandler,
        DoorCommandIntentHandler,
        TrunkCommandIntentHandler,

4. Deploy the Alexa skill:

$ cd ~/environment/connected-vehicle-lab/vehicle-command
$ ask deploy 

Handle Command at Vehicle TCU and App

For more detail on this section, refer to part 1 of this blog: Field Notes: Implementing a Digital Shadow of a Connected Vehicle with AWS IoT.

At the vehicle TCU – tcuShadowRead.py has a trunk_handle() function to receive a message from the device shadow:

def trunk_handle(status):
  if status is not None:
    shadowClient.reportedShadowMessage['state']['reported']['trunk'] = status
    print ('Perform action on trunk status change : ' + str(status))

At the web app – demo-car/js/websocket.js has a handleTrunkCommand function that receives a callback message as soon as any update happens on the device shadow:

//this function will be called by onMessageArrive
function handleTrunkCommand(trunkStatus) {
    obj = document.getElementsByClassName("action trunk")[0];
    obj.checked = trunkStatus == "open" ? true : false;
    console.log(obj.getAttribute("data-text") + " : " + obj.checked);
}

demo-car/js/demo-car.js has a handleTrunkCommand function to handle UI input and invoke the AWS IoT Core device gateway API to update the desired state:

//this function will be called when user will click on trunk checkbox
    handleTrunkCommand: function(obj) {
        obj.checked ? demoCar.shadowMessage.state.desired.trunk = "open" : demoCar.shadowMessage.state.desired.trunk = "close";
        console.log(obj.getAttribute("data-text") + " : " + demoCar.shadowMessage.state.desired.trunk);
        demoCar.accessIoTDevice();
    },

Use Alexa skill to invoke a command

Let’s test our command ‘Alexa, open my trunk’. We can use the command line and execute:

$ ask dialog --locale "en-GB"

Using the Alexa GUI provides an interesting visualization, as shown in the following screenshot.

  1. Open the Alexa GUI, select the ‘vehicle command’ skill, and select the Test tab. Allow “developer.amazon.com” to use your microphone when prompted.
  2. Open the demo.html web app side by side with the Alexa GUI to check that the actual operation happens at the vehicle TCU and that the status is synchronized with the virtual car model.
  3. Now test the Alexa skill. You can use an audio command as well. You can say or write ‘ask genie’.

[Screenshot: Alexa developer console]

Clean Up

What a fun exploration this has been! Now clean up the AWS resources created for this and the previous post to avoid incurring any future AWS service costs. Resources created by the CDK can be deleted by deleting the stack on the CloudFormation console. Resources created manually need to be deleted individually.

Conclusion

In this blog post, I showed how you can enable voice commands for a connected vehicle and enhance the in-vehicle user experience. Similarly, you can extend this solution to use cases like ‘Alexa, open my garage’. The AWS IoT Core Device Shadow API does all the heavy lifting in this case: any update to the device shadow allows both the device and the user application to act. The Alexa skill acts as an interface to capture the user command and invoke the Lambda function.

Because these are all serverless services, this implementation can scale without any change to the application, and you pay only when someone invokes a command. Creating an engaging, high-quality interaction with Alexa in the vehicle is critical. You can refer to the Alexa Automotive Documentation for an Alexa Built-in automotive experience.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

 

Field Notes: Implementing a Digital Shadow of a Connected Vehicle with AWS IoT

Post Syndicated from Amit Sinha original https://aws.amazon.com/blogs/architecture/field-notes-implementing-a-digital-shadow-of-a-connected-vehicle-with-aws-iot/

Innovations in connected vehicle technology are expected to improve the quality and speed of vehicle communications and create a safer driving experience. As connected vehicles are becoming part of the mainstream, OEMs (Original Equipment Manufacturers) are broadening the capabilities of their products and dramatically improving the in-vehicle experience for customers.

An important feature in a connected vehicle is its ability to execute a remote command and synchronize the state of the vehicle between a web/mobile app in real time.

This blog demonstrates how to:

  • secure two-way communication between a device (vehicle telematics control unit) and the AWS Cloud using AWS IoT
  • execute a command at the vehicle
  • execute a remote command
  • and test with a virtual vehicle model

You can watch a quick animation of a remote command execution in the following GIF:

[Animated GIF: remote command execution on the virtual vehicle model]

Solution Overview

In a traditional connected vehicle approach, there are many processes running on multiple servers. These processes subscribe to one another, coordinate with each other, and poll for updates. This makes scalability and availability a challenge. We use AWS IoT Core and the AWS IoT Device Shadow service as the primary components for this solution.

This solution has three building blocks:

  1. a vehicle TCU (telematics control unit),
  2. the AWS Cloud (with connection via AWS IoT Core) and
  3. a virtual model (for example, a web/mobile app to send/receive commands to the TCU). These three building blocks together reflect the current state of a vehicle.

[Diagram: solution overview]

The previous diagram shows a message flowing in the following example:

  1. A user of a connected vehicle wants to open their door using a web/mobile app. The app updates the device shadow with (desired: {"door": "open"}). The app always requests the vehicle to execute the command; therefore, it always updates the device shadow with the desired state.
  2. The vehicle TCU has registered the callback function shadowRegisterDeltaCallback() and listens on the delta topics for the device shadow by subscribing to them. Whenever there is a difference between the desired and reported state, the registered callback is called and the delta payload is available in the callback. The update performed in #1 is received in the delta callback.
  3. Now, the vehicle needs to act on the desired state. In this case, it acts on the door status change. After performing the required action for the door change, the vehicle TCU updates the device shadow with the reported state (reported: {"door": "open"}).
  4. Now, the vehicle is closing the door. The vehicle always performs the action; therefore, it always updates the device shadow with the reported state (reported: {"door": "close"}).
  5. The web/mobile app is subscribed to the topic $aws/things/tcu/shadow/update/accepted. Therefore, as soon as the vehicle TCU updates the shadow, the web/mobile app receives the update and synchronizes the UI state.
  6. You can also build an Amazon Alexa skill to control your vehicle (“Alexa, raise my window”). After identifying the utterance, Alexa can invoke a Lambda function to update the device shadow and perform the requested action.

Note: For the Web/Mobile app developments for production, it is recommended to use AWS AppSync and AWS Amplify SDK for building a flexible and decoupled application from the API. Refer to this code sample for more detail.

Implementation

First, you need to set up the code. Refer to the directions in this code sample.

Create device

In AWS IoT Core, the device is named ‘tcu’ (created by the connected-vehicle-app-cdk-stack). Create a new certificate (download the files) and attach the policy generated by the CDK.

[Screenshot: creating a certificate in AWS IoT Core]

Next, deploy the certificate key and pem file on your device so it can connect with the AWS Cloud using the X.509 certificate. For more detail, refer to the directions in the code sample.

Execute Command at Vehicle

AWS IoT Device Shadow is an important feature of AWS IoT Core for remote command execution because it allows you to decouple the vehicle from the app which controls and commands it. A device’s shadow is a JSON document that is used to store and retrieve current state information for a device. We primarily use the state.desired and state.reported properties of a device’s shadow document.
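
To make the structure concrete, here is an illustrative shadow document expressed as a Python dict; the attribute name "door" is an example, not a fixed schema:

# Illustrative device shadow document
shadow_document = {
    "state": {
        "desired": {"door": "open"},    # what the app asks the vehicle to do
        "reported": {"door": "close"},  # what the vehicle last reported
    }
    # AWS IoT also computes the delta between desired and reported,
    # which is what the TCU receives in its delta callback.
}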

The device shadow (Device SDK and APIs) enables applications to interact with devices even when they are offline and allows:

  • Cloud representation of device state
  • Query last known state for offline devices
  • Real-time state changes
  • Track last known device state
  • Control devices via change of state
  • Automatic synchronization once devices connect to the cloud
  • APIs for applications to discover and interact with devices

The rich features of a device shadow allow the app to interact with the vehicle TCU even when there is no connectivity. Once connectivity is established, the device gateway pushes the changes to the device and vice versa.

We need to deploy a program (tcuShadowWrite.py) on the vehicle TCU device to update the device shadow and send the update to the AWS Cloud. This program is available in this code repository.

Let’s assume that after reaching their home, the vehicle’s user closes the door, switches off the headlights, and rolls up the windows. The same state of the vehicle should be reflected on their web/mobile app in real time. The vehicle TCU has to update the “reported” state in the device shadow JSON document.

[Screenshot: reported-state shadow message]

The AWSIoTMQTTShadowClient library has a method called shadowUpdate that needs to be called from the vehicle TCU to update the device shadow. Essentially, it publishes the shadow reported state on the topic $aws/things/<thingName>/shadow/update.
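
A minimal sketch of this pattern follows (not the actual tcuShadowWrite.py, which is in the code repository); the endpoint, certificate paths, and attribute names are placeholders:

import json
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTShadowClient

shadow_client = AWSIoTMQTTShadowClient("tcu-writer")
shadow_client.configureEndpoint("<your-iot-endpoint>", 8883)
shadow_client.configureCredentials("root-CA.crt", "tcu.private.key", "tcu.cert.pem")
shadow_client.connect()

# Persistent shadow handler for the 'tcu' thing
handler = shadow_client.createShadowHandlerWithName("tcu", True)

reported = {"state": {"reported": {"door": "close", "light": "off", "window": "up"}}}

def on_update(payload, response_status, token):
    # response_status is 'accepted', 'rejected', or 'timeout'
    print("shadowUpdate:", response_status)

# Publishes the reported state on $aws/things/tcu/shadow/update
handler.shadowUpdate(json.dumps(reported), on_update, 5)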

If you run the tcuShadowWrite.py script, you should see output like that shown in the following image.

[Screenshot: output of the tcuShadowWrite.py script]

  • Open the AWS IoT Core console.
  • Select Manage -> Things -> tcu, and then choose Shadow. You should see the shadow message sent from the device, as shown in the following image.

[Screenshot: shadow document in the AWS IoT Core console]

Execute Remote Command

We need to deploy a program (tcuShadowRead.py) on the vehicle TCU to receive updates from the AWS Cloud. It is available in this code sample.

Let’s assume the vehicle owner uses the mobile app to open the door, switch on the headlights, and roll down the windows. The vehicle TCU should receive this command and instruct the Electronic Control Unit (ECU) to execute it. The web/mobile app will update the “desired” state in the device shadow JSON document.

[Screenshot: desired-state shadow message]

In tcuShadowRead.py, AWSIoTMQTTShadowClient has a method shadowRegisterDeltaCallback, which listens on the delta topics for this device shadow by subscribing to them. Whenever there is a difference between the desired and reported state, the registered callback is called and the delta payload is available in the callback.
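
A minimal sketch of this pattern (not the actual tcuShadowRead.py); it reuses the shadow handler from the previous sketch, and door_handle is a stand-in for the function that would instruct the ECU:

import json

def door_handle(status):
    # Stand-in for the real handler that instructs the ECU
    print("Perform action on door status change:", status)

def shadow_delta_callback(payload, response_status, token):
    # The delta payload contains only the attributes where desired differs from reported
    delta = json.loads(payload).get("state", {})
    if "door" in delta:
        door_handle(delta["door"])
        # tcuShadowRead.py would then report the new state back via shadowUpdate

# Subscribes to $aws/things/tcu/shadow/update/delta
handler.shadowRegisterDeltaCallback(shadow_delta_callback)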

[Screenshot: delta callback registration in tcuShadowRead.py]

The callback function has code to handle the state change request. In an actual implementation, a function like door_handle() would call the ECU to execute the door open command.

[Screenshot: callback handling the door open command]

If you make changes to the device shadow in AWS IoT for the tcu device, you should see the output shown in the following image.

[Screenshot: tcuShadowRead.py output after a device shadow change]

Test with a Virtual Vehicle Model

To help you test this solution, you can deploy the virtual vehicle model shown in the following image. Detailed steps for the deployment of the virtual vehicle are available in this code sample.

[Screenshot: virtual vehicle model web app]

Any changes in the model state should be reflected on the virtual demo vehicle and vice versa.

Here, we use the open-source Paho MQTT library. Developers can use it to write JavaScript applications that access AWS IoT using MQTT or MQTT over the WebSocket protocol without using the AWS IoT SDK. This implementation is made simpler by following the AWS IoT Device SDK for JavaScript v2 readme.

Review the JavaScript file named webSocketApp.js:

[Code excerpt: webSocketApp.js]

  • The onMessageArrived() function is invoked whenever the device changes the shadow state.
  • The handle<object>Command functions (such as handleDoorCommand) are called with the current state whenever the device reports a status change.

We have another JavaScript file, demo-car.js, in the demo-car folder. It includes the functions that our simulated vehicle uses to change the device shadow.

Let’s review the following code:

[Code excerpt: demo-car.js]

  • We have three handle command functions defined (for example, handleDoorCommand) to take the user’s input and access AWS IoT Core services.
  • connectDevice is the function that invokes updateThingShadow to send the desired state.
  • accessIoTDevice uses Amazon Cognito identity to obtain authenticated identities and access AWS IoT Core securely without exposing the access key or secret key.

Now, keep demo.html side by side with your code and run the tcuShadowRead.py script. Any change made at the virtual model is reflected in the command output. Similarly, any change made by tcuShadowWrite.py is reflected as a state update on the virtual model.

Conclusion

In this blog, we showed how to implement a digital shadow of a connected vehicle using AWS IoT. This solution removes the complexity of running multiple processes in parallel and ensures a successful outcome. AWS IoT Core enables scalable, secure, low-latency, low-overhead, bi-directional communication between connected devices, the AWS Cloud, and customer-facing applications, and it can tolerate and recover from slow or brittle connections.

The Device Shadow in AWS IoT Core enables the AWS Cloud and applications to easily and accurately receive data from connected vehicles and send commands to the vehicles. The Device Shadow’s uniform and always-available interface simplifies the implementation of time-sensitive use cases. These include remote command execution and two-way state synchronization between a device and app, where the cloud acts as a broker. This solution enables you to shift operational responsibilities of a connected vehicle infrastructure to the AWS Cloud while paying only for what you use, with no minimum fees or mandatory service usage.

For more information about how AWS can help you build connected vehicle solutions, refer to the AWS Connected Vehicle solution page.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

Hacking a Coffee Maker

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/hacking-a-coffee-maker.html

As expected, IoT devices are filled with vulnerabilities:

As a thought experiment, Martin Hron, a researcher at security company Avast, reverse engineered one of the older coffee makers to see what kinds of hacks he could do with it. After just a week of effort, the unqualified answer was: quite a lot. Specifically, he could trigger the coffee maker to turn on the burner, dispense water, spin the bean grinder, and display a ransom message, all while beeping repeatedly. Oh, and by the way, the only way to stop the chaos was to unplug the power cord.

[…]

In any event, Hron said the ransom attack is just the beginning of what an attacker could do. With more work, he believes, an attacker could program a coffee maker — ­and possibly other appliances made by Smarter — ­to attack the router, computers, or other devices connected to the same network. And the attacker could probably do it with no overt sign anything was amiss.

AWS Architecture Monthly Magazine: Agriculture

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/aws-architecture-monthly-magazine-agriculture/

[Image: Architecture Monthly Magazine cover - Agriculture]

In this month’s issue of AWS Architecture Monthly, Worldwide Tech Lead for Agriculture, Karen Hildebrand (who’s also a fourth-generation farmer) refers to agriculture as “the connective tissue our world needs to survive.” As our expert for August’s Agriculture issue, she also talks about what role cloud will play in future development efforts in this industry and why developing personal connections with our AWS agriculture customers is one of the most important aspects of our jobs.

You’ll also buzz through the world of high tech beehives, milk the information about data analytics-savvy cows, and see what the reference architecture of a Smart Farm looks like.

In August’s Agriculture issue

  • Ask an Expert: Karen Hildebrand, AWS WW Agriculture Tech Leader
  • Customer Success Story: Tine & Crayon: Revolutionizing the Norwegian Dairy Industry Using Machine Learning on AWS
  • Blog Post: Beewise Combines IoT and AI to Offer an Automated Beehive
  • Reference Architecture: Smart Farm: Enabling Sensor, Computer Vision, and Edge Inference in Agriculture
  • Customer Success Story: Farmobile: Empowering the Agriculture Industry Through Data
  • Blog Post: The Cow Collar Wearable: How Halter benefits from FreeRTOS
  • Related Videos: DuPont, mPrest & Netafirm, and Veolia

Survey opportunity

This month, we’re also asking you to take a 10-question survey about your experiences with this magazine. The survey is hosted by an external company (Qualtrics), so the below survey button doesn’t lead to our website. Please note that AWS will own the data gathered from this survey, and we will not share the results we collect with survey respondents. Your responses to this survey will be subject to Amazon’s Privacy Notice. Please take a few moments to give us your opinions.

How to access the magazine

We hope you’re enjoying Architecture Monthly, and we’d like to hear from you. Leave us a star rating and a comment on the Amazon Kindle Newsstand page, or contact us anytime at [email protected].

Democratizing LoRaWAN and IoT with The Things Network

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/democratizing-lorawan-iot-with-the-things-network/

With the Internet of Things (IoT), what happens to your thing when there’s no internet? Johan Stokking, co-founder of The Things Network and The Things Industries, along with Matt Yanchyshyn from AWS, digs into this.

About The Things Network and The Things Industries

The Things Network is a community project building a global IoT data network in more than 84 countries. Its devices connect to community-maintained gateways, which can communicate over very long distances and last on a single alkaline battery for up to 5-10 years, thanks to the LoRaWAN protocol. The Things Network’s commercial wing, The Things Industries, built a platform using AWS IoT that allows device data to be collected and processed in the cloud using multiple AWS services.

In this special long-format episode of This Is My Architecture, learn how The Things Network and The Things Industries are helping both hobbyists and businesses connect low-power, long-range devices to the cloud. You’ll learn about:

  • The Things Industries’ architecture
  • How Netcetera is leveraging The Things Network for air quality monitoring in Skopje, North Macedonia
  • How Decentlab builds high-quality, long-lasting LoRaWAN devices that work with The Things Network to track environmental conditions

You’ll also get a feel for the community at a local meetup.

Check out more This Is My Architecture video series.