Tag Archives: Alexa

AWS Week in Review – August 8, 2022

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/aws-week-in-review-august-8-2022/

As an ex-.NET developer, and now Developer Advocate for .NET at AWS, I’m excited to bring you this week’s Week in Review post, for reasons that will quickly become apparent! There are several updates, customer stories, and events I want to bring to your attention, so let’s dive straight in!

Last Week’s launches
.NET developers, here are two new updates to be aware of—and be sure to check out the events section below for another big announcement:

Tiered pricing for AWS Lambda will interest customers running large workloads on Lambda. The tiers, based on compute duration (measured in GB-seconds), help you save on monthly costs—automatically. Find out more about the new tiers, and see some worked examples showing just how they can help reduce costs, in this AWS Compute Blog post by Heeki Park, a Principal Solutions Architect for Serverless.

Amazon Relational Database Service (RDS) released updates for several popular database engines:

  • RDS for Oracle now supports the April 2022 patch.
  • RDS for PostgreSQL now supports new minor versions. Besides the version upgrades, there are also updates for the PostgreSQL extensions pglogical, pg_hint_plan, and hll.
  • RDS for MySQL can now enforce SSL/TLS for client connections to your databases to help enhance transport layer security. You can enforce SSL/TLS by enabling the require_secure_transport parameter (disabled by default) via the Amazon RDS Management console, the AWS Command Line Interface (AWS CLI), AWS Tools for PowerShell, or the API; see the sketch just after this list for a programmatic example. When you enable this parameter, clients will only be able to connect if an encrypted connection can be established.
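If you'd rather flip that switch programmatically, here's a minimal sketch using the AWS SDK for JavaScript v3. It assumes a custom DB parameter group (the name my-mysql-parameters is a placeholder) is already attached to your RDS for MySQL instance, since default parameter groups can't be modified.

import { RDSClient, ModifyDBParameterGroupCommand } from "@aws-sdk/client-rds";

const client = new RDSClient({ region: "us-east-1" });

async function enforceTls(): Promise<void> {
  await client.send(
    new ModifyDBParameterGroupCommand({
      DBParameterGroupName: "my-mysql-parameters", // placeholder: your own custom parameter group
      Parameters: [
        {
          ParameterName: "require_secure_transport",
          ParameterValue: "1", // "0" (default) allows unencrypted connections
          ApplyMethod: "immediate", // dynamic parameter, no reboot required
        },
      ],
    })
  );
}

enforceTls().catch(console.error);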

Amazon Elastic Compute Cloud (Amazon EC2) expanded availability of the latest generation storage-optimized Is4gen and Im4gn instances to the Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), and Europe (London) Regions. Built on the AWS Nitro System and powered by AWS Graviton2 processors, these instance types feature up to 30 TB of storage using the new custom-designed AWS Nitro System SSDs. They’re ideal for maximizing the storage performance of I/O intensive workloads that continuously read and write from the SSDs in a sustained manner, for example SQL/NoSQL databases, search engines, distributed file systems, and data analytics.

Lastly, there’s a new URL from AWS Support API to use when you need to access the AWS Support Center console. I recommend bookmarking the new URL, https://support.console.aws.amazon.com/, which the team built using the latest architectural standards for high availability and Region redundancy to ensure you’re always able to contact AWS Support via the console.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Here are some other news items and customer stories that you may find interesting:

AWS Open Source News and Updates – Catch up on all the latest open-source projects, tools, and demos from the AWS community in installment #123 of the weekly open source newsletter.

In one recent AWS on Air livestream segment from AWS re:MARS, discussing the increasing scale of machine learning (ML) models, our guests mentioned billion-parameter ML models which quite intrigued me. As an ex-developer, my mental model of parameters is a handful of values, if that, supplied to methods or functions—not billions. Of course, I’ve since learned they’re not the same thing! As I continue my own ML learning journey I was particularly interested in reading this Amazon Science blog on 20B-parameter Alexa Teacher Models (AlexaTM). These large-scale multilingual language models can learn new concepts and transfer knowledge from one language or task to another with minimal human input, given only a few examples of a task in a new language.

When developing games intended to run fully in the cloud, what benefits might there be in going fully cloud-native and moving the entire process into the cloud? Find out in this customer story from Return Entertainment, who did just that to build a cloud-native gaming infrastructure in a few months, reducing time and cost with AWS services.

Upcoming events
Check your calendar and sign up for these online and in-person AWS events:

AWS Storage Day: On August 10, tune in to this virtual event on twitch.tv/aws, 9:00 AM–4:30 PM PT, where we’ll be diving into building data resiliency into your organization, and how to put data to work to gain insights and realize its potential, while also optimizing your storage costs. Register for the event here.

AWS Global Summits: These free events bring the cloud computing community together to connect, collaborate, and learn about AWS. Registration is open for the following AWS Summits in August:

AWS .NET Enterprise Developer Days 2022 – North America: Registration for this free, 2-day, in-person event and follow-up 2-day virtual event opened this past week. The in-person event runs September 7–8, at the Palmer Events Center in Austin, Texas. The virtual event runs September 13–14. AWS .NET Enterprise Developer Days (.NET EDD) runs as a mini-conference within the DeveloperWeek Cloud conference (also in-person and virtual). Anyone registering for .NET EDD is eligible for a free pass to DeveloperWeek Cloud, and vice versa! I’m super excited to be helping organize this third .NET event from AWS, our first that has an in-person version. If you’re a .NET developer working with AWS, I encourage you to check it out!

That’s all for this week. Be sure to check back next Monday for another Week in Review roundup!

— Steve
This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Deploying Alexa Skills with the AWS CDK

Post Syndicated from Jeff Gardner original https://aws.amazon.com/blogs/devops/deploying-alexa-skills-with-aws-cdk/

So you’re expanding your reach by leveraging voice interfaces for your applications through the Alexa ecosystem. You’ve experimented with a new Alexa Skill via the Alexa Developer Console, and now you’re ready to productionalize it for your customers. How exciting!

You are also a proponent of Infrastructure as Code (IaC). You appreciate the speed, consistency, and change management capabilities enabled by IaC. Perhaps you have other applications that you provision and maintain via DevOps practices, and you want to deploy and maintain your Alexa Skill in the same way. Great idea!

That’s where AWS CloudFormation and the AWS Cloud Development Kit (AWS CDK) come in. AWS CloudFormation lets you treat infrastructure as code, so that you can easily model a collection of related AWS and third-party resources, provision them quickly and consistently, and manage them throughout their lifecycles. The AWS CDK is an open-source software development framework for modeling and provisioning your cloud application resources via familiar programming languages, like TypeScript, Python, Java, and .NET. AWS CDK utilizes AWS CloudFormation in the background in order to provision resources in a safe and repeatable manner.

In this post, we show you how to achieve Infrastructure as Code for your Alexa Skills by leveraging powerful AWS CDK features.

Concepts

Alexa Skills Kit (ASK)

In addition to the Alexa Developer Console, skill developers can utilize the Alexa Skills Kit (ASK) to build interactive voice interfaces for Alexa. ASK provides a suite of self-service APIs and tools for building and interacting with Alexa Skills, including the ASK CLI, the Skill Management API (SMAPI), and SDKs for Node.js, Java, and Python. These tools provide a programmatic interface for your Alexa Skills in order to update them with code rather than through a user interface.

AWS CloudFormation

AWS CloudFormation lets you create templates written in either YAML or JSON format to model your infrastructure in code form. CloudFormation templates are declarative and idempotent, allowing you to check them into a versioned code repository, deploy them automatically, and track changes over time.

The ASK CloudFormation resource allows you to incorporate Alexa Skills in your CloudFormation templates alongside your other infrastructure. However, this has limitations that we’ll discuss in further detail in the Problem section below.

AWS Cloud Development Kit (AWS CDK)

Think of the AWS CDK as a developer-centric toolkit that leverages the power of modern programming languages to define your AWS infrastructure as code. When AWS CDK applications are run, they compile down to fully formed CloudFormation JSON/YAML templates that are then submitted to the CloudFormation service for provisioning. Because the AWS CDK leverages CloudFormation, you still enjoy every benefit provided by CloudFormation, such as safe deployment, automatic rollback, and drift detection. AWS CDK currently supports TypeScript, JavaScript, Python, Java, C#, and Go (currently in Developer Preview).

Perhaps the most compelling part of AWS CDK is the concept of constructs—the basic building blocks of AWS CDK apps. The three levels of constructs reflect the level of abstraction from CloudFormation. A construct can represent a single resource, like an AWS Lambda Function, or it can represent a higher-level component consisting of multiple AWS resources.

The three different levels of constructs begin with low-level constructs, called L1 (short for “level 1”) or Cfn (short for CloudFormation) resources. These constructs directly represent all of the resources available in AWS CloudFormation. The next level of constructs, called L2, also represents AWS resources, but with a higher-level, intent-based API. L2 constructs provide not only similar functionality, but also the defaults, boilerplate, and glue logic you’d be writing yourself with a CFN Resource construct. Finally, the AWS Construct Library includes even higher-level constructs, called L3 constructs, or patterns. These are designed to help you complete common tasks in AWS, often involving multiple resource types. Learn more about constructs in the AWS CDK developer guide.
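To make the difference between levels concrete, here's a small illustrative sketch (using CDK v1-style imports to match the stack code later in this post) that defines the same S3 bucket twice, once with the L1 CfnBucket construct and once with the L2 Bucket construct; the stack and construct IDs are just examples, not part of this post's solution.

import * as cdk from '@aws-cdk/core';
import * as s3 from '@aws-cdk/aws-s3';

export class ConstructLevelsStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // L1 (Cfn) construct: a 1:1 mapping to the AWS::S3::Bucket CloudFormation resource.
    // Every property uses the raw CloudFormation shape.
    new s3.CfnBucket(this, 'RawBucket', {
      versioningConfiguration: { status: 'Enabled' },
    });

    // L2 construct: an intent-based API with sensible defaults and helper methods.
    new s3.Bucket(this, 'FriendlyBucket', {
      versioned: true,
      encryption: s3.BucketEncryption.S3_MANAGED,
    });
  }
}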

One L2 construct example is the Custom Resources module. This lets you execute custom logic via a Lambda Function as part of your deployment in order to cover scenarios that the AWS CDK doesn’t support yet. While the Custom Resources module leverages CloudFormation’s native Custom Resource functionality, it also greatly reduces the boilerplate code in your CDK project and simplifies the necessary code in the Lambda Function. The open-source construct library referenced in the Solution section of this post utilizes Custom Resources to avoid some limitations of what CloudFormation and CDK natively support for Alexa Skills.
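As a rough illustration of the module (not code from this post's solution), the sketch below uses the AwsCustomResource construct to run a single SSM PutParameter SDK call from a framework-managed Lambda Function during deployment; the parameter name is made up for the example.

import * as cdk from '@aws-cdk/core';
import * as cr from '@aws-cdk/custom-resources';

export class CustomResourceExampleStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Runs an SSM PutParameter call from a framework-managed Lambda Function
    // whenever the stack is created or updated.
    new cr.AwsCustomResource(this, 'DeploymentMarker', {
      onUpdate: {
        service: 'SSM',
        action: 'putParameter',
        parameters: {
          Name: '/example/last-deployed-at', // illustrative parameter name
          Type: 'String',
          Value: new Date().toISOString(),
          Overwrite: true,
        },
        physicalResourceId: cr.PhysicalResourceId.of('DeploymentMarker'),
      },
      policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
        resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
      }),
    });
  }
}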

Problem

The primary issue with utilizing the Alexa::ASK::Skill CloudFormation resource, and its corresponding CDK CfnSkill construct, arises when you define the Skill’s backend Lambda Function in the same CloudFormation template or CDK project. When the Skill’s endpoint is set to a Lambda Function, the ASK service validates that the Skill has the appropriate permissions to invoke that Lambda Function. The best practice is to enable Skill ID verification in your Lambda Function. This effectively restricts the Lambda Function to be invokable only by the configured Skill ID. The problem is that in order to configure Skill ID verification, the Lambda Permission must reference the Skill ID, so it cannot be added to the Lambda Function until the Alexa Skill has been created. If we try creating the Alexa Skill without the Lambda Permission in place, insufficient permissions will cause the validation to fail. The endpoint validation causes a circular dependency preventing us from defining our desired end state with just the native CloudFormation resource.
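The sketch below, which is illustrative rather than taken from the repository, shows the two resources that end up pointing at each other when you try to express this with the L1 constructs alone; the credential values and bucket names are placeholders.

import * as cdk from '@aws-cdk/core';
import * as alexaAsk from '@aws-cdk/alexa-ask';
import * as lambda from '@aws-cdk/aws-lambda';

export class CircularDependencyStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const skillBackend = new lambda.Function(this, 'SkillBackend', {
      runtime: lambda.Runtime.PYTHON_3_8,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('src/lambda/skill-backend'),
    });

    const skill = new alexaAsk.CfnSkill(this, 'Skill', {
      vendorId: 'YOUR_VENDOR_ID', // placeholder
      authenticationConfiguration: {
        clientId: 'YOUR_LWA_CLIENT_ID',         // placeholder
        clientSecret: 'YOUR_LWA_CLIENT_SECRET', // placeholder
        refreshToken: 'YOUR_LWA_REFRESH_TOKEN', // placeholder
      },
      skillPackage: {
        s3Bucket: 'your-skill-package-bucket', // placeholder
        s3Key: 'skill-package.zip',
        overrides: {
          // The Skill's endpoint references the Lambda Function ARN, and the ASK
          // service validates the invoke permission while creating the Skill...
          manifest: { apis: { custom: { endpoint: { uri: skillBackend.functionArn } } } },
        },
      },
    });

    // ...but the Skill-ID-scoped permission can only reference the Skill ID after
    // the Skill exists, so the two resources depend on each other.
    new lambda.CfnPermission(this, 'AlexaPermission', {
      action: 'lambda:InvokeFunction',
      functionName: skillBackend.functionArn,
      principal: 'alexa-appkit.amazon.com',
      eventSourceToken: skill.ref, // resolves to the Skill ID
    });
  }
}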

Unfortunately, the AWS CDK also does not yet support any L2 constructs for Alexa skills. While the ASK Skill Management API is another option, managing imperative API calls within a CI/CD pipeline would not be ideal.

Solution

Overview

AWS CDK is extensible in that if there isn’t a native construct that does what you want, you can simply create your own! You can also publish your custom constructs publicly or privately for others to leverage via package registries like npm, PyPI, NuGet, Maven, etc.

We could write our own code to solve the problem, but luckily this use case allows us to leverage an open-source construct library that addresses our needs. This library is currently available for TypeScript (npm) and Python (PyPI).

The complete solution can be found in this GitHub repository. The code is in TypeScript, but you can easily port it to another language if necessary. See the AWS CDK Developer Guide for more guidance on translating between languages.

Prerequisites

You will need the following in order to build and deploy the solution presented below. Please be mindful of any prerequisites for these tools.

  • Alexa Developer Account
  • AWS Account
  • Docker
    • Used by CDK for bundling assets locally during synthesis and deployment.
    • See Docker website for installation instructions based on your operating system.
  • AWS CLI
    • Used by CDK to deploy resources to your AWS account.
    • See AWS CLI user guide for installation instructions based on your operating system.
  • Node.js
    • The CDK Toolset and backend runs on Node.js regardless of the project language. See the detailed requirements in the AWS CDK Getting Started Guide.
    • See the Node.js website to download the specific installer for your operating system.

Clone Code Repository and Install Dependencies

The code for the solution in this post is located in this repository on GitHub. First, clone this repository and install its local dependencies by executing the following commands in your local Terminal:

# clone repository
git clone https://github.com/aws-samples/aws-devops-blog-alexa-cdk-walkthrough
# navigate to project directory
cd aws-devops-blog-alexa-cdk-walkthrough
# install dependencies
npm install

Note that CLI commands in the sections below (ask, cdk) use npx. This executes the command from local project binaries if they exist, or, if not, it installs the binaries required to run the command. In our case, the local binaries are installed as part of the npm install command above. Therefore, npx will utilize the local version of the binaries even if you already have those tools installed globally. We use this method to simplify setup and alleviate any issues arising from version discrepancies.

Get Alexa Developer Credentials

To create and manage Alexa Skills via CDK, we will need to provide Alexa Developer account credentials, which are separate from our AWS credentials. The following values must be supplied in order to authenticate:

  • Vendor ID: Represents the Alexa Developer account.
  • Client ID: Represents the developer, tool, or organization requiring permission to perform a list of operations on the skill. In this case, our AWS CDK project.
  • Client Secret: The secret value associated with the Client ID.
  • Refresh Token: A token for reauthentication. The ASK service uses access tokens for authentication that expire one hour after creation. Refresh tokens do not expire and can retrieve a new access token when needed.

Follow the steps below to retrieve each of these values.

Get Alexa Developer Vendor ID

Easily retrieve your Alexa Developer Vendor ID from the Alexa Developer Console.

  1. Navigate to the Alexa Developer console and login with your Amazon account.
  2. After logging in, on the main screen click on the “Settings” tab.

Screenshot of Alexa Developer console showing location of Settings tab

  3. Your Vendor ID is listed in the “My IDs” section. Note this value.

Screenshot of Alexa Developer console showing location of Vendor ID

Create Login with Amazon (LWA) Security Profile

The Skill Management API utilizes Login with Amazon (LWA) for authentication, so first we must create a security profile for LWA under the same Amazon account that we will use to create the Alexa Skill.

  1. Navigate to the LWA console and login with your Amazon account.
  2. Click the “Create a New Security Profile” button.

Screenshot of Login with Amazon console showing location of Create a New Security Profile button

  3. Fill out the form with a Name, Description, and Consent Privacy Notice URL, and then click “Save”.

Screenshot of Login with Amazon console showing Create a New Security Profile form

  4. The new Security Profile should now be listed. Hover over the gear icon, located to the right of the new profile name, and click “Web Settings”.

Screenshot of Login with Amazon console showing location of Web Settings link

  5. Click the “Edit” button and add the following under “Allowed Return URLs”:
    • http://127.0.0.1:9090/cb
    • https://s3.amazonaws.com/ask-cli/response_parser.html
  6. Click the “Save” button to save your changes.
  7. Click the “Show Secret” button to reveal your Client Secret. Note your Client ID and Client Secret.

Screenshot of Login with Amazon console showing location of Client ID and Client Secret values

Get Refresh Token from ASK CLI

Your Client ID and Client Secret let you generate a refresh token for authenticating with the ASK service.

  1. Navigate to your local Terminal and enter the following command, replacing <your Client ID> and <your Client Secret> with your Client ID and Client Secret, respectively:
# ensure you are in the root directory of the repository
npx ask util generate-lwa-tokens --client-id "<your Client ID>" --client-confirmation "<your Client Secret>" --scopes "alexa::ask:skills:readwrite alexa::ask:models:readwrite"
  2. A browser window should open with a login screen. Supply credentials for the same Amazon account with which you created the LWA Security Profile previously.
  3. Click the “Allow” button to grant the refresh token appropriate access to your Amazon Developer account.
  4. Return to your Terminal. The credentials, including your new refresh token, should be printed. Note the value in the refresh_token field.

NOTE: If your Terminal shows an error like CliFileNotFoundError: File ~/.ask/cli_config not exists., you need to first initialize the ASK CLI with the command npx ask configure. This command will open a browser with a login screen, and you should enter the credentials for the Amazon account with which you created the LWA Security Profile previously. After signing in, return to your Terminal and enter n to decline linking your AWS account. After completing this process, try the generate-lwa-tokens command above again.

NOTE: If your Terminal shows an error like CliError: invalid_client, make sure that you have included the quotation marks (") around the --client-id and --client-confirmation arguments.

Add Alexa Developer Credentials to AWS SSM Parameter Store / AWS Secrets Manager

Our AWS CDK project requires access to the Alexa Developer credentials we just generated (Client ID, Client Secret, Refresh Token) in order to create and manage our Skill. To avoid hard-coding these values into our code, we can store the values in AWS Systems Manager (SSM) Parameter Store and AWS Secrets Manager, and then retrieve them programmatically when deploying our CDK project. In our case, we are using SSM Parameter Store to store the non-sensitive values in plaintext, and Secrets Manager to store the secret values in encrypted form.

The repository contains a shell script at scripts/upload-credentials.sh that can create the appropriate parameters and secrets via AWS CLI. You’ll just need to supply the credential values from the previous steps. Alternatively, instructions for creating parameters and secrets via the AWS Console or AWS CLI can each be found in the AWS Systems Manager User Guide and AWS Secrets Manager User Guide.

You will need the following resources created in your AWS account before proceeding:

Name                                        Service               Type
/alexa-cdk-blog/alexa-developer-vendor-id   SSM Parameter Store   String
/alexa-cdk-blog/lwa-client-id               SSM Parameter Store   String
/alexa-cdk-blog/lwa-client-secret           Secrets Manager       Plaintext / secret-string
/alexa-cdk-blog/lwa-refresh-token           Secrets Manager       Plaintext / secret-string
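If you prefer not to run the shell script, the following sketch creates the same four resources with the AWS SDK for JavaScript v3; the credential values are placeholders that you must replace with the values gathered above.

import { SSMClient, PutParameterCommand } from "@aws-sdk/client-ssm";
import { SecretsManagerClient, CreateSecretCommand } from "@aws-sdk/client-secrets-manager";

const ssm = new SSMClient({});
const secrets = new SecretsManagerClient({});

async function uploadCredentials(): Promise<void> {
  // Non-sensitive values go to SSM Parameter Store as plaintext String parameters.
  await ssm.send(new PutParameterCommand({
    Name: "/alexa-cdk-blog/alexa-developer-vendor-id",
    Type: "String",
    Value: "YOUR_VENDOR_ID", // placeholder
  }));
  await ssm.send(new PutParameterCommand({
    Name: "/alexa-cdk-blog/lwa-client-id",
    Type: "String",
    Value: "YOUR_LWA_CLIENT_ID", // placeholder
  }));

  // Sensitive values go to Secrets Manager as encrypted secret strings.
  await secrets.send(new CreateSecretCommand({
    Name: "/alexa-cdk-blog/lwa-client-secret",
    SecretString: "YOUR_LWA_CLIENT_SECRET", // placeholder
  }));
  await secrets.send(new CreateSecretCommand({
    Name: "/alexa-cdk-blog/lwa-refresh-token",
    SecretString: "YOUR_LWA_REFRESH_TOKEN", // placeholder
  }));
}

uploadCredentials().catch(console.error);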

Code Walkthrough

Skill Package

When you programmatically create an Alexa Skill, you supply a Skill Package, which is a zip file consisting of a set of files defining your Skill. A skill package includes a manifest JSON file, and optionally a set of interaction model files, in-skill product files, and/or image assets for your skill. See the Skill Management API documentation for details regarding skill packages.

The repository contains a skill package that defines a simple Time Teller Skill at src/skill-package. If you want to use an existing Skill instead, replace the contents of src/skill-package with your skill package.

If you want to export the skill package of an existing Skill, use the ASK CLI:

  1. Navigate to the Alexa Developer console and log in with your Amazon account.
  2. Find the Skill you want to export and click the link under the name “Copy Skill ID”. Either make sure this stays on your clipboard or note the Skill ID for the next step.
  3. Navigate to your local Terminal and enter the following command, replacing <your Skill ID> with your Skill ID:
# ensure you are in the root directory of the repository
cd src
npx ask smapi export-package --stage development --skill-id <your Skill ID>

NOTE: To export the skill package for a live skill, replace --stage development with --stage live.

NOTE: The CDK code in this solution will dynamically populate the manifest.apis section in skill.json. If that section is populated in your skill package, either clear it out or know that it will be replaced when the project is deployed.

Skill Backend Lambda Function

The Lambda Function code for the Time Teller Alexa Skill’s backend also resides within the CDK project at src/lambda/skill-backend. If you want to use an existing Skill instead, replace the contents of src/lambda/skill-backend with your Lambda code. Also note the following if you want to use your own Lambda code:

  • The CDK code in the repository assumes that the Lambda Function runtime is Python. However, you can modify for another runtime if necessary by using either the aws-lambda or aws-lambda-nodejs CDK module instead of aws-lambda-python.
  • If you’re using your own Python Lambda Function code, please note the following to ensure compatibility with the Lambda Function definition in the sample CDK project. If your Lambda Function varies from what is below, then you may need to modify the CDK code. See the Python Lambda code in the repository for an example.
    • The skill-backend/ directory should contain all of the necessary resources for your Lambda Function. For Python functions, this should include at least a file named index.py that contains your Lambda entrypoint, and a requirements.txt file containing your pip dependencies.
    • For Python functions, your Lambda handler function should be called handler(). This generally looks like handler = SkillBuilder().lambda_handler() when using the Python ASK SDK.

Open-Source Alexa Skill Construct Library

As mentioned above, this solution utilizes an open-source construct library to create and manage the Alexa Skill. This construct library utilizes the L1 CfnSkill construct along with other L1 and L2 constructs to create a complete Alexa Skill with a functioning backend Lambda Function. Utilizing this construct library means that we are no longer limited by the shortcomings of only using the Alexa::ASK::Skill CloudFormation resource or L1 CfnSkill construct.

Look into the construct library code if you’re curious. There’s only one construct—Skill—and you can follow the code to see how it dodges the Lambda Permission issue.

CDK Stack

The CDK stack code is located in lib/alexa-cdk-stack.ts. Let’s dive in to understand what’s happening. We’ll look at one section at a time:

...
const PARAM_PREFIX = '/alexa-cdk-blog/'

export class AlexaCdkStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Get Alexa Developer credentials from SSM Parameter Store/Secrets Manager.
    // NOTE: Parameters and secrets must have been created in the appropriate account before running `cdk deploy` on this stack.
    //       See sample script at scripts/upload-credentials.sh for how to create appropriate resources via AWS CLI.
    const alexaVendorId = ssm.StringParameter.valueForStringParameter(this, `${PARAM_PREFIX}alexa-developer-vendor-id`);
    const lwaClientId = ssm.StringParameter.valueForStringParameter(this, `${PARAM_PREFIX}lwa-client-id`);
    const lwaClientSecret = cdk.SecretValue.secretsManager(`${PARAM_PREFIX}lwa-client-secret`);
    const lwaRefreshToken = cdk.SecretValue.secretsManager(`${PARAM_PREFIX}lwa-refresh-token`);
    ...
  }
}

First, within the stack’s constructor, after calling the constructor of the base class, we retrieve the credentials we uploaded earlier to SSM and Secrets Manager. This lets us store our account credentials in a safe place—encrypted in the case of our lwaClientSecret and lwaRefreshToken secrets—and we avoid storing sensitive data in plaintext or source control.

...
export class AlexaCdkStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    ...
    // Create the Lambda Function for the Skill Backend
    const skillBackend = new lambdaPython.PythonFunction(this, 'SkillBackend', {
      entry: 'src/lambda/skill-backend',
      timeout: cdk.Duration.seconds(7)
    });
    ...
  }
}

Next, we create the Lambda Function containing the skill’s backend logic. In this case, we are using the aws-lambda-python module. This transparently handles every aspect of the dependency installation and packaging for us. Rather than leave the default 3-second timeout, we specify a 7-second timeout to correspond with the Alexa service timeout of 8 seconds.

...

export class AlexaCdkStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    ...
    // Create the Alexa Skill
    const skill = new Skill(this, 'Skill', {
      endpointLambdaFunction: skillBackend,
      skillPackagePath: 'src/skill-package',
      alexaVendorId: alexaVendorId,
      lwaClientId: lwaClientId,
      lwaClientSecret: lwaClientSecret,
      lwaRefreshToken: lwaRefreshToken
    });
  }
}

Finally, we create our Skill! All we need to do is pass in the Lambda Function containing the Skill’s backend code, the location of the skill package, and the credentials for authenticating into our Alexa Developer account. All of the wiring for deploying the skill package and connecting the Lambda Function to the Skill is handled transparently within the construct code.

Deploy CDK project

Now that all of our code is in place, we can deploy our project and test it out!

  1. Make sure that you have bootstrapped your AWS account for CDK. If not, you can bootstrap with the following command:
# ensure you are in the root directory of the repository
npx cdk bootstrap
  2. Make sure that the Docker daemon is running locally. This is generally done by starting the Docker Desktop application.
    • You can also use the following Terminal command to determine whether the Docker daemon is running. The command will return an error if the daemon is not running.
docker ps -q
    • See more details regarding starting the Docker daemon based on your operating system via the Docker website.
  3. Synthesize your CDK project in order to confirm that your project is building properly.
# ensure you are in the root directory of the repository
npx cdk synth

NOTE: In addition to generating the CloudFormation template for this project, this command also bundles the Lambda Function code via Docker, so it may take a few minutes to complete.

  4. Deploy!
# ensure you are in the root directory of the repository
npx cdk deploy
    • Feel free to review the IAM policies that will be created, and enter y to continue when prompted.
    • If you would like to skip the security approval requirement and deploy in one step, use cdk deploy --require-approval never instead.

Check it out!

Once your project finishes deploying, take a look at your new Skill!

  1. Navigate to the Alexa Developer console and log in with your Amazon account.
  2. After logging in, on the main screen you should now see your new Skill listed. Click on the name to go to the “Build” screen.
  3. Investigate the console to confirm that your Skill was created as expected.
  4. On the left-hand navigation menu, click “Endpoint” and confirm that the ARN for your backend Lambda Function is showing in the “Default Region” field. This ARN was added dynamically by our CDK project.

Screenshot of Alexa Developer console showing location of Endpoint text box

  5. Test the Skill to confirm that it functions properly.
    1. Click on the “Test” tab and enable testing for the “Development” stage of your skill.
    2. Type your Skill’s invocation name in the Alexa Simulator in order to launch the skill and invoke a response.
      • If you deployed the sample skill package and Lambda Function, the invocation name is “time teller”. If Alexa responds with the current time in UTC, it is working properly!

Bonus Points

Now that you can deploy your Alexa Skill via the AWS CDK, can you incorporate your new project into a CI/CD pipeline for automated deployments? Extra kudos if the pipeline is defined with the CDK 🙂 Follow these links for some inspiration:

Cleanup

After you are finished, delete the resources you created to avoid incurring future charges. This can be easily done by deleting the CloudFormation stack from the CloudFormation console, or by executing the following command in your Terminal, which has the same effect:

# ensure you are in the root directory of the repository
npx cdk destroy

Conclusion

You can, and should, strive for IaC and CI/CD in every project, and the powerful AWS CDK features make that easier with a set of simple yet flexible constructs. Leverage the simplicity of declarative infrastructure definitions with convenient default configurations and helper methods via the AWS CDK. This example also reveals that if there are any gaps in the built-in functionality, you can easily fill them with a custom resource construct, or one of the thousands of open-source construct libraries shared by fellow CDK developers around the world. Happy coding!


Jeff Gardner

Jeff Gardner is a Solutions Architect with Amazon Web Services (AWS). In his role, Jeff helps enterprise customers through their cloud journey, leveraging his experience with application architecture and DevOps practices. Outside of work, Jeff enjoys watching and playing sports and chasing around his three young children.

Majority of Alexa Now Running on Faster, More Cost-Effective Amazon EC2 Inf1 Instances

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/majority-of-alexa-now-running-on-faster-more-cost-effective-amazon-ec2-inf1-instances/

Today, we are announcing that the Amazon Alexa team has migrated the vast majority of their GPU-based machine learning inference workloads to Amazon Elastic Compute Cloud (EC2) Inf1 instances, powered by AWS Inferentia. This resulted in 25% lower end-to-end latency, and 30% lower cost compared to GPU-based instances for Alexa’s text-to-speech workloads. The lower latency allows Alexa engineers to innovate with more complex algorithms and to improve the overall Alexa experience for our customers.

AWS built AWS Inferentia chips from the ground up to provide the lowest-cost machine learning (ML) inference in the cloud. They power the Inf1 instances that we launched at AWS re:Invent 2019. Inf1 instances provide up to 30% higher throughput and up to 45% lower cost per inference compared to GPU-based G4 instances, which were, before Inf1, the lowest-cost instances in the cloud for ML inference.

Alexa is Amazon’s cloud-based voice service that powers Amazon Echo devices and more than 140,000 models of smart speakers, lights, plugs, smart TVs, and cameras. Today customers have connected more than 100 million devices to Alexa. And every month, tens of millions of customers interact with Alexa to control their home devices (“Alexa, increase temperature in living room,” “Alexa, turn off bedroom”), to listen to radios and music (“Alexa, start Maxi 80 on bathroom,” “Alexa, play Van Halen from Spotify”), to be informed (“Alexa, what is the news?” “Alexa, is it going to rain today?”), or to be educated or entertained with 100,000+ Alexa Skills.

If you ask Alexa where she lives, she’ll tell you she is right here, but her head is in the cloud. Indeed, Alexa’s brain is deployed on AWS, where she benefits from the same agility, large-scale infrastructure, and global network we built for our customers.

How Alexa Works
When I’m in my living room and ask Alexa about the weather, I trigger a complex system. First, the on-device chip detects the wake word (Alexa). Once detected, the microphones record what I’m saying and stream the sound for analysis in the cloud. At a high level, there are two phases to analyze the sound of my voice. First, Alexa converts the sound to text. This is known as Automatic Speech Recognition (ASR). Once the text is known, the second phase is to understand what I mean. This is Natural Language Understanding (NLU). The output of NLU is an Intent (what does the customer want) and associated parameters. In this example (“Alexa, what’s the weather today?”), the intent might be “GetWeatherForecast” and the parameter can be my postcode, inferred from my profile.

This whole process uses Artificial Intelligence heavily to transform the sound of my voice to phonemes, phonemes to words, words to phrases, phrases to intents. Based on the NLU output, Alexa routes the intent to a service to fulfill it. The service might be internal to Alexa or external, like one of the skills activated on my Alexa account. The fulfillment service processes the intent and returns a response as a JSON document. The document contains the text of the response Alexa must say.
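As a rough illustration (these are simplified shapes, not Alexa's exact internal message formats), the NLU output and the fulfillment response for this example could look something like the following:

// Simplified illustration only; not Alexa's exact internal message formats.
interface NluOutput {
  intent: string;
  slots: Record<string, string>;
}

const nluOutput: NluOutput = {
  intent: "GetWeatherForecast",
  slots: { postalCode: "1000" }, // inferred from the customer's profile
};

// The fulfillment service returns a JSON document whose text Alexa will speak.
const fulfillmentResponse = {
  response: {
    outputSpeech: {
      type: "PlainText",
      text: "The weather today will be partly cloudy with highs of 16 degrees and lows of 8 degrees.",
    },
  },
};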

The last step of the process is to generate the voice of Alexa from the text. This is known as Text-To-Speech (TTS). As soon as the TTS starts to produce sound data, it is streamed back to my Amazon Echo device: “The weather today will be partly cloudy with highs of 16 degrees and lows of 8 degrees.” (I live in Europe, these are Celsius degrees 🙂 ). This Text-To-Speech process also heavily involves machine learning models to build a phrase that sounds natural in terms of pronunciations, rhythm, connection between words, intonation etc.

Alexa is one of the most popular hyperscale machine learning services in the world, with billions of inference requests every week. Of Alexa’s three main inference workloads (ASR, NLU, and TTS), TTS workloads initially ran on GPU-based instances. But the Alexa team decided to move to the Inf1 instances as fast as possible to improve the customer experience and reduce the service compute cost.

What is AWS Inferentia?
AWS Inferentia is a custom chip, built by AWS, to accelerate machine learning inference workloads and optimize their cost. Each AWS Inferentia chip contains four NeuronCores. Each NeuronCore implements a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, dramatically reducing latency and increasing throughput.

AWS Inferentia can be used natively from popular machine-learning frameworks like TensorFlow, PyTorch, and MXNet, with AWS Neuron. AWS Neuron is a software development kit (SDK) for running machine learning inference using AWS Inferentia chips. It consists of a compiler, run-time, and profiling tools that enable you to run high-performance and low latency inference.

Who Else is Using Amazon EC2 Inf1?
In addition to Alexa, Amazon Rekognition is also adopting AWS Inferentia. Running models such as object classification on Inf1 instances resulted in 8x lower latency and doubled throughput compared to running these models on GPU instances.

Customers, from Fortune 500 companies to startups, are using Inf1 instances for machine learning inference. For example, Snap Inc.​ incorporates machine learning (ML) into many aspects of Snapchat, and exploring innovation in this field is a key priority for them. Once they heard about AWS Inferentia, they collaborated with AWS to adopt Inf1 instances to help with ML deployment, including around performance and cost. They started with their recommendation models inference, and are now looking forward to deploying more models on Inf1 instances in the future.

Conde Nast, one of the world’s leading media companies, saw a 72% reduction in cost of inference compared to GPU-based instances for its recommendation engine. And Anthem, one of the leading healthcare companies in the US, observed 2x higher throughput compared to GPU-based instances for its customer sentiment machine learning workload.

How to Get Started with Amazon EC2 Inf1
You can start using Inf1 instances today.

If you prefer to manage your own machine learning application development platforms, you can get started by either launching Inf1 instances with AWS Deep Learning AMIs, which include the Neuron SDK, or you can use Inf1 instances via Amazon Elastic Kubernetes Service or Amazon ECS for containerized machine learning applications. To learn more about running containers on Inf1 instances, read this blog to get started on ECS and this blog to get started on EKS.

The easiest and quickest way to get started with Inf1 instances is via Amazon SageMaker, a fully managed service that enables developers to build, train, and deploy machine learning models quickly.
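For example, here's a minimal sketch that points a SageMaker endpoint at an Inf1 instance type using the AWS SDK for JavaScript v3; the model and endpoint names are placeholders, and it assumes the model has already been prepared for Inferentia (for example, compiled with SageMaker Neo or the Neuron SDK).

import {
  SageMakerClient,
  CreateEndpointConfigCommand,
  CreateEndpointCommand,
} from "@aws-sdk/client-sagemaker";

const sm = new SageMakerClient({});

async function deployOnInf1(): Promise<void> {
  // Assumes a SageMaker model named "my-inferentia-model" already exists (placeholder name).
  await sm.send(new CreateEndpointConfigCommand({
    EndpointConfigName: "my-inf1-endpoint-config",
    ProductionVariants: [{
      VariantName: "AllTraffic",
      ModelName: "my-inferentia-model",
      InstanceType: "ml.inf1.xlarge", // Inferentia-powered instance type
      InitialInstanceCount: 1,
    }],
  }));

  await sm.send(new CreateEndpointCommand({
    EndpointName: "my-inf1-endpoint",
    EndpointConfigName: "my-inf1-endpoint-config",
  }));
}

deployOnInf1().catch(console.error);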

Get started with Inf1 on Amazon SageMaker today.

— seb

PS: The team just released this video, check it out!

Field Notes: Powering the Connected Vehicle with Amazon Alexa

Post Syndicated from Amit Kumar original https://aws.amazon.com/blogs/architecture/field-notes-powering-the-connected-vehicle-with-amazon-alexa/

Alexa has improved the in-home experience and has the potential to greatly enhance the in-car experience. This blog is a continuation of my previous blog: Field Notes: Implementing a Digital Shadow of a Connected Vehicle with AWS IoT. Multiple OEMs (Original Equipment Manufacturers) showcased this capability during CES 2020. Use cases include: a person seated in the rear seat can play a song, control HVAC (heating, ventilation, and air conditioning), or pay for gas/coffee, all while using Alexa. In this blog, I cover how you can use Alexa with a connected vehicle to initiate a command, such as ‘Alexa, open my trunk’.

Solution Architecture

“Alexa, open my trunk”

The preceding architecture diagram shows the message flow in the following example:

  1. A user of a connected vehicle wants to open their trunk using an Alexa voice command. Alexa identifies the right intent based on the utterance and invokes a Lambda function. The Lambda function updates the device shadow with the desired state ({ "desired": { "trunk": "open" } }).
  2. The vehicle TCU has registered the callback function shadowRegisterDeltaCallback() and listens to the delta topics for the device shadow by subscribing to them. Whenever there is a difference between the desired and reported state, the registered callback is called with the delta payload, so the update performed in #1 is received in the delta callback.
  3. Now the vehicle must act on the desired state; in this case, it acts on the trunk status change. After performing the required action for the trunk change, the vehicle TCU updates the device shadow with the reported state ({ "reported": { "trunk": "open" } }).
  4. The web/mobile app is subscribed to the topic $aws/things/tcu/shadow/update/accepted. Therefore, as soon as the vehicle TCU updates the shadow, the web/mobile app receives the update and synchronizes the UI state. (The shadow documents exchanged at each step are sketched below.)
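Roughly, the shadow documents exchanged in this flow look like the following simplified sketch (the real messages also carry version numbers, metadata, and timestamps):

// 1. The Lambda function publishes a desired state to the device shadow.
const desiredUpdate = {
  state: { desired: { trunk: "open" } },
};

// 2. AWS IoT publishes the difference on the delta topic
//    ($aws/things/tcu/shadow/update/delta), which the TCU's registered callback receives.
const deltaMessage = {
  state: { trunk: "open" },
};

// 3. After acting on the command, the TCU reports the new state back.
const reportedUpdate = {
  state: { reported: { trunk: "open" } },
};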

As part of the previous blog, we implemented #2, #3 and #4. Let’s implement #1 and incorporate it into the solution.

The source code (vehicle-command) of this blog is available in this code repository.

The Alexa voice command requires the implementation of three key areas:

  1. Configure Alexa – which will listen to utterances and identify the right intent and invoke a Lambda function.
  2. Set up the Lambda function – which will interpret the command and invoke the AWS IoT Core device shadow API.
  3. Handle the command at the vehicle TCU and app – The vehicle TCU must register shadowRegisterDeltaCallback() so that any update to the device shadow triggers a callback message; the vehicle then performs the actual command and synchronizes the state with the web/mobile app.

Let’s open the trunk using an Alexa voice command. First, set up the environment:

  • Open the AWS Cloud9 IDE created in an earlier lab and follow the steps below.

Set up permanent credentials. Note: Alexa doesn’t work with temporary credentials, so configure the ASK command line interface (CLI) with permanent credentials.

  1. Open Cloud9 Preferences by clicking AWS Cloud9 > Preferences or by clicking on the “gear” icon in the upper right corner of the Cloud9 window.
  2. Select “AWS Settings”.
  3. Disable “AWS managed temporary credentials”.
  4. Run $ aws configure.
  5. Enter the Access Key and Secret Access Key of a user that has the required access credentials.
  6. Use us-east-1 as the region. It will be stored in ~/.aws/config.

Verify that everything worked by examining the file ~/.aws/credentials. It should resemble the following:

[default]
 aws_access_key_id = <access_key>
 aws_secret_access_key = <secrect_key>
 aws_session_token=

*Remove the aws_session_token line from the credentials file.

Next, install the ASK CLI:

$ npm install ask-cli --global

Initialize the ASK CLI by issuing the following command. This associates a profile with your Amazon developer credentials.

$ ask configure --no-browser

Confirm that you want to link your AWS account with Alexa:

Do you want to link your AWS account in order to host your Alexa skills? Yes

# At the end, the output should look as follows:

------------------------- Initialization Complete -------------------------
Here is the summary for the profile setup:
ASK Profile: default
AWS Profile: default
Vendor ID: MXXXXXXXXXX

As part of the previous blog, you have already cloned the following git repository in the AWS Cloud9 IDE. It contains baseline code to jump-start your implementation.

$ git clone

Configure Alexa Skills

The Alexa Developer console GUI could be used, but we are doing it programmatically so that it can be done at scale and supports versioning.

1. Open connected-vehicle-lab/vehicle-command/skill-package/skill.json. Two locales, en-US and en-IN, are defined in the base code for the Alexa command. Let’s add the en-GB locale to the JSON file under “manifest”/”publishingInformation”/”locales”. Similarly, you can add a locale for your preferred language:

"en-GB": {
"name": "vehicle-command",
"summary": "Control Vehicle using voice command",
"description": "Allow you to control vehicle using voice command",
"examplePhrases": [
    "Alexa open genie",
    "ask genie to lower window",
    "window up"
    ],
"keywords": []
}

If you are inserting it into the middle, make sure it is separated by a comma.

2. Let’s create a copy of the model connected-vehicle-lab/vehicle-command/skill-package/interactionModels/custom/en-US.json, rename it to en-GB.json, and add our intent.

  • We have “invocationName”: “genie”. Here, we are using “genie” as the command to invoke our Alexa skill. You can change it if needed.
  • The key elements in this JSON file are intents, slots, sample utterances, and slot types. Let’s define the slot type t_action_type with the values ‘open’, ‘close’, ‘lock’, and ‘unlock’ under “types”: [].
        {
        "name": "t_action_type",
        "values": [
            {
                "name": {
                "value": "unlock"
                }
            },
            {
                "name": {
                "value": "lock"
                }
            },
            {
                "name": {
                "value": "close"
                }
            },
            {
                "name": {
                "value": "open"
                }
            }
          ]
        }
  • Let’s add an intent named ‘TrunkCommandIntent’ under “intents”: [] and define sample utterances like ‘lock my trunk’ and ‘open trunk’. We are using slot types to simplify the utterances and understand the operation requested by the user.
        {
            "name": "TrunkCommandIntent",
            "slots": [
            {
                "name": "t_action",
                "type": "t_action_type"
            }
            ],
            "samples": [
                "{t_action} trunk",
                "trunk {t_action}",
                "{t_action} my trunk"
            ]
        }
  • Now add the same intent, slots, slot type, and sample utterances to the other locale files (en-US.json and en-IN.json) as well.

3. Let’s add the response messages under languageString.js (available at /connected-vehicle-lab/vehicle-command/lambda/custom):

TRUNK_OPEN: 'Trunk Open',
TRUNK_CLOSE: 'Trunk Close' 

If you are inserting it into the middle, make sure it is separated by a comma.

Set up the Lambda function

1. Add a Lambda function which will get invoked by Alexa. This Lambda function will handle the intent, invoke the IoT Core Device Shadow API, and execute the actual command to open/unlock or close/lock the trunk.

  • Open /connected-vehicle-lab/vehicle-command/lambda/custom/index.js and add our TrunkCommandIntentHandler:
const TrunkCommandIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'TrunkCommandIntent';
    },
    handle(handlerInput) {
        var t_action_value = handlerInput.requestEnvelope.request.intent.slots.t_action.value;
        console.log(t_action_value);
        var speakOutput;
        const obj = "trunk";
        // "open" and "unlock" map to the open command; "close" and "lock" map to close.
        if (t_action_value == "unlock" || t_action_value == "open")
        {
            updateDeviceShadow(obj, "open");
            speakOutput = handlerInput.t('TRUNK_OPEN');
        }
        else
        {
            updateDeviceShadow(obj, "close");
            speakOutput = handlerInput.t('TRUNK_CLOSE');
        }
        console.log(speakOutput);
        return handlerInput.responseBuilder
            .speak(speakOutput)
            //.reprompt('add a reprompt if you want to keep the session open for the user to respond')
            .getResponse();
    }
};
  • We have the updateDeviceShadow(obj, command) function, which actually invokes the IoT Core Device Shadow API:
function updateDeviceShadow (obj, command)
{
    shadowMessage.state.desired[obj] = command;
    var iotdata = new AWS.IotData({endpoint: ioT_EndPoint});
    var params = {
        payload: JSON.stringify(shadowMessage), /* required */
        thingName: deviceName /* required */
    };
    iotdata.updateThingShadow(params, function(err, data) {
        if (err)
            console.log(err, err.stack); // an error occurred
        else
            console.log(data);
        // reset the shadow
        shadowMessage.state.desired = {};
    });
}

2. Update the value of ioT_EndPoint from AWS IoT Core > Settings > Custom Endpoint

3. Add TrunkCommandIntentHandler to the request handlers:

exports.handler = Alexa.SkillBuilders.custom()
    .addRequestHandlers(
        LaunchRequestHandler,
        WindowCommandIntentHandler,
        DoorCommandIntentHandler,
        TrunkCommandIntentHandler,
        ...

4. Deploy Alexa Skills

$ cd ~/environment/connected-vehicle-lab/vehicle-command
$ ask deploy 

Handle Command at Vehicle tcu and App

For more detail on this section, refer to part 1 of this blog: Field Notes: Implementing a Digital Shadow of a Connected Vehicle with AWS IoT.

@ Vehicle TCU – tcuShadowRead.py has a trunk_handle() function to receive messages from the device shadow:

def trunk_handle(status):
  if status is not None:
    shadowClient.reportedShadowMessage['state']['reported']['trunk'] = status
    print ('Perform action on trunk status change : ' + str(status))

@ Web app – demo-car/js/websocket.js has a handleTrunkCommand function that receives a callback message as soon as any update happens on the device shadow:

//this function will be called by onMessageArrive
function handleTrunkCommand(trunkStatus) {
    obj = document.getElementsByClassName("action trunk")[0];
    obj.checked = trunkStatus == "open" ? true : false;
    console.log(obj.getAttribute("data-text") + " : " + obj.checked);
}

demo-car/js/demo-car.js has a handleTrunkCommand function to handle UI input and invoke the IoT Core Device Gateway API to update the desired state:

//this function will be called when user will click on trunk checkbox
    handleTrunkCommand: function(obj) {
        obj.checked ? demoCar.shadowMessage.state.desired.trunk = "open" : demoCar.shadowMessage.state.desired.trunk = "close";
        console.log(obj.getAttribute("data-text") + " : " + demoCar.shadowMessage.state.desired.trunk);
        demoCar.accessIoTDevice();
    },

Use Alexa skill to invoke a command

Let’s test our command, ‘Alexa, open my trunk’. We can use the command line and execute:

$ ask dialog --locale "en-GB"

Using the Alexa GUI provides an interesting visualization, as shown in the following screenshot.

  1. Open the Alexa GUI, select the ‘vehicle command’ skill, and select the Test tab. Allow “developer.amazon.com” to use your microphone when prompted.
  2. Open the demo.html web app side by side with the Alexa GUI to check that the actual operation happened at the vehicle TCU and that the status is synchronized with the virtual car model.
  3. Now test the Alexa skill. You can use a voice command as well. You can say or type ‘ask genie’.

Alexa developer console

Clean Up

What a fun exploration this has been! Now clean up the AWS resources created for this and the previous post to avoid incurring any future AWS service costs. Resources created by the CDK can be deleted by deleting the stack on the CloudFormation console. Resources created manually need to be deleted individually.

Conclusion

In this blog post, I showed how you can enable voice commands for a connected vehicle and enhance the in-vehicle user experience. Similarly, you can also extend this solution for use cases like ‘Alexa, open my garage’. The AWS IoT Core Device Shadow API does all the heavy lifting in this case. Any update in the device shadow allows both the device and the user application to act. The Alexa skill acts as an interface to capture the user command and invoke the Lambda function.

Since these are all serverless services, this implementation can scale without any change to the application, and you only pay when someone invokes a command. Creating an engaging, high-quality interaction with Alexa in the vehicle is critical. You can refer to the Alexa Automotive Documentation for an Alexa Built-in automotive experience.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

 

Registration for Amazon re:MARS 2020 is OPEN 🎉

Post Syndicated from Alejandra Quetzalli original https://aws.amazon.com/blogs/aws/registration-for-amazon-remars-2020-is-open-%F0%9F%8E%89/

Can you believe it? It’s almost time for re:MARS 2020 🎉!

It’ll be from June 16-19 in Las Vegas, NV, with standard pricing at $1999. This year we have an all-new Amazon re:MARS Developer Day for engineers and developers to engage with Amazon product teams. We’ll also have breakout sessions just like last year where leaders, thinkers, and technical advisors can get inspired by how Amazon leaders and industry peers are applying these technologies today.

You will be able to choose from over 100 main event sessions and activities featuring visionaries working on the future, organizations applying these technologies today, and even take a peek behind the curtain at Amazon’s own implementations. You’ll also see new exhibits and interact with even more peers and experts than last year!

Get ready to hear from Jeff Bezos, Jon Favreau, and more at Amazon’s event dedicated to Machine Learning 🧠, Automation and Voice 🔊, Robotics 🤖, and Space 🛰.

Sad because you missed it last year?

Don’t be! 🙃

You can check out last year’s recorded sessions on YouTube, the Day 1 blog post, the 2019 Session Catalog, and even Jeff Bezos’s own tweet via the links provided.

What to get excited about for Amazon re:MARS 2020

⚠New for 2020! Amazon re:MARS Developer Day ⚠

Make sure to register in advance and arrive early for this deep dive on technical content with all kinds of Amazon product leaders. This Amazon re:MARS Developer Day — included in the conference pass — will host a keynote and a series of tracks including diverse AWS Machine Learning services, AWS RoboMaker, Alexa Skills, and Alexa for Device Makers.

🧠Machine Learning
Get ready to have your mind blown. Come learn about pre-trained AI services for computer vision, language, recommendations, and forecasting. We’ll also teach you how to build, train, and deploy machine learning models at scale and even how to build custom models with support for open-source frameworks.

🤖AWS RoboMaker
Hear from AWS Robotics leaders about the AWS RoboMaker service, how customers are using it to advance their robotics initiatives, and learn about the latest features. Follow up with deep dive sessions led by the product team to help you understand the concepts, onboard, and get started building.

🔊Alexa Skills
Have you ever wanted to take a behind-the-scenes look at Alexa developer services? Learn how Alexa can help you innovate in conversational AI and engage customers through voice; get the latest voice enablement and conversational interface features; and attend deep dives on technologies like Alexa Conversations.

🛠Alexa for Device Makers
If you enjoy tinkering with hardware, you’re going to love our sessions on building Alexa into appliances, cameras, computers, headsets, TVs and more! A voice interface makes it natural to control and access content and services, and joins your device to the expanding network of Alexa devices, skills and services. Learn from Alexa leaders about the latest innovations and break out into deep dive sessions on the tech.

🗣Our Keynote Speakers

  • Jeff Bezos, Founder & CEO of Amazon
  • Jeff Wilke, CEO, Worldwide Consumer, Amazon
  • Dr. Cynthia Breazeal, Professor of Media Arts and Sciences, Head of Personal Robots Group, MIT
  • Dr. Kate Darling, Leading Expert in Social Robotics and MIT Media Lab Research Specialist
  • Dr. Bethany Ehlmann, Professor of Planetary Science, Caltech, Research Scientist at Jet Propulsion Laboratory
  • Dr. Ayanna Howard, Roboticist and Chair, School of Interactive Computing, Georgia Tech
  • Dr. Maja Matarić, Chaired and Distinguished Professor, Computer Science, Neuroscience, and Pediatrics, USC
  • Dr. Sara Seager, Planetary Scientist and Astrophysicist, MIT
  • Boyan Slat, CEO and Founder of The Ocean Cleanup
  • Dr. Eric Topol, Executive Vice President, Scripps Research, Founder and Director, Scripps Research Translational Institute
  • Jon Favreau, Director, Producer, Writer, and Actor with Ben Grossman, CEO, Magnopus, and Mixed Reality Director

👩🏿‍💻Ready to register for Amazon re:MARS 2020?

Ready to join the fun? Check out the re:MARS 2020 website and register. (Psst!🤫Academics and students registering with a .edu email address can use discount code ACAD20REMARS.)

🌗Partner Sponsorship

Are you an Amazon partner (APN) looking to showcase your expertise in Machine Learning, Automation, Robotics or Space? The re:MARS sponsorship program gives sponsors access to an experiential (i.e. learning through reflection on doing) Tech Showcase that highlights their areas of expertise. Select sponsors may also offer executive thought leadership in sponsor-led breakout sessions. To learn more about this program and view all of the available options, please view the sponsorship prospectus. Questions? 📧Email our team📧 for more information.

#AstronautsAttendForFree 👩‍🚀

Will I see you there?

I hope so! All you have to do is register. 🥳


Thank you for your time!

~Alejandra 💁🏻‍♀️ &  Canela🐾

Learn about AWS Services & Solutions – April AWS Online Tech Talks

Post Syndicated from Robin Park original https://aws.amazon.com/blogs/aws/learn-about-aws-services-solutions-april-aws-online-tech-talks/

AWS Tech Talks

Join us this April to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Blockchain

May 2, 2019 | 11:00 AM – 12:00 PM PT – How to Build an Application with Amazon Managed Blockchain – Learn how to build an application on Amazon Managed Blockchain with the help of demo applications and sample code.

Compute

April 29, 2019 | 1:00 PM – 2:00 PM PT – How to Optimize Amazon Elastic Block Store (EBS) for Higher Performance – Learn how to optimize performance and spend on your Amazon Elastic Block Store (EBS) volumes.

May 1, 2019 | 11:00 AM – 12:00 PM PT – Introducing New Amazon EC2 Instances Featuring AMD EPYC and AWS Graviton Processors – See how new Amazon EC2 instance offerings that feature AMD EPYC processors and AWS Graviton processors enable you to optimize performance and cost for your workloads.

Containers

April 23, 2019 | 11:00 AM – 12:00 PM PT – Deep Dive on AWS App Mesh – Learn how AWS App Mesh makes it easy to monitor and control communications for services running on AWS.

March 22, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive Into Container Networking – Dive deep into microservices networking and how you can build, secure, and manage the communications into, out of, and between the various microservices that make up your application.

Databases

April 23, 2019 | 1:00 PM – 2:00 PM PT – Selecting the Right Database for Your Application – Learn how to develop a purpose-built strategy for databases, where you choose the right tool for the job.

April 25, 2019 | 9:00 AM – 10:00 AM PT – Mastering Amazon DynamoDB ACID Transactions: When and How to Use the New Transactional APIs – Learn how Amazon DynamoDB’s new transactional APIs simplify the developer experience of making coordinated, all-or-nothing changes to multiple items both within and across tables.

DevOps

April 24, 2019 | 9:00 AM – 10:00 AM PT – Running .NET applications with AWS Elastic Beanstalk Windows Server Platform V2 – Learn about the easiest way to get your .NET applications up and running on AWS Elastic Beanstalk.

Enterprise & Hybrid

April 30, 2019 | 11:00 AM – 12:00 PM PT – Business Case Teardown: Identify Your Real-World On-Premises and Projected AWS Costs – Discover tools and strategies to help you as you build your value-based business case.

IoT

April 30, 2019 | 9:00 AM – 10:00 AM PT – Building the Edge of Connected Home – Learn how AWS IoT edge services are enabling smarter products for the connected home.

Machine Learning

April 24, 2019 | 11:00 AM – 12:00 PM PT – Start Your Engines and Get Ready to Race in the AWS DeepRacer League – Learn more about reinforcement learning, how to build a model, and compete in the AWS DeepRacer League.

April 30, 2019 | 1:00 PM – 2:00 PM PT – Deploying Machine Learning Models in Production – Learn best practices for training and deploying machine learning models.

May 2, 2019 | 9:00 AM – 10:00 AM PT – Accelerate Machine Learning Projects with Hundreds of Algorithms and Models in AWS Marketplace – Learn how to use third party algorithms and model packages to accelerate machine learning projects and solve business problems.

Networking & Content Delivery

April 23, 2019 | 9:00 AM – 10:00 AM PT – Smart Tips on Application Load Balancers: Advanced Request Routing, Lambda as a Target, and User Authentication – Learn tips and tricks about important Application Load Balancer (ALB) features that were recently launched.

Productivity & Business Solutions

April 29, 2019 | 11:00 AM – 12:00 PM PT – Learn How to Set up Business Calling and Voice Connector in Minutes with Amazon Chime – Learn how Amazon Chime Business Calling and Voice Connector can help you with your business communication needs.

May 1, 2019 | 1:00 PM – 2:00 PM PT – Bring Voice to Your Workplace – Learn how you can bring voice to your workplace with Alexa for Business.

Serverless

April 25, 2019 | 11:00 AM – 12:00 PM PT – Modernizing .NET Applications Using the Latest Features on AWS Development Tools for .NET – Get a deep dive into and demonstration of the latest updates to the AWS SDK and tools for .NET that make development even easier, more powerful, and more productive.

May 1, 2019 | 9:00 AM – 10:00 AM PT – Customer Showcase: Improving Data Processing Workloads with AWS Step Functions’ Service Integrations – Learn how innovative customers like SkyWatch are coordinating AWS services using AWS Step Functions to improve productivity.

Storage

April 24, 2019 | 1:00 PM – 2:00 PM PT – Amazon S3 Glacier Deep Archive: The Cheapest Storage in the Cloud – See how Amazon S3 Glacier Deep Archive offers the lowest cost storage in the cloud, at prices significantly lower than storing and maintaining data in on-premises magnetic tape libraries or archiving data offsite.

Amazon Cognito for Alexa Skills User Management

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/amazon-cognito-for-alexa-skills-user-management/

This post is courtesy of Tom Moore, Solutions Architect – AWS

If your Alexa skill is a general information skill, such as a random facts skill or a news feed, you can provide information to any user who has an Alexa enabled device with your skill turned on. However, sometimes you need to know who the user is before you can provide information to them. You can fulfill this user management scenario with Amazon Cognito user pools.

This blog post will show you how to set up an Amazon Cognito user pool and how to use it to perform authentication for both your Alexa skill and a webpage.

Getting started

In order to complete the steps in this blog post you will need the following:

  • An AWS account
  • An Amazon developer account
  • A basic understanding of Amazon Alexa skill development

This example will use a sample Alexa skill deployed from one of the available skill templates. To fully develop your own Alexa skill, you will need a professional code editor or IDE, as well as knowledge of Alexa skill development. It is beyond the scope of this blog post to cover these details.

Before you begin, consider the set of services that you will use and their availability. To implement this solution, you will use Amazon Cognito for user accounts and AWS Lambda for the Alexa function.

Today, AWS Lambda supports calls from Alexa in the following regions:

  • Asia Pacific (Tokyo)
  • EU (Ireland)
  • US East (N. Virginia)
  • US West (Oregon)

These four regions also support Amazon Cognito. While it is possible to use Amazon Cognito in a different region than your Lambda function, I recommend choosing one of the four listed regions to deploy your entire solution for simplicity.

Setting up Amazon Cognito

To set up Amazon Cognito, you’ll need to create a user pool, create an Alexa client, and set up your authentication UI.

Create your Amazon Cognito user pool

  1. Sign in to the Amazon Cognito console. You might be prompted for your AWS credentials.
  2. From the console navigation bar, choose one of the four regions listed above. For the purposes of this blog, I’ll use US East (N. Virginia).
  3. Choose Manage User Pools.
  4. Choose Create a user pool, and provide a name for your user pool. Remember that user pools may be used across multiple applications and platforms including web, mobile, and Alexa. The pool name does not have to be globally unique, but it should be unique in your account so you can easily find the pool when needed. I have named my user pool “Alexa Demo.”
  5. After you name your pool, choose Step through settings. You can accept the defaults for the remaining steps to set up your user pool, with the following exceptions:
    • Choose email address or phone number as the sign-in method, and then choose Allow both email addresses and phone numbers.
    • Enable Multi-Factor Authentication (MFA). You can use Amazon Cognito to enforce Multi-Factor Authentication for your users. Amazon Cognito also allows you to validate email addresses and phone numbers when the user is created. The verification process for phone numbers requires that Amazon Cognito can access Amazon Simple Notification Service (SNS) in order to dispatch the SMS message for phone number verification. This access is granted through an AWS Identity and Access Management (IAM) service role, which the Amazon Cognito setup process can create for you automatically.
  6. To set up Multi-Factor Authentication:
    • Under Do you want to enable Multi-Factor Authentication (MFA), choose Optional.
    • Choose SMS text message as a second authentication factor, and then choose the options you want to be verified.
    • Choose Create Role, and then choose Next Step.
    • For more information, see Adding Multi-Factor Authentication (MFA) to a User Pool. Because the verification process sends SMS messages, some costs will be incurred on your account. If you have not already done so, you will need to request a spending increase on your account to accommodate those charges. To learn more about costs for SMS messages, see SMS Text Messages MFA.
  7. Review the selections that you have made. If you are happy with the settings that you have selected, choose Create Pool.
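If you prefer to script this setup rather than click through the console, a comparable pool can be created with the AWS SDK for Python (Boto3). This is a minimal sketch, not an exact replica of the walkthrough above; adjust the attributes to your own requirements.

```python
import boto3

cognito_idp = boto3.client("cognito-idp", region_name="us-east-1")

pool = cognito_idp.create_user_pool(
    PoolName="Alexa Demo",
    UsernameAttributes=["email", "phone_number"],  # sign in with email or phone
    AutoVerifiedAttributes=["email"],              # verify email addresses on sign-up
    # The console walkthrough also enables optional SMS MFA. Doing that through the
    # API additionally requires SmsConfiguration (an SNS caller IAM role), which the
    # console can create for you, so it is omitted from this sketch.
)

user_pool_id = pool["UserPool"]["Id"]
print("User pool ID:", user_pool_id)
```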

Create the Alexa client

By completing the steps above, you will have created an Amazon Cognito user pool. The next task in setting up account linking is to create the Alexa client definition inside the Amazon Cognito user pool.

  1. From the Amazon Cognito console, choose Manage User Pools. Select the user pool you just created.
  2. From the General settings menu, choose App Clients to set up applications that will connect to your Amazon Cognito user pool.
  3. Choose Add an App Client, and provide the App client name. In this example, I have chosen “Alexa.” Leave the rest of the options set to default and choose Create App Client to generate the client record for Alexa to use. This process creates an app client ID and a secret.
    To learn more, see Configuring a User Pool App Client.
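For reference, the equivalent Boto3 call is sketched below. The pool ID is a placeholder, and GenerateSecret must be true because Alexa account linking needs a client secret.

```python
import boto3

cognito_idp = boto3.client("cognito-idp", region_name="us-east-1")

client = cognito_idp.create_user_pool_client(
    UserPoolId="us-east-1_EXAMPLE",  # placeholder: use the ID of the pool you created
    ClientName="Alexa",
    GenerateSecret=True,             # account linking requires a client ID and secret
)

app_client_id = client["UserPoolClient"]["ClientId"]
app_client_secret = client["UserPoolClient"]["ClientSecret"]
```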

Set up your Authentication UI

Amazon Cognito can set up and manage the Authentication UI for your application so that you don’t have to host your own sign-in and sign-up UI for your Alexa application.

  1. From the App integration menu, choose Domain name.
  2. For this example, I will use an Amazon Cognito domain. Provide a subdomain name and choose Check Availability. If the option is available, choose Save Changes.
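Scripted, the same step is a single call. The subdomain below is the example prefix used later in this post; yours must be globally unique, and the pool ID is again a placeholder.

```python
import boto3

cognito_idp = boto3.client("cognito-idp", region_name="us-east-1")

cognito_idp.create_user_pool_domain(
    Domain="mooretom-alexademo",     # hosted UI prefix: https://<domain>.auth.<region>.amazoncognito.com
    UserPoolId="us-east-1_EXAMPLE",  # placeholder pool ID
)
```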

Setting up the Alexa skill

Now you can create the Alexa skill and link it back to the Amazon Cognito user pool that you created.

For step-by-step instructions for creating a new Alexa skill, see Create a New Skill in the Alexa documentation. Follow those instructions, with the following specific selections:

Under Choose a model to add to your skill, keep the default option of Custom.


Under Choose a method to host your skill’s back end resources, keep the default selection of Self Hosted.

For a custom skill, you can choose a predefined skill template for the back end code for your skill. For this example, I’ll use a Fact Skill template as a starting point. The skill template prepopulates the Lambda function that your Alexa skill uses.

After you create your sample skill, you’ll need to complete a few basic operations:

  • Set the invocation name of the skill
  • Prepare a Lambda function to handle the skill invocation
  • Connect the Alexa skill to your Lambda function
  • Test your skill

A full description of these steps is beyond the scope of this blog post. To learn more, see Manage Skills in the Developer Console. Once you have completed these steps, return to this post to continue linking your skill with Amazon Cognito.
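For orientation only, here is roughly what a minimal fact-skill handler looks like when written against the raw Alexa request/response JSON rather than the ASK SDK. This is an illustrative sketch, not the skill template’s actual code, and the facts themselves are placeholders.

```python
import random

FACTS = [
    "A year on Mercury is just 88 days long.",
    "The Sun contains about 99.8 percent of the mass of the solar system.",
]

def lambda_handler(event, context):
    request_type = event["request"]["type"]

    if request_type in ("LaunchRequest", "IntentRequest"):
        speech = random.choice(FACTS)
    else:
        speech = "Goodbye."

    # Minimal Alexa Skills Kit response envelope
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```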

Linking Alexa with Amazon Cognito

To link your Alexa skill with Amazon Cognito user pools, you’ll need to update both the Amazon Cognito and Alexa interfaces with data from the other service. I recommend that you have both interfaces open in different tabs of your web browser to make it easy to move back and forth between the two services.

  1. In Amazon Cognito, open the user pool that you created. Under General Settings, choose App Clients. Next, choose Show Details in the section for the Alexa client that you set up earlier. Make a note of the App client ID and the App client secret. These will be needed to configure Alexa skills account linking.
  2. Switch over to your Alexa developer account and open the skill that you are linking to Amazon Cognito. Choose Account Linking.
  3. Select the option to allow users to link accounts. Leave the default option for an Auth Code Grant selected.
    The Authorization URI will be made up of the following template:

    https://{Sub-Domain}.auth.{Region}.amazoncognito.com/oauth2/authorize?response_type=code&redirect_uri=https://pitangui.amazon.com/api/skill/link/{Vendor ID}

  4. Replace {Sub-Domain} with the subdomain that you selected when you set up your Amazon Cognito user pool. In my example, it was “mooretom-alexademo”.
  5. Replace {Vendor ID} with the vendor ID for your Alexa development account. The easiest way to find this is to scroll down to the bottom of the account linking page: your vendor ID is the final piece of each of the Redirect URLs.
  6. Replace {Region} with the name of the region you are deploying your resources into. In my example, it was us-east-1.
  7. The Access Token URI will be made up of the following template:
    https://{Sub-Domain}.auth.{region}.amazoncognito.com/oauth2/token

  8. Enter the app client ID and the app client secret that you noted above, or return to the Amazon Cognito tab to copy and paste them.
  9. Choose Save at the top of the page. Make a note of the redirect URLs at the bottom of the page, as these will be required to finish the Amazon Cognito configuration in the next step.
  10. Switch back to your Amazon Cognito user pool. Under App Integration, choose App Client Settings. You will see the integration settings for the Alexa client in the details panel on the right.
  11. Under Enabled Identity Providers, choose Cognito User Pool.
  12. Under Callback URL(s) enter in the three callback URLs from your Alexa skill page. For example, here are all three URLs separated by commas:
    https://alexa.amazon.co.jp/api/skill/link/{Vendor ID},
    https://layla.amazon.com/api/skill/link/{Vendor ID},
    https://pitangui.amazon.com/api/skill/link/{Vendor ID}

    The Sign Out URL will follow this template:

    https://{SubDomain}.auth.us-east-1.amazoncognito.com/logout?response_type=code

  13. Under Allowed OAuth Flows, select Authorization code grant.
  14. Under Allowed OAuth Scopes, select phone, email, and openid.
  15. Choose Save Changes.

Testing your Alexa skill

After you have linked Alexa with Amazon Cognito, return to the Alexa developer console and build your model. Then log into the Alexa application on your mobile phone and enable the skill. When the skill is enabled, you will be able to configure access and create a new user with phone number authentication included automatically.

After going through the account creation steps, you can return to your Amazon Cognito user pool and see the new user you created.


Conclusion

By completing the steps in this post, you have leveraged Amazon Cognito as a source of authentication for your Amazon Alexa skill. Amazon Cognito provides user authentication as well as sign-in and sign-up functionality without requiring you to write any code. You can now use the Amazon Cognito user ID to personalize the user experience for your Alexa skill. You can also use Amazon Cognito to authenticate your users to a companion application or website.
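To put the linked identity to work inside your skill, the Lambda function can read the user’s access token from the incoming Alexa request and call the userInfo endpoint of your Amazon Cognito domain. The sketch below is illustrative: it uses the example subdomain and region from this post, and error handling is omitted.

```python
import json
import urllib.request

# Built from the Amazon Cognito domain configured earlier in this post
USER_INFO_URL = "https://mooretom-alexademo.auth.us-east-1.amazoncognito.com/oauth2/userInfo"

def get_linked_user(event):
    """Return the linked user's profile attributes, or None if not linked."""
    token = event["context"]["System"]["user"].get("accessToken")
    if not token:
        return None  # the user has not linked their account yet
    req = urllib.request.Request(
        USER_INFO_URL, headers={"Authorization": "Bearer " + token}
    )
    with urllib.request.urlopen(req) as response:
        # Includes attributes such as sub, email, and phone_number
        return json.loads(response.read())
```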

Chat with the Alexa Prize Finalists Today

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/chat-with-the-alexa-prize-finalists-today/

The Alexa Prize is an annual competition designed to spur academic research and development in the field of conversational artificial intelligence. This year, students are working to build socialbots that can engage in a fun, high-quality conversation on popular societal topics for up to 20 minutes. In order to succeed at this task, the teams must innovate in a broad range of areas including knowledge acquisition, natural language understanding, natural language generation, context modeling, common-sense reasoning, and dialog planning. They use the Alexa Skills Kit (ASK) to construct their bot and to receive real-time feedback on its performance.

Last month the socialbots from Heriot-Watt University (Alana), Czech Technical University (Alquist), and UC Davis (Gunrock) were chosen as the finalists (watch the Twitch stream to learn more). The competition was tough, with points assigned for the potential scientific contribution to the field, the technical merit of the approach, the overall novelty of the idea, and the team’s ability to deliver on their vision.

Time to Chat
We’re now ready for the final round.

Step up to your nearest Alexa-powered device and say “Alexa, let’s chat!” You will be connected to one of the three socialbots (chosen at random) and can converse with it for as long as you would like. When you are through, say “Alexa, stop,” and rate the socialbot when prompted. You can also provide additional feedback for the team. We’ll announce the winner at AWS re:Invent 2018 in Las Vegas.

Jeff;

PS – If you are ready to build your very own Alexa Skill, check out the Alexa Skills Kit Tutorials and subscribe to the Alexa Blogs.


Sending Inaudible Commands to Voice Assistants

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/sending_inaudib.html

Researchers have demonstrated the ability to send inaudible commands to voice assistants like Alexa, Siri, and Google Assistant.

Over the last two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online – simply with music playing over the radio.

A group of students from University of California, Berkeley, and Georgetown University showed in 2016 that they could hide commands in white noise played over loudspeakers and through YouTube videos to get smart devices to turn on airplane mode or open a website.

This month, some of those Berkeley researchers published a research paper that went further, saying they could embed commands directly into recordings of music or spoken text. So while a human listener hears someone talking or an orchestra playing, Amazon’s Echo speaker might hear an instruction to add something to your shopping list.

Serverless Architectures with AWS Lambda: Overview and Best Practices

Post Syndicated from Andrew Baird original https://aws.amazon.com/blogs/architecture/serverless-architectures-with-aws-lambda-overview-and-best-practices/

For some organizations, the idea of “going serverless” can be daunting. But with an understanding of best practices – and the right tools – many serverless applications can be fully functional with only a few lines of code and little else.

Examples of fully-serverless-application use cases include:

  • Web or mobile backends – Create fully serverless mobile applications or websites by creating user-facing content in a native mobile application or static web content in an S3 bucket. Then have your front-end content integrate with Amazon API Gateway as a backend service API. Lambda functions will then execute the business logic you’ve written for each of the API Gateway methods in your backend API.
  • Chatbots and virtual assistants – Build new serverless ways to interact with your customers, like customer support assistants and bots ready to engage customers on your company-run social media pages. The Alexa Skills Kit (ASK) and Amazon Lex can apply natural-language understanding to voice and freeform-text input so that a Lambda function you write can respond and engage with users intelligently.
  • Internet of Things (IoT) backends – AWS IoT has direct integration for device messages to be routed to and processed by Lambda functions. That means you can implement serverless backends for highly secure, scalable IoT applications for uses like connected consumer appliances and intelligent manufacturing facilities.

Using AWS Lambda as the logic layer of a serverless application can enable faster development and greater experimentation – and innovation – than in a traditional, server-based environment.
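As an illustration of how little code the first use case above can require, here is a sketch of a complete web-backend handler for an API Gateway proxy integration. The query parameter and message are placeholders, not part of the whitepaper.

```python
import json

def lambda_handler(event, context):
    # 'event' is the API Gateway (proxy integration) request payload
    name = (event.get("queryStringParameters") or {}).get("name", "world")

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello, {}!".format(name)}),
    }
```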

We recently published the “Serverless Architectures with AWS Lambda: Overview and Best Practices” whitepaper to provide the guidance and best practices you need to write better Lambda functions and build better serverless architectures.

Once you’ve finished reading the whitepaper, here are a few additional resources I recommend as your next step:

  1. If you would like to better understand some of the architecture pattern possibilities for serverless applications: Thirty Serverless Architectures in 30 Minutes (re:Invent 2017 video)
  2. If you’re ready to get hands-on and build a sample serverless application: AWS Serverless Workshops (GitHub Repository)
  3. If you’ve already built a serverless application and you’d like to ensure your application has been Well Architected: The Serverless Application Lens: AWS Well Architected Framework (Whitepaper)

About the Author


Andrew Baird is a Sr. Solutions Architect for AWS. Prior to becoming a Solutions Architect, Andrew was a developer, including time as an SDE with Amazon.com. He has worked on large-scale distributed systems, public-facing APIs, and operations automation.

AIY Projects 2: Google’s AIY Projects Kits get an upgrade

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/google-aiy-projects-2/

After the outstanding success of their AIY Projects Voice and Vision Kits, Google has announced the release of upgraded kits, complete with Raspberry Pi Zero WH, Camera Module, and preloaded SD card.

Google AIY Projects Vision Kit 2 Raspberry Pi

Google’s AIY Projects Kits

Google launched the AIY Projects Voice Kit last year, first as a cover gift with The MagPi magazine and later as a standalone product.

Makers needed to provide their own Raspberry Pi for the original kit. The new kits include everything you need, from Pi to SD card.

Within a DIY cardboard box, makers were able to assemble their own voice-activated AI assistant akin to Amazon’s Alexa, Apple’s Siri, and Google’s own Google Home Assistant. The Voice Kit was an instant hit that spurred no end of maker videos and tutorials, including our own free tutorial for controlling a robot using voice commands.

Later in the year, the team followed up the success of the Voice Kit with the AIY Projects Vision Kit — the same cardboard box hosting a camera perfect for some pretty nifty image recognition projects.

For more on the AIY Voice Kit, here’s our release video hosted by the rather delightful Rob Zwetsloot.

AIY Projects adds natural human interaction to your Raspberry Pi

This first AIY Projects kit taps into the Google Assistant SDK and Cloud Speech API using the AIY Projects Voice HAT (Hardware Accessory on Top) board, stereo microphone, and speaker (included free with the magazine).

AIY Projects 2

So what’s new with version 2 of the AIY Projects Voice Kit? The kit now includes the recently released Raspberry Pi Zero WH, our Zero W with added pre-soldered header pins for instant digital making accessibility. Purchasers of the kits will also get a micro SD card with preloaded OS to help them get started without having to set the card up themselves.

Google AIY Projects Vision Kit 2 Raspberry Pi

Everything you need to build your own Raspberry Pi-powered Google voice assistant

In the newly upgraded AIY Projects Vision Kit v1.2, makers are also treated to an official Raspberry Pi Camera Module v2, the latest model of our add-on camera.

Google AIY Projects Vision Kit 2 Raspberry Pi

“Everything you need to get started is right there in the box,” explains Billy Rutledge, Google’s Director of AIY Projects. “We knew from our research that even though makers are interested in AI, many felt that adding it to their projects was too difficult or required expensive hardware.”

Google AIY Projects Vision Kit 2 Raspberry Pi

Google is also hard at work producing AIY Projects companion apps for Android, iOS, and Chrome. The Android app is available now to coincide with the launch of the upgraded kits, with the other two due for release soon. The app supports wireless setup of the AIY Kit, though avid coders will still be able to hack theirs to better suit their projects.

Google has also updated the AIY Projects website with an AIY Models section highlighting a range of neural network projects for the kits.

Get your kit

The updated Voice and Vision Kits were announced last night, and in the US they are available now from Target. UK-based makers should be able to get their hands on them this summer — keep an eye on our social channels for updates and links.

The post AIY Projects 2: Google’s AIY Projects Kits get an upgrade appeared first on Raspberry Pi.

Innovation Flywheels and the AWS Serverless Application Repository

Post Syndicated from Tim Wagner original https://aws.amazon.com/blogs/compute/innovation-flywheels-and-the-aws-serverless-application-repository/

At AWS, our customers have always been the motivation for our innovation. In turn, we’re committed to helping them accelerate the pace of their own innovation. It was in the spirit of helping our customers achieve their objectives faster that we launched AWS Lambda in 2014, eliminating the burden of server management and enabling AWS developers to focus on business logic instead of the challenges of provisioning and managing infrastructure.


In the years since, our customers have built amazing things using Lambda and other serverless offerings, such as Amazon API Gateway, Amazon Cognito, and Amazon DynamoDB. Together, these services make it easy to build entire applications without the need to provision, manage, monitor, or patch servers. By removing much of the operational drudgery of infrastructure management, we’ve helped our customers become more agile and achieve faster time-to-market for their applications and services. By eliminating cold servers and cold containers with request-based pricing, we’ve also eliminated the high cost of idle capacity and helped our customers achieve dramatically higher utilization and better economics.

After we launched Lambda, though, we quickly learned an important lesson: A single Lambda function rarely exists in isolation. Rather, many functions are part of serverless applications that collectively deliver customer value. Whether it’s the combination of event sources and event handlers, serverless web apps that combine APIs and functions for dynamic content with static content repositories, or collections of functions that together provide a microservice architecture, our customers were building and delivering serverless architectures for every conceivable problem. Despite the economic and agility benefits that hundreds of thousands of AWS customers were enjoying with Lambda, we realized there was still more we could do.

How Customer Feedback Inspired Us to Innovate

We heard from our customers that getting started—either from scratch or when augmenting their implementation with new techniques or technologies—remained a challenge. When we looked for serverless assets to share, we found stellar examples built by serverless pioneers that represented a multitude of solutions across industries.

There were apps to facilitate monitoring and logging, to process image and audio files, to create Alexa skills, and to integrate with notification and location services. These apps ranged from “getting started” examples to complete, ready-to-run assets. What was missing, however, was a unified place for customers to discover this diversity of serverless applications and a step-by-step interface to help them configure and deploy them.

We also heard from customers and partners that building their own ecosystems—ecosystems increasingly composed of functions, APIs, and serverless applications—remained a challenge. They wanted a simple way to share samples, create extensibility, and grow consumer relationships on top of serverless approaches.


We built the AWS Serverless Application Repository to help solve both of these challenges by offering publishers and consumers of serverless apps a simple, fast, and effective way to share applications and grow user communities around them. Now, developers can easily learn how to apply serverless approaches to their implementation and business challenges by discovering, customizing, and deploying serverless applications directly from the Serverless Application Repository. They can also find libraries, components, patterns, and best practices that augment their existing knowledge, helping them bring services and applications to market faster than ever before.

How the AWS Serverless Application Repository Inspires Innovation for All Customers

Companies that want to create ecosystems, share samples, deliver extensibility and customization options, and complement their existing SaaS services use the Serverless Application Repository as a distribution channel, producing apps that can be easily discovered and consumed by their customers. AWS partners like HERE have introduced their location and transit services to thousands of companies and developers. Partners like Datadog, Splunk, and TensorIoT have showcased monitoring, logging, and IoT applications to the serverless community.

Individual developers are also publishing serverless applications that push the boundaries of innovation—some have published applications that leverage machine learning to predict the quality of wine, while others have published applications that monitor cryptocurrencies, instantly build beautiful image galleries, or create fast and simple surveys. All of these publishers are using serverless apps, and the Serverless Application Repository, as the easiest way to share what they’ve built. Best of all, their customers and fellow community members can find and deploy these applications with just a few clicks in the Lambda console. Apps in the Serverless Application Repository are free of charge, making it easy to explore new solutions or learn new technologies.

Finally, we at AWS continue to publish apps for the community to use. From apps that leverage Amazon Cognito to sync user data across applications to our latest collection of serverless apps that enable users to quickly execute common financial calculations, we’re constantly looking for opportunities to contribute to community growth and innovation.

At AWS, we’re more excited than ever by the growing adoption of serverless architectures and the innovation that services like AWS Lambda make possible. Helping our customers create and deliver new ideas drives us to keep inventing ways to make building and sharing serverless apps even easier. As the number of applications in the Serverless Application Repository grows, so too will the innovation that it fuels for both the owners and the consumers of those apps. With the general availability of the Serverless Application Repository, our customers become more than the engine of our innovation—they become the engine of innovation for one another.

To browse, discover, deploy, and publish serverless apps in minutes, visit the Serverless Application Repository. Go serverless—and go innovate!

Dr. Tim Wagner is the General Manager of AWS Lambda and Amazon API Gateway.

Mission Space Lab flight status announced!

Post Syndicated from Erin Brindley original https://www.raspberrypi.org/blog/mission-space-lab-flight-status-announced/

In September of last year, we launched our 2017/2018 Astro Pi challenge with our partners at the European Space Agency (ESA). Students from ESA member and associate countries had the chance to design science experiments and write code to be run on one of our two Raspberry Pis on the International Space Station (ISS).

Astro Pi Mission Space Lab logo

Submissions for the Mission Space Lab challenge have just closed, and the results are in! Students had the opportunity to design an experiment for one of the following two themes:

  • Life in space
    Making use of Astro Pi Vis (Ed) in the European Columbus module to learn about the conditions inside the ISS.
  • Life on Earth
    Making use of Astro Pi IR (Izzy), which will be aimed towards the Earth through a window to learn about Earth from space.

ESA astronaut Alexander Gerst, speaking from the replica of the Columbus module at the European Astronaut Center in Cologne, has a message for all Mission Space Lab participants:

ESA astronaut Alexander Gerst congratulates Astro Pi 2017-18 winners


Flight status

We had a total of 212 Mission Space Lab entries from 22 countries. Of these, 114 fantastic projects have been given flight status, and the teams’ project code will run in space!

But they’re not winners yet. In April, the code will be sent to the ISS, and then the teams will receive back their experimental data. Next, to get deeper insight into the process of scientific endeavour, they will need to produce a final report analysing their findings. Winners will be chosen based on the merit of their final report, and the winning teams will get exclusive prizes. Check the list below to see if your team got flight status.

Belgium

Flight status achieved:

  • Team De Vesten, Campus De Vesten, Antwerpen
  • Ursa Major, CoderDojo Belgium, West-Vlaanderen
  • Special operations STEM, Sint-Claracollege, Antwerpen

Canada

Flight status achieved:

  • Let It Grow, Branksome Hall, Toronto
  • The Dark Side of Light, Branksome Hall, Toronto
  • Genie On The ISS, Branksome Hall, Toronto
  • Byte by PIthons, Youth Tech Education Society & Kid Code Jeunesse, Edmonton
  • The Broadviewnauts, Broadview, Ottawa

Czech Republic

Flight status achieved:

  • BLEK, Střední Odborná Škola Blatná, Strakonice

Denmark

Flight status achieved:

  • 2y Infotek, Nærum Gymnasium, Nærum
  • Equation Quotation, Allerød Gymnasium, Lillerød
  • Team Weather Watchers, Allerød Gymnasium, Allerød
  • Space Gardners, Nærum Gymnasium, Nærum

Finland

Flight status achieved:

  • Team Aurora, Hyvinkään yhteiskoulun lukio, Hyvinkää

France

Flight status achieved:

  • INC2, Lycée Raoul Follereau, Bourgogne
  • Space Project SP4, Lycée Saint-Paul IV, Reunion Island
  • Dresseurs2Python, clg Albert CAMUS, essonne
  • Lazos, Lycée Aux Lazaristes, Rhone
  • The space nerds, Lycée Saint André Colmar, Alsace
  • Les Spationautes Valériquais, lycée de la Côte d’Albâtre, Normandie
  • AstroMega, Institut de Genech, north
  • Al’Crew, Lycée Algoud-Laffemas, Auvergne-Rhône-Alpes
  • AstroPython, clg Albert CAMUS, essonne
  • Aruden Corp, Lycée Pablo Neruda, Normandie
  • HeroSpace, clg Albert CAMUS, essonne
  • GalaXess [R]evolution, Lycée Saint Cricq, Nouvelle-Aquitaine
  • AstroBerry, clg Albert CAMUS, essonne
  • Ambitious Girls, Lycée Adam de Craponne, PACA

Germany

Flight status achieved:

  • Uschis, St. Ursula Gymnasium Freiburg im Breisgau, Breisgau
  • Dosi-Pi, Max-Born-Gymnasium Germering, Bavaria

Greece

Flight status achieved:

  • Deep Space Pi, 1o Epal Grevenon, Grevena
  • Flox Team, 1st Lyceum of Kifissia, Attiki
  • Kalamaria Space Team, Second Lyceum of Kalamaria, Central Macedonia
  • The Earth Watchers, STEM Robotics Academy, Thessaly
  • Celestial_Distance, Gymnasium of Kanithos, Sterea Ellada – Evia
  • Pi Stars, Primary School of Rododaphne, Achaias
  • Flarions, 5th Primary School of Salamina, Attica

Ireland

Flight status achieved:

  • Plant Parade, Templeogue College, Leinster
  • For Peats Sake, Templeogue College, Leinster
  • CoderDojo Clonakilty, Co. Cork

Italy

Flight status achieved:

  • Trentini DOP, CoderDojo Trento, TN
  • Tarantino Space Lab, Liceo G. Tarantino, BA
  • Murgia Sky Lab, Liceo G. Tarantino, BA
  • Enrico Fermi, Liceo XXV Aprile, Veneto
  • Team Lampone, CoderDojoTrento, TN
  • GCC, Gali Code Club, Trentino Alto Adige/Südtirol
  • Another Earth, IISS “Laporta/Falcone-Borsellino”
  • Anti Pollution Team, IIS “L. Einaudi”, Sicily
  • e-HAND, Liceo Statale Scientifico e Classico ‘Ettore Majorana’, Lombardia
  • scossa team, ITTS Volterra, Venezia
  • Space Comet Sisters, Scuola don Bosco, Torino

Luxembourg

Flight status achieved:

  • Spaceballs, Atert Lycée Rédange, Diekirch
  • Aline in space, Lycée Aline Mayrisch Luxembourg (LAML)

Poland

Flight status achieved:

  • AstroLeszczynPi, I Liceum Ogolnoksztalcace im. Krola Stanislawa Leszczynskiego w Jasle, podkarpackie
  • Astrokompasy, High School nr XVII in Wrocław named after Agnieszka Osiecka, Lower Silesian
  • Cosmic Investigators, Publiczna Szkoła Podstawowa im. Św. Jadwigi Królowej w Rzezawie, Małopolska
  • ApplePi, III Liceum Ogólnokształcące im. prof. T. Kotarbińskiego w Zielonej Górze, Lubusz Voivodeship
  • ELE Society 2, Zespol Szkol Elektronicznych i Samochodowych, Lubuskie
  • ELE Society 1, Zespol Szkol Elektronicznych i Samochodowych, Lubuskie
  • SpaceOn, Szkola Podstawowa nr 12 w Jasle – Gimnazjum Nr 2, Podkarpackie
  • Dewnald Ducks, III Liceum Ogólnokształcące w Zielonej Górze, lubuskie
  • Nova Team, III Liceum Ogolnoksztalcace im. prof. T. Kotarbinskiego, lubuskie district
  • The Moons, Szkola Podstawowa nr 12 w Jasle – Gimnazjum Nr 2, Podkarpackie
  • Live, Szkoła Podstawowa nr 1 im. Tadeusza Kościuszki w Zawierciu, śląskie
  • Storm Hunters, I Liceum Ogolnoksztalcace im. Krola Stanislawa Leszczynskiego w Jasle, podkarpackie
  • DeepSky, Szkoła Podstawowa nr 1 im. Tadeusza Kościuszki w Zawierciu, śląskie
  • Small Explorers, ZPO Konina, Malopolska
  • AstroZSCL, Zespół Szkół w Czerwionce-Leszczynach, śląskie
  • Orchestra, Szkola Podstawowa nr 12 w Jasle, Podkarpackie
  • ApplePi, I Liceum Ogolnoksztalcace im. Krola Stanislawa Leszczynskiego w Jasle, podkarpackie
  • Green Crew, Szkoła Podstawowa nr 2 w Czeladzi, Silesia

Portugal

Flight status achieved:

  • Magnetics, Escola Secundária João de Deus, Faro
  • ECA_QUEIROS_PI, Secondary School Eça de Queirós, Lisboa
  • ESDMM Pi, Escola Secundária D. Manuel Martins, Setúbal
  • AstroPhysicists, EB 2,3 D. Afonso Henriques, Braga

Romania

Flight status achieved:

  • Caelus, “Tudor Vianu” National High School of Computer Science, District One
  • CodeWarriors, “Tudor Vianu” National High School of Computer Science, District One
  • Dark Phoenix, “Tudor Vianu” National High School of Computer Science, District One
  • ShootingStars, “Tudor Vianu” National High School of Computer Science, District One
  • Astro Pi Carmen Sylva 2, Liceul Teoretic “Carmen Sylva”, Constanta
  • Astro Meridian, Astro Club Meridian 0, Bihor

Slovenia

Flight status achieved:

  • astrOSRence, OS Rence
  • Jakopičevca, Osnovna šola Riharda Jakopiča, Ljubljana

Spain

Flight status achieved:

  • Exea in Orbit, IES Cinco Villas, Zaragoza
  • Valdespartans, IES Valdespartera, Zaragoza
  • Valdespartans2, IES Valdespartera, Zaragoza
  • Astropithecus, Institut de Bruguers, Barcelona
  • SkyPi-line, Colegio Corazón de María, Asturias
  • ClimSOLatic, Colegio Corazón de María, Asturias
  • Científicosdelsaz, IES Profesor Pablo del Saz, Málaga
  • Canarias 2, IES El Calero, Las Palmas
  • Dreamers, M. Peleteiro, A Coruña
  • Canarias 1, IES El Calero, Las Palmas

The Netherlands

Flight status achieved:

  • Team Kaki-FM, Rkbs De Reiger, Noord-Holland

United Kingdom

Flight status achieved:

  • Binco, Teignmouth Community School, Devon
  • 2200 (Saddleworth), Detached Flight Royal Air Force Air Cadets, Lanchashire
  • Whatevernext, Albyn School, Highlands
  • GraviTeam, Limehurst Academy, Leicestershire
  • LSA Digital Leaders, Lytham St Annes Technology and Performing Arts College, Lancashire
  • Mead Astronauts, Mead Community Primary School, Wiltshire
  • STEAMCademy, Castlewood Primary School, West Sussex
  • Lux Quest, CoderDojo Banbridge, Co. Down
  • Temparatus, Dyffryn Taf, Carmarthenshire
  • Discovery STEMers, Discovery STEM Education, South Yorkshire
  • Code Inverness, Code Club Inverness, Highland
  • JJB, Ashton Sixth Form College, Tameside
  • Astro Lab, East Kent College, Kent
  • The Life Savers, Scratch and Python, Middlesex
  • JAAPiT, Taylor Household, Nottingham
  • The Heat Guys, The Archer Academy, Greater London
  • Astro Wantenauts, Wantage C of E Primary School, Oxfordshire
  • Derby Radio Museum, Radio Communication Museum of Great Britain, Derbyshire
  • Bytesyze, King’s College School, Cambridgeshire

Other

Flight status achieved:

  • Intellectual Savage Stars, Lycée français de Luanda, Luanda


Congratulations to all successful teams! We are looking forward to reading your reports.

The post Mission Space Lab flight status announced! appeared first on Raspberry Pi.

MagPi 66: Raspberry Pi media projects for your home

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-66-media-pi/

Hey folks, Rob from The MagPi here! Issue 66 of The MagPi is out right now, with the ultimate guide to powering your home media with Raspberry Pi. We think the Pi is the perfect replacement or upgrade for many media devices, so in this issue we show you how to build a range of Raspberry Pi media projects.

MagPi 66

Yes, it does say Pac-Man robotics on the cover. They’re very cool.

The article covers file servers for sharing media across your network, music streaming boxes that connect to Spotify, a home theatre PC to make your TV-watching more relaxing, a futuristic Pi-powered moving photoframe, and even an Alexa voice assistant to control all these devices!

More to see

That’s not all though — The MagPi 66 also shows you how to build a Raspberry Pi cluster computer, how to control LEGO robots using the GPIO, and why your Raspberry Pi isn’t affected by Spectre and Meltdown.




In addition, you’ll also find our usual selection of product reviews and excellent project showcases.

Get The MagPi 66

Issue 66 is available today from WHSmith, Tesco, Sainsbury’s, and Asda. If you live in the US, head over to your local Barnes & Noble or Micro Center in the next few days. You can also get the new issue online from our store, or digitally via our Android and iOS apps. And don’t forget, there’s always the free PDF as well.

Subscribe for free goodies

Want to support the Raspberry Pi Foundation and the magazine, and get some cool free stuff? If you take out a twelve-month print subscription to The MagPi, you’ll get a Pi Zero W, Pi Zero case, and adapter cables absolutely free! This offer does not currently have an end date.

I hope you enjoy this issue! See you next month.

The post MagPi 66: Raspberry Pi media projects for your home appeared first on Raspberry Pi.

Raspberry Pi Spy’s Alexa Skill

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pi-spy-alexa-skill/

With Raspberry Pi projects using home assistant services such as Amazon Alexa and Google Home becoming more and more popular, we invited Raspberry Pi maker Matt ‘Raspberry Pi Spy‘ Hawkins to write a guest post about his latest project, the Pi Spy Alexa Skill.

Pi Spy Alexa Skill Raspberry Pi

Pi Spy Skill

The Alexa system uses Skills to provide voice-activated functionality, and it allows you to create new Skills to add extra features. With the Pi Spy Skill, you can ask Alexa what function each pin on the Raspberry Pi’s GPIO header provides, for example by using the phrase “Alexa, ask Pi Spy what is Pin 2.” In response to a phrase such as “Alexa, ask Pi Spy where is GPIO 8”, Alexa can now also tell you on which pin you can find a specific GPIO reference number.

This information is already available in various forms, but I thought it would be useful to retrieve it when I was busy soldering or building circuits and had no hands free.

Creating an Alexa Skill

There is a learning curve to creating a new Skill, and in some regards it was similar to mobile app development.

A Skill consists of two parts: the first is created within the Amazon Developer Console and defines the structure of the voice commands Alexa should recognise. The second part is a webservice that can receive data extracted from the voice commands and provide a response back to the device. You can create the webservice on a webserver, internet-connected device, or cloud service.

I decided to use Amazon’s AWS Lambda service. Once set up, this allows you to write code without having to worry about the server it is running on. It also supports Python, so it fit in nicely with most of my other projects.

To get started, I logged into the Amazon Developer Console with my personal Amazon account and navigated to the Alexa section. I created a new Skill named Pi Spy. Within a Skill, you define an Intent Schema and some Sample Utterances. The schema defines individual intents, and the utterances define how these are invoked by the user.

Here is how my ExaminePin intent is defined in the schema:

[Screenshot: the ExaminePin intent definition in the Intent Schema]

Example utterances then attempt to capture the different phrases the user might speak to their device.

[Screenshot: sample utterances for the ExaminePin intent]

Whenever Alexa matches a spoken phrase to an utterance, it passes the name of the intent and the variable PinID to the webservice.

In the test section, you can check what JSON data will be generated and passed to your webservice in response to specific phrases. This allows you to verify that the webservice’s responses are correct.

[Screenshot: the JSON request generated by the Skill’s test section]

Over on the AWS Services site, I created a Lambda function based on one of the provided examples to receive the incoming requests. Here is the section of that code which deals with the ExaminePin intent:

[Screenshot: the Lambda function code for the ExaminePin intent]
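Since the original code only appears as a screenshot above, here is a minimal reconstruction of the idea rather than Matt’s actual function. The dictionary contents and helper name are illustrative, and only a handful of pins are included.

```python
# Maps physical header pin numbers to a spoken description (partial, for illustration)
PIN_DESCRIPTIONS = {
    "1": "3.3 volt power",
    "2": "5 volt power",
    "6": "ground",
    "8": "GPIO 14, the UART transmit pin",
}

def examine_pin(intent):
    """Handle the ExaminePin intent and return an Alexa response."""
    pin_id = intent["slots"]["PinID"]["value"]
    description = PIN_DESCRIPTIONS.get(pin_id, "a pin I don't know about yet")
    speech = "Pin {} is {}.".format(pin_id, description)

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```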

For this intent, I used a Python dictionary to match the incoming pin number to its description. Another Python function deals with the GPIO queries. A URL to this Lambda function was added to the Skill as its ‘endpoint’.

As with the Skill, the Python code can be tested to iron out any syntax errors or logic problems.

With suitable configuration, it would be possible to create the webservice on a Pi, and that is something I’m currently working on. This approach is particularly interesting, as the Pi can then be used to control local hardware devices such as cameras, lights, or pet feeders.
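One way such a Pi-hosted endpoint could look is a small Flask app that accepts the JSON Alexa sends and returns the same style of response. This is a sketch of the idea, not the author’s implementation; a real deployment would also need HTTPS and verification that requests genuinely come from Alexa.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def speak(text):
    # Wrap plain text in the standard Alexa response envelope
    return jsonify({
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    })

@app.route("/alexa", methods=["POST"])
def alexa_endpoint():
    body = request.get_json(force=True)
    intent = body.get("request", {}).get("intent", {})
    if intent.get("name") == "ExaminePin":
        pin_id = intent["slots"]["PinID"]["value"]
        return speak("You asked about pin {}.".format(pin_id))
    return speak("Sorry, I didn't understand that.")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```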

Note

My Alexa Skill is currently only available to UK users. I’m hoping Amazon will choose to copy it to the US service, but I think that is down to its perceived popularity, or it may be done in bulk based on release date. In the next update, I’ll be adding an American English version to help speed up this process.

The post Raspberry Pi Spy’s Alexa Skill appeared first on Raspberry Pi.

AWS IoT, Greengrass, and Machine Learning for Connected Vehicles at CES

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-iot-greengrass-and-machine-learning-for-connected-vehicles-at-ces/

Last week I attended a talk given by Bryan Mistele, president of Seattle-based INRIX. Bryan’s talk provided a glimpse into the future of transportation, centering on four principal attributes, often abbreviated as ACES:

Autonomous – Cars and trucks are gaining the ability to scan and to make sense of their environments and to navigate without human input.

Connected – Vehicles of all types have the ability to take advantage of bidirectional connections (either full-time or intermittent) to other cars and to cloud-based resources. They can upload road and performance data, communicate with each other to run in packs, and take advantage of traffic and weather data.

Electric – Continued development of battery and motor technology will make electric vehicles more convenient, cost-effective, and environmentally friendly.

Shared – Ride-sharing services will change usage from an ownership model to an as-a-service model (sound familiar?).

Individually and in combination, these emerging attributes mean that the cars and trucks we will see and use in the decade to come will be markedly different than those of the past.

On the Road with AWS
AWS customers are already using our AWS IoT, edge computing, Amazon Machine Learning, and Alexa products to bring this future to life – vehicle manufacturers, their tier 1 suppliers, and AutoTech startups all use AWS for their ACES initiatives. AWS Greengrass is playing an important role here, attracting design wins and helping our customers to add processing power and machine learning inferencing at the edge.

AWS customer Aptiv (formerly Delphi) talked about their Automated Mobility on Demand (AMoD) smart vehicle architecture in an AWS re:Invent session. Aptiv’s AMoD platform will use Greengrass and microservices to drive the onboard user experience, along with edge processing, monitoring, and control. Here’s an overview:

Another customer, Denso of Japan (one of the world’s largest suppliers of auto components and software) is using Greengrass and AWS IoT to support their vision of Mobility as a Service (MaaS). Here’s a video:

AWS at CES
The AWS team will be out in force at CES in Las Vegas and would love to talk to you. They’ll be running demos that show how AWS can help to bring innovation and personalization to connected and autonomous vehicles.

Personalized In-Vehicle Experience – This demo shows how AWS AI and Machine Learning can be used to create a highly personalized and branded in-vehicle experience. It makes use of Amazon Lex, Polly, and Amazon Rekognition, but the design is flexible and can be used with other services as well. The demo encompasses driver registration, login and startup (including facial recognition), voice assistance for contextual guidance, personalized e-commerce, and vehicle control. Here’s the architecture for the voice assistance:

Connected Vehicle Solution – This demo shows how a connected vehicle can combine local and cloud intelligence, using edge computing and machine learning at the edge. It handles intermittent connections and uses AWS DeepLens to train a model that responds to distracted drivers. Here’s the overall architecture, as described in our Connected Vehicle Solution:

Digital Content Delivery – This demo will show how a customer uses a web-based 3D configurator to build and personalize their vehicle. It will also show high-resolution (4K) 3D imagery and an optional immersive AR/VR experience, both designed for use within a dealership.

Autonomous Driving – This demo will showcase the AWS services that can be used to build autonomous vehicles. There’s a 1/16th scale model vehicle powered and driven by Greengrass and an overview of a new AWS Autonomous Toolkit. As part of the demo, attendees drive the car, training a model via Amazon SageMaker for subsequent on-board inferencing, powered by Greengrass ML Inferencing.

To speak to one of my colleagues or to set up a time to see the demos, check out the Visit AWS at CES 2018 page.

Some Resources
If you are interested in this topic and want to learn more, the AWS for Automotive page is a great starting point, with discussions on connected vehicles & mobility, autonomous vehicle development, and digital customer engagement.

When you are ready to start building a connected vehicle, the AWS Connected Vehicle Solution contains a reference architecture that combines local computing, sophisticated event rules, and cloud-based data processing and storage. You can use this solution to accelerate your own connected vehicle projects.

Jeff;

Detecting Adblocker Blockers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/detecting_adblo.html

Interesting research on the prevalence of adblock blockers: “Measuring and Disrupting Anti-Adblockers Using Differential Execution Analysis“:

Abstract: Millions of people use adblockers to remove intrusive and malicious ads as well as protect themselves against tracking and pervasive surveillance. Online publishers consider adblockers a major threat to the ad-powered “free” Web. They have started to retaliate against adblockers by employing anti-adblockers which can detect and stop adblock users. To counter this retaliation, adblockers in turn try to detect and filter anti-adblocking scripts. This back and forth has prompted an escalating arms race between adblockers and anti-adblockers.

We want to develop a comprehensive understanding of anti-adblockers, with the ultimate aim of enabling adblockers to bypass state-of-the-art anti-adblockers. In this paper, we present a differential execution analysis to automatically detect and analyze anti-adblockers. At a high level, we collect execution traces by visiting a website with and without adblockers. Through differential execution analysis, we are able to pinpoint the conditions that lead to the differences caused by anti-adblocking code. Using our system, we detect anti-adblockers on 30.5% of the Alexa top-10K websites which is 5-52 times more than reported in prior literature. Unlike prior work which is limited to detecting visible reactions (e.g., warning messages) by anti-adblockers, our system can discover attempts to detect adblockers even when there is no visible reaction. From manually checking one third of the detected websites, we find that the websites that have no visible reactions constitute over 90% of the cases, completely dominating the ones that have visible warning messages. Finally, based on our findings, we further develop JavaScript rewriting and API hooking based solutions (the latter implemented as a Chrome extension) to help adblockers bypass state-of-the-art anti-adblockers.

News article.

Amazon’s Door Lock Is Amazon’s Bid to Control Your Home

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/12/amazons_door_lo.html

Interesting essay about Amazon’s smart lock:

When you add Amazon Key to your door, something more sneaky also happens: Amazon takes over.

You can leave your keys at home and unlock your door with the Amazon Key app — but it’s really built for Amazon deliveries. To share online access with family and friends, I had to give them a special code to SMS (yes, text) to unlock the door. (Amazon offers other smartlocks that have physical keypads).

The Key-compatible locks are made by Yale and Kwikset, yet don’t work with those brands’ own apps. They also can’t connect with a home-security system or smart-home gadgets that work with Apple and Google software.

And, of course, the lock can’t be accessed by businesses other than Amazon. No Walmart, no UPS, no local dog-walking company.

Keeping tight control over Key might help Amazon guarantee security or a better experience. “Our focus with smart home is on making things simpler for customers – things like providing easy control of connected devices with your voice using Alexa, simplifying tasks like reordering household goods and receiving packages,” the Amazon spokeswoman said.

But Amazon is barely hiding its goal: It wants to be the operating system for your home. Amazon says Key will eventually work with dog walkers, maids and other service workers who bill through its marketplace. An Amazon home security service and grocery delivery from Whole Foods can’t be far off.

This is happening all over. Everyone wants to control your life: Google, Apple, Amazon…everyone. It’s what I’ve been calling the feudal Internet. I fear it’s going to get a lot worse.