
Friday Squid Blogging: Calamari vs. Squid

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/friday-squid-blogging-calamari-vs-squid.html

St. Louis Magazine answers the important question: “Is there a difference between calamari and squid?” Short answer: no.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

What the blink is my IP address?

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/what-the-blink-is-my-ip-address/

Picture the scene: you have a Raspberry Pi configured to run on your network, you power it up headless (without a monitor), and now you need to know which IP address it was assigned.

Matthias used a headless Raspberry Pi Zero W for most of his projects, and he got tired of having to look up its IP address on his DHCP server or hunt for it by pinging different addresses. So he came up with this solution, which makes your Raspberry Pi blink out its own IP address.

How does it work?

A script runs when you start your Raspberry Pi and indicates which IP address is assigned to it by blinking it out on the device’s LED. The script comprises about 100 lines of Python, and you can get it on GitHub.

A screen running Python
Easy peasy GitHub breezy

The power/status LED on the edge of the Raspberry Pi blinks numbers in a Roman numeral-like scheme. You can tell which number it’s blinking based on the length of the blink and the gaps between each blink, rather than, for example, having to count nine blinks for a number nine.

Blinking in Roman numerals

Short, fast blinks represent the numbers one to four, depending on how many short, fast blinks you see. A gap between short, fast blinks means the LED is about to blink the next digit of the IP address, and a longer blink represents the number five. So reading the combination of short and long blinks will give you your device’s IP address.

You can see this in action at this exact point in the video. You’ll see the LED blink fast once, then leave a gap, blink fast once again, then leave a gap, then blink fast twice. That means the device’s IP address ends in 112.
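
To make the scheme concrete, here is a minimal Python sketch of the idea rather than Matthias' actual script (his is on GitHub). It assumes an external LED wired to GPIO 17, made-up timings, and it skips the zero digit, which the real script has to handle.

from time import sleep
import socket
from gpiozero import LED   # assumption: an external LED on GPIO 17, not the onboard one

led = LED(17)
SHORT, LONG, GAP = 0.2, 0.8, 1.0   # assumed timings, not Matthias' values

def blink_digit(digit):
    # Roman-numeral style: each long blink counts five, each short blink counts one.
    # A zero digit produces no blinks in this simplified version.
    for _ in range(digit // 5):
        led.on(); sleep(LONG); led.off(); sleep(SHORT)
    for _ in range(digit % 5):
        led.on(); sleep(SHORT); led.off(); sleep(SHORT)
    sleep(GAP)   # pause before the next digit

def last_octet():
    # Open a throwaway UDP socket to discover the Pi's LAN address.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(("8.8.8.8", 80))
    ip = s.getsockname()[0]
    s.close()
    return ip.split(".")[-1]

for _ in range(10):              # the real script repeats the last octet ten times
    for d in last_octet():
        blink_digit(int(d))
    sleep(3 * GAP)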

What are octets?

Luckily, you usually only need the last octet of the IP address (up to three digits), as the previous octets will almost always be the same for every other computer on the LAN.

The script blinks out the last octet ten times, to give you plenty of chances to read it. Then it returns the LED to its default functionality.

Which LED on which Raspberry Pi?

On a Raspberry Pi Zero W, the script uses the green status/power LED, and on other Raspberry Pis it uses the green LED next to the red power LED.

The green LED blinking the IP address (the red power LED is slightly hidden by Matthias’ thumb)

Once you get the hang of the Morse code-like blinking style, this is a really nice quick solution to find your device’s IP address and get on with your project.

The post What the blink is my IP address? appeared first on Raspberry Pi.

Ranking National Cyber Power

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/ranking-national-cyber-power.html

Harvard Kennedy School’s Belfer Center published the “National Cyber Power Index 2020: Methodology and Analytical Considerations.” The rankings: 1. US, 2. China, 3. UK, 4. Russia, 5. Netherlands, 6. France, 7. Germany, 8. Canada, 9. Japan, 10. Australia, 11. Israel. More countries are in the document.

We could — and should — argue about the criteria and the methodology, but it’s good that someone is starting this conversation.

Executive Summary: The Belfer National Cyber Power Index (NCPI) measures 30 countries’ cyber capabilities in the context of seven national objectives, using 32 intent indicators and 27 capability indicators with evidence collected from publicly available data.

In contrast to existing cyber related indices, we believe there is no single measure of cyber power. Cyber Power is made up of multiple components and should be considered in the context of a country’s national objectives. We take an all-of-country approach to measuring cyber power. By considering “all-of-country” we include all aspects under the control of a government where possible. Within the NCPI we measure government strategies, capabilities for defense and offense, resource allocation, the private sector, workforce, and innovation. Our assessment is both a measurement of proven power and potential, where the final score assumes that the government of that country can wield these capabilities effectively.

The NCPI has identified seven national objectives that countries pursue using cyber means. The seven objectives are:

  1. Surveilling and Monitoring Domestic Groups;
  2. Strengthening and Enhancing National Cyber Defenses;
  3. Controlling and Manipulating the Information Environment;
  4. Foreign Intelligence Collection for National Security;
  5. Commercial Gain or Enhancing Domestic Industry Growth;
  6. Destroying or Disabling an Adversary’s Infrastructure and Capabilities; and,
  7. Defining International Cyber Norms and Technical Standards.

In contrast to the broadly held view that cyber power means destroying or disabling an adversary’s infrastructure (commonly referred to as offensive cyber operations), offense is only one of these seven objectives countries pursue using cyber means.

Turn a watermelon into a RetroPie games console

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/turn-a-watermelon-into-a-retropie-games-console/

OK Cedrick, we don’t need to know why, but we have to know how you turned a watermelon into a games console.

This has got to be a world first. What started out as a regular RetroPie project has blown up reddit due to the unusual choice of casing for the games console: nearly 50,000 redditors upvoted this build within a week of Cedrick sharing it.

See, we’re not kidding

What’s inside?

  • Raspberry Pi 3
  • Jingo Dot power bank (that yellow thing you can see below)
  • Speakers
  • Buttons
  • Small 1.8″ screen
Cedric’s giggling really makes this video

Retropie

While this build looks epic, it isn’t too tricky to make. First, Cedrick flashed the RetroPie image onto an SD card, then he wired up a Raspberry Pi’s GPIO pins to the red console buttons, speakers, and the screen.

Cedrick achieved audio output by adding just a few lines of code to the config file, and he downloaded libraries for screen configuration and button input. That’s it! That’s all you need to get a games console up and running.

Cedrick just hanging on the train with his WaterBoy

Now for the messy bit

Cedrick had to gut an entire watermelon before he could start getting all the hardware in place. He power-drilled holes for the buttons to stick through, and a Stanley knife provided the precision he needed to get the right-sized gap for the screen.

A gutted watermelon with gaps cut to fit games console buttons and a screen

Rather than drill even more holes for the speakers, Cedrick stuck them in place inside the watermelon using toothpicks. He did try hot glue first but… yeah. Turns out fruit guts are impervious to glue.

Moisture was going to be a huge problem, so to protect all the hardware from the watermelon’s sticky insides, Cedric lined it with plastic clingfilm.

Infinite lives

And here’s how you can help: Cedrick is open to any tips as to how to preserve the perishable element of his project: the watermelon. Resin? Vaseline? Time machine? How can he keep the watermelon fresh?

Share your ideas on reddit or YouTube, and remember to subscribe to see more of Cedric’s maverick making in the wild.

The post Turn a watermelon into a RetroPie games console appeared first on Raspberry Pi.

The Third Edition of Ross Anderson’s Security Engineering

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/the_third_editi.html

Ross Anderson’s fantastic textbook, Security Engineering, will have a third edition. The book won’t be published until December, but Ross has been making drafts of the chapters available online as he finishes them. Now that the book is completed, I expect the publisher to make him take the drafts off the Internet.

I personally find both the electronic and paper versions to be incredibly useful. Grab an electronic copy now while you still can.

US Space Cybersecurity Directive

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/us-space-cybersecurity-directive.html

The Trump Administration just published “Space Policy Directive – 5”: “Cybersecurity Principles for Space Systems.” It’s pretty general:

Principles. (a) Space systems and their supporting infrastructure, including software, should be developed and operated using risk-based, cybersecurity-informed engineering. Space systems should be developed to continuously monitor, anticipate, and adapt to mitigate evolving malicious cyber activities that could manipulate, deny, degrade, disrupt, destroy, surveil, or eavesdrop on space system operations. Space system configurations should be resourced and actively managed to achieve and maintain an effective and resilient cyber survivability posture throughout the space system lifecycle.

(b) Space system owners and operators should develop and implement cybersecurity plans for their space systems that incorporate capabilities to ensure operators or automated control center systems can retain or recover positive control of space vehicles. These plans should also ensure the ability to verify the integrity, confidentiality, and availability of critical functions and the missions, services, and data they enable and provide.

These unclassified directives are typically so general that it’s hard to tell whether they actually matter.

News article.

Give your voice assistant a retro Raspberry Pi makeover

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/give-your-voice-assistant-a-retro-raspberry-pi-makeover/

Do you feel weird asking the weather or seeking advice from a faceless device? Would you feel better about talking to a classic 1978 2-XL educational robot from Mego Corporation? Matt over at element14 Community, where tons of interesting stuff happens, has got your back.

Watch Matt explain how the 2-XL toy robot worked before he started tinkering with it. This robot works with Google Assistant on a Raspberry Pi, and answers to a custom wake word.

Kit list

Our recent blog about repurposing a Furby as a voice assistant device would have excited Noughties kids, but this one is mostly for our beautiful 1970s- and 1980s-born fanbase.

Time travel

2-XL, Wikipedia tells us, is considered the first “smart toy”, marketed way back in 1978, and exhibiting “rudimentary intelligence, memory, gameplay, and responsiveness”. 2-XL had a personality that kept kids’ attention, telling jokes and offering verbal support as they learned.

Teardown

Delve under the robot’s armour to see how the toy was built, understand the basic working mechanism, and watch Matt attempt to diagnose why his 2-XL is not working.

Setting up Google Assistant

The Matrix Creator daughter board mentioned in the kit list is an ideal platform for developing your own AI assistant. It’s the daughter board’s 8-microphone array that makes it so brilliant for this task. Learn how to set up Google Assistant on the Matrix board in this video.

What if you don’t want to wake your retrofit voice assistant in the same way as all the other less dedicated users, the ones who didn’t spend hours of love and care refurbishing an old device? Instead of having your homemade voice assistant answer to “OK Google” or “Alexa”, you can train it to recognise a phrase of your choice. In this tutorial, Matt shows you how to set up a custom wake word with your voice assistant, using word detection software called Snowboy.
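
If you want a feel for what that looks like in code, here is a rough sketch based on Snowboy's standard Python demo rather than Matt's exact setup; the model file name is a placeholder for whatever wake word you train on the Snowboy site.

import snowboydecoder   # Snowboy's Python bindings

def on_hotword():
    # This is where you would hand control to the assistant, for example by
    # starting to stream microphone audio to Google Assistant.
    print("Custom wake word heard!")

# "2-XL.pmdl" is a placeholder for your own trained model file.
detector = snowboydecoder.HotwordDetector("2-XL.pmdl", sensitivity=0.5)
detector.start(detected_callback=on_hotword, sleep_time=0.03)
detector.terminate()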

Keep an eye on element14 on YouTube for the next instalment of this excellent retrofit project.

The post Give your voice assistant a retro Raspberry Pi makeover appeared first on Raspberry Pi.

More on NIST’s Post-Quantum Cryptography

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/more_on_nists_p.html

Back in July, NIST selected third-round algorithms for its post-quantum cryptography standard.

Recently, Daniel Apon of NIST gave a talk detailing the selection criteria. Interesting stuff.

NOTE: We’re in the process of moving this blog to WordPress. Comments will be disabled until the move is complete. The management thanks you for your cooperation and support.

Nandu’s lockdown Raspberry Pi robot project

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/nandus-lockdown-raspberry-pi-robot-project/

Nandu Vadakkath was inspired by a line-following robot built (literally) entirely from salvage materials that could wait patiently and purchase beer for its maker in Tamil Nadu, India. So he set about making his own, but with the goal of making it capable of slightly more sophisticated tasks.

“Robot, can you play a song?”

Hardware

Robot comes when called, and recognises you as its special human

Software

Nandu had ambitious plans for his robot: navigation, speech and listening, recognition, and much more were on the list of things he wanted it to do. And in order to make it do everything he wanted, he incorporated a lot of software, including:

Robot shares Nandu’s astrological chart
  • Python 3
  • virtualenv, a tool for creating isolated virtual Python environments
  • the OpenCV open source computer vision library
  • the spaCy open source natural language processing library
  • the TensorFlow open source machine learning platform
  • Haar cascade algorithms for object detection
  • A ResNet neural network with the COCO dataset for object detection
  • DeepSpeech, an open source speech-to-text engine
  • eSpeak NG, an open source speech synthesiser
  • The MySQL database service

So how did Nandu go about trying to make the robot do some of the things on his wishlist?

Context and intents engine

The engine uses spaCy to analyse sentences, classify all the elements it identifies, and store all this information in a MySQL database. When the robot encounters a sentence with a series of possible corresponding actions, it weighs them to see what the most likely context is, based on sentences it has previously encountered.
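
Nandu's exact engine isn't published in the post, but a minimal spaCy sketch of the first step, pulling a sentence apart into pieces that could be stored in MySQL and weighed later, might look like this:

import spacy

nlp = spacy.load("en_core_web_sm")   # assumption: the small English model

def analyse(sentence):
    doc = nlp(sentence)
    return {
        "verbs": [t.lemma_ for t in doc if t.pos_ == "VERB"],
        "objects": [t.text for t in doc if t.dep_ in ("dobj", "pobj")],
        "entities": [(ent.text, ent.label_) for ent in doc.ents],
    }

# Each classified element would then be written to a MySQL table, so that
# future sentences can be weighed against what the robot has seen before.
print(analyse("Robot, can you play a song?"))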

Getting to know you

The robot has been trained to follow Nandu around but it can get to know other people too. When it meets a new person, it takes a series of photos and processes them in the background, so it learns to remember them.

Nandu's home made robot
There she blows!

Speech

Nandu didn’t like the thought of a basic robotic voice, so he searched high and low until he came across the MBROLA UK English voice. Have a listen in the videos above!

Object and people detection

The robot has an excellent group photo function: it looks for a person, calculates the distance between the top of their head and the top of the frame, then tilts the camera until this distance is about 60 pixels. This is a lot more effort than some human photographers put into getting all of everyone’s heads into the frame.
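
As a rough illustration of that framing logic (assumed names and thresholds, not Nandu's code), each captured frame could be checked like this, with the returned error driving the camera's tilt mechanism in small steps:

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

TARGET_GAP = 60   # desired pixels between the top-most head and the top of the frame

def framing_error(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return 0                          # nobody in shot, leave the camera alone
    highest_head = min(y for (x, y, w, h) in faces)
    return highest_head - TARGET_GAP      # the sign tells the tilt routine which way to nudge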

Nandu has created a YouTube channel for his robot companion, so be sure to keep up with its progress!

The post Nandu’s lockdown Raspberry Pi robot project appeared first on Raspberry Pi.

Schneier.com is Moving

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/schneiercom_is_.html

I’m switching my website software from Movable Type to WordPress, and moving to a new host.

The migration is expected to last from approximately 3 AM EST Monday until 4 PM EST Tuesday. The site will still be visible during that time, but comments will be disabled. (This is to prevent any new comments from disappearing in the move.)

This is not a site redesign, so you shouldn’t notice many differences. Even the commenting system is pretty much the same, though you’ll be able to use Markdown instead of HTML if you want to.

The conversion to WordPress was done by Automattic, who did an amazing job of getting all of the site’s customizations and complexities — this website is 17 years old — to work on a new platform. Automattic is also providing the new hosting on their Pressable service. I’m not sure I could have done it without them.

Hopefully everything will work smoothly.

Troubleshooting Amazon API Gateway with enhanced observability variables

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/troubleshooting-amazon-api-gateway-with-enhanced-observability-variables/

Amazon API Gateway is often used for managing access to serverless applications. Additionally, it can help developers reduce code and increase security with features like AWS WAF integration and authorizers at the API level.

Because more is handled by API Gateway, developers tell us they would like to see more data points on the individual parts of the request. This data helps developers understand each phase of the API request and how it affects the request as a whole. In response to this request, the API Gateway team has added new enhanced observability variables to the API Gateway access logs. With these new variables, developers can troubleshoot on a more granular level to quickly isolate and resolve request errors and latency issues.

The phases of an API request

API Gateway divides requests into phases, reflected by the variables that have been added. Depending upon the features configured for the application, an API request goes through multiple phases. The phases appear in a specific order as follows:

Phases of an API request

  • WAF: the WAF phase only appears when an AWS WAF web access control list (ACL) is configured for enhanced security. During this phase, WAF rules are evaluated and a decision is made on whether to continue or cancel the request.
  • Authenticate: the authenticate phase is only present when AWS Identity and Access Management (IAM) authorizers are used. During this phase, the credentials of the signed request are verified. Access is granted or denied based on the client’s right to assume the access role.
  • Authorizer: the authorizer phase is only present when a Lambda, JWT, or Amazon Cognito authorizer is used. During this phase, the authorizer logic is processed to verify the user’s right to access the resource.
  • Authorize: the authorize phase is only present when a Lambda or IAM authorizer is used. During this phase, the results from the authenticate and authorizer phase are evaluated and applied.
  • Integration: during this phase, the backend integration processes the request.

Each phase can add latency to the request, return a status, or raise an error. To capture this data, API Gateway now provides enhanced observability variables based on each phase. The variables are named according to the phase they occur in and follow the naming structure, $context.phase.property. Therefore, you can get data about WAF latency by using $context.waf.latency.

Some existing variables have also been given aliases to match this naming schema. For example, $context.integrationErrorMessage has a new alias of $context.integration.error. The resulting list of variables is as follows:

Phases and variables for API Gateway requests

API Gateway provides status, latency, and error data for each phase. In the authorizer and integration phases, there are additional variables you can use in logs. The $context.phase.requestId provides the request ID from that service and the $context.phase.integrationStatus provides the status code.

For example, when using an AWS Lambda function as the integration, API Gateway receives two status codes. The first, $context.integration.integrationStatus, is the status of the Lambda service itself. This is usually 200, unless there is a service or permissions error. The second, $context.integration.status, is the status of the Lambda function and reports on the success or failure of the code.

A full list of access log variables is in the documentation for REST APIs, WebSocket APIs, and HTTP APIs.

A troubleshooting example

In this example, an application is built using an API Gateway REST API with a Lambda function for the backend integration. The application uses an IAM authorizer to require AWS account credentials for application access. The application also uses an AWS WAF ACL to rate limit requests to 100 requests per IP, per five minutes. The demo application and deployment instructions can be found in the Sessions With SAM repository.

Because the application involves an AWS WAF and IAM authorizer for security, the request passes through four phases: waf, authenticate, authorize, and integration. The access log format is configured to capture all the data regarding these phases:

{
  "requestId":"$context.requestId",
  "waf-error":"$context.waf.error",
  "waf-status":"$context.waf.status",
  "waf-latency":"$context.waf.latency",
  "waf-response":"$context.wafResponseCode",
  "authenticate-error":"$context.authenticate.error",
  "authenticate-status":"$context.authenticate.status",
  "authenticate-latency":"$context.authenticate.latency",
  "authorize-error":"$context.authorize.error",
  "authorize-status":"$context.authorize.status",
  "authorize-latency":"$context.authorize.latency",
  "integration-error":"$context.integration.error",
  "integration-status":"$context.integration.status",
  "integration-latency":"$context.integration.latency",
  "integration-requestId":"$context.integration.requestId",
  "integration-integrationStatus":"$context.integration.integrationStatus",
  "response-latency":"$context.responseLatency",
  "status":"$context.status"
}

Once the application is deployed, use Postman to test the API with a sigV4 request.

Configuring Postman authorization

To show troubleshooting with the new enhanced observability variables, the first request sent through contains invalid credentials. The user receives a 403 Forbidden error.

Client response view with invalid tokens

The access log for this request is:

{
    "requestId": "70aa9606-26be-4396-991c-405a3671fd9a",
    "waf-error": "-",
    "waf-status": "200",
    "waf-latency": "8",
    "waf-response": "WAF_ALLOW",
    "authenticate-error": "-",
    "authenticate-status": "403",
    "authenticate-latency": "17",
    "authorize-error": "-",
    "authorize-status": "-",
    "authorize-latency": "-",
    "integration-error": "-",
    "integration-status": "-",
    "integration-latency": "-",
    "integration-requestId": "-",
    "integration-integrationStatus": "-",
    "response-latency": "48",
    "status": "403"
}

The request passed through the waf phase first. Since this is the first request and the rate limit has not been exceeded, the request is passed on to the next phase, authenticate. During the authenticate phase, the user’s credentials are verified. In this case, the credentials are invalid and the request is rejected with a 403 response before invoking the downstream phases.

To correct this, the next request uses valid credentials, but those credentials do not have access to invoke the API. Again, the user receives a 403 Forbidden error.

Client response view with unauthorized tokens

The access log for this request is:

{
  "requestId": "c16d9edc-037d-4f42-adf3-eaadf358db2d",
  "waf-error": "-",
  "waf-status": "200",
  "waf-latency": "7",
  "waf-response": "WAF_ALLOW",
  "authenticate-error": "-",
  "authenticate-status": "200",
  "authenticate-latency": "8",
  "authorize-error": "The client is not authorized to perform this operation.",
  "authorize-status": "403",
  "authorize-latency": "0",
  "integration-error": "-",
  "integration-status": "-",
  "integration-latency": "-",
  "integration-requestId": "-",
  "integration-integrationStatus": "-",
  "response-latency": "52",
  "status": "403"
}

This time, the access logs show that the authenticate phase returns a 200. This indicates that the user credentials are valid for this account. However, the authorize phase returns a 403 and states, “The client is not authorized to perform this operation”. Again, the request is rejected with a 403 response before invoking downstream phases.

The last request for the API contains valid credentials for a user that has rights to invoke this API. This time the user receives a 200 OK response and the requested data.

Client response view with valid request

The log for this request is:

{
  "requestId": "ac726ce5-91dd-4f1d-8f34-fcc4ae0bd622",
  "waf-error": "-",
  "waf-status": "200",
  "waf-latency": "7",
  "waf-response": "WAF_ALLOW",
  "authenticate-error": "-",
  "authenticate-status": "200",
  "authenticate-latency": "1",
  "authorize-error": "-",
  "authorize-status": "200",
  "authorize-latency": "0",
  "integration-error": "-",
  "integration-status": "200",
  "integration-latency": "16",
  "integration-requestId": "8dc58335-fa13-4d48-8f99-2b1c97f41a3e",
  "integration-integrationStatus": "200",
  "response-latency": "48",
  "status": "200"
}

This log contains a 200 status code from each of the phases and returns a 200 response to the user. Additionally, each of the phases reports latency. This request had a total of 48 ms of latency. The latency breaks down according to the following:

Request latency breakdown

Developers can use this information to identify the cause of latency within the API request and adjust accordingly. While some phases like authenticate or authorize are immutable, optimizing the integration phase of this request could remove a large chunk of the latency involved.
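
As a quick illustration (this is not part of API Gateway itself), the phase latencies in an access-log entry can be pulled apart to show where the time went:

import json

# Values taken from the successful request's access log shown above.
log_entry = '''{"waf-latency": "7", "authenticate-latency": "1",
                "authorize-latency": "0", "integration-latency": "16",
                "response-latency": "48"}'''

entry = json.loads(log_entry)
phases = {k.replace("-latency", ""): int(v)
          for k, v in entry.items()
          if k.endswith("-latency") and k != "response-latency" and v != "-"}

for phase, ms in sorted(phases.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{phase:<12} {ms:>3} ms")

overhead = int(entry["response-latency"]) - sum(phases.values())
print(f"{'other':<12} {overhead:>3} ms")   # routing, mapping, and network overhead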

Conclusion

This post covers the enhanced observability variables, the phases they occur in, and the order of those phases. With these new variables, developers can quickly isolate the problem and focus on resolving issues.

When configured with the proper access logging variables, API Gateway access logs can provide a detailed story of API performance. They can help developers to continually optimize that performance. To learn how to configure logging in API Gateway with AWS SAM, see the demonstration app for this blog.

#ServerlessForEveryone

Building a serverless document scanner using Amazon Textract and AWS Amplify

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-a-serverless-document-scanner-using-amazon-textract-and-aws-amplify/

This guide demonstrates creating and deploying a production ready document scanning application. It allows users to manage projects, upload images, and generate a PDF from detected text. The sample can be used as a template for building expense tracking applications, handling forms and legal documents, or for digitizing books and notes.

The frontend application is written in Vue.js and uses the Amplify Framework. The backend is built using AWS serverless technologies and consists of an Amazon API Gateway REST API that invokes AWS Lambda functions. Amazon Textract is used to analyze text from uploaded images to an Amazon S3 bucket. Detected text is stored in Amazon DynamoDB.

An architectural diagram of the application.

Prerequisites

You need the following to complete the project:

Deploy the application

The solution consists of two parts, the frontend application and the serverless backend. The Amplify CLI deploys the Amazon Cognito authentication and hosting resources for the frontend. The backend requires the Amazon Cognito user pool identifier to configure an authorizer on the API. This enables an authorization workflow, as shown in the following image.

A diagram showing how an Amazon Cognito authorization workflow works

First, configure the frontend. Complete the following steps using a terminal running on a computer or by using the AWS Cloud9 IDE. If using AWS Cloud9, create an instance using the default options.

From the terminal:

  1. Install the Amplify CLI by running this command.
    npm install -g @aws-amplify/cli
  2. Configure the Amplify CLI using this command. Follow the guided process to completion.
    amplify configure
  3. Clone the project from GitHub.
    git clone https://github.com/aws-samples/aws-serverless-document-scanner.git
  4. Navigate to the amplify-frontend directory and initialize the project using the Amplify CLI command. Follow the guided process to completion.
    cd aws-serverless-document-scanner/amplify-frontend
    
    amplify init
  5. Deploy all the frontend resources to the AWS Cloud using the Amplify CLI command.
    amplify push
  6. After the resources have finished deploying, make note of the StackName and UserPoolId properties in the amplify-frontend/amplify/backend/amplify-meta.json file. These are required when deploying the serverless backend.

Next, deploy the serverless backend. While it can be deployed using the AWS SAM CLI, you can also deploy from the AWS Management Console:

  1. Navigate to the document-scanner application in the AWS Serverless Application Repository.
  2. In Application settings, name the application and provide the StackName and UserPoolId from the frontend application for the UserPoolID and AmplifyStackName parameters. Provide a unique name for the BucketName parameter.
  3. Choose Deploy.
  4. Once complete, copy the API endpoint so that it can be configured on the frontend application in the next section.

Configure and run the frontend application

  1. Create a file, amplify-frontend/src/api-config.js, in the frontend application with the following content. Include the API endpoint and the unique BucketName from the previous step. The s3_region value must be the same as the Region where your serverless backend is deployed.
    const apiConfig = {
    	"endpoint": "<API ENDPOINT>",
    	"s3_bucket_name": "<BucketName>",
    	"s3_region": "<Bucket Region>"
    };
    
    export default apiConfig;
  2. In a terminal, navigate to the root directory of the frontend application and run it locally for testing.
    cd aws-serverless-document-scanner/amplify-frontend
    
    npm install
    
    npm run serve

    You should see an output like this:

  3. To publish the frontend application to cloud hosting, run the following command.
    amplify publish

    Once complete, a URL to the hosted application is provided.

Using the frontend application

Once the application is running locally or hosted in the cloud, navigating to it presents a user login interface with an option to register. The registration flow requires a code sent to the provided email for verification. Once verified you’re presented with the main application interface.

Once you create a project and choose it from the list, you are presented with an interface for uploading images by page number.

On mobile, it uses the device camera to capture images. On desktop, images are provided by the file system. The page selector lets you go back and replace an image on an earlier page, and the corresponding analyzed text is updated in DynamoDB as well.

Each time you upload an image, the page is incremented. Choosing “Generate PDF” calls the endpoint for the GeneratePDF Lambda function and returns a PDF in base64 format. The download begins automatically.

You can also open the PDF in another window, if viewing a preview in a desktop browser.

Understanding the serverless backend

An architecture diagram of the serverless backend.

In the GitHub project, the folder serverless-backend/ contains the AWS SAM template file and the Lambda functions. It creates an API Gateway endpoint, six Lambda functions, an S3 bucket, and two DynamoDB tables. The template also defines an Amazon Cognito authorizer for the API using the UserPoolID passed in as a parameter:

Parameters:
  UserPoolID:
    Type: String
    Description: (Required) The user pool ID created by the Amplify frontend.

  AmplifyStackName:
    Type: String
    Description: (Required) The stack name of the Amplify backend deployment. 

  BucketName:
    Type: String
    Default: "ds-userfilebucket"
    Description: (Required) A unique name for the user file bucket. Must be all lowercase.  


Globals:
  Api:
    Cors:
      AllowMethods: "'*'"
      AllowHeaders: "'*'"
      AllowOrigin: "'*'"

Resources:

  DocumentScannerAPI:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
      Auth:
        DefaultAuthorizer: CognitoAuthorizer
        Authorizers:
          CognitoAuthorizer:
            UserPoolArn: !Sub 'arn:aws:cognito-idp:${AWS::Region}:${AWS::AccountId}:userpool/${UserPoolID}'
            Identity:
              Header: Authorization
        AddDefaultAuthorizerToCorsPreflight: False

This only allows authenticated users of the frontend application to make requests with a JWT token containing their user name and email. The backend uses that information to fetch and store data in DynamoDB that corresponds to the user making the request.

Two DynamoDB tables are created by the AWS SAM template: a Projects table, which tracks all the project names by user, and a Pages table, which tracks pages by project and user. Each table defines a partition key and a range key, which the Lambda functions use to query and sort items. See the documentation to learn more about DynamoDB table key schema.

ProjectsTable:
    Type: AWS::DynamoDB::Table
    Properties: 
      AttributeDefinitions: 
        - 
          AttributeName: "username"
          AttributeType: "S"
        - 
          AttributeName: "project_name"
          AttributeType: "S"
      KeySchema: 
        - AttributeName: username
          KeyType: HASH
        - AttributeName: project_name
          KeyType: RANGE
      ProvisionedThroughput: 
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"

  PagesTable:
    Type: AWS::DynamoDB::Table
    Properties: 
      AttributeDefinitions: 
        - 
          AttributeName: "project"
          AttributeType: "S"
        - 
          AttributeName: "page"
          AttributeType: "N"
      KeySchema: 
        - AttributeName: project
          KeyType: HASH
        - AttributeName: page
          KeyType: RANGE
      ProvisionedThroughput: 
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"

When an API Gateway endpoint is called, it passes the user credentials in the request context to a Lambda function. This is used by the CreateProject Lambda function, which also receives a project name in the request body, to create an item in the Project Table and associate it with a user.

The endpoint for the FetchProjects Lambda function is called to retrieve the list of projects associated with a user. The DeleteProject Lambda function removes a specific project from the Project table and any associated pages in the Pages table. It also deletes the folder in the S3 bucket containing all images for the project.

When a user enters a Project, the API endpoint calls the FetchPageCount Lambda function. This returns the number of pages for a project to update the current page number in the upload selector. The project is retrieved from the path parameters, as defined in the AWS SAM template:

FetchPageCount:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.8
      CodeUri: lambda_functions/fetchPageCount/
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref PagesTable
      Environment:
        Variables:
          PAGES_TABLE_NAME: !Ref PagesTable
      Events:
        GetResource:
          Type: Api
          Properties:
            RestApiId: !Ref DocumentScannerAPI
            Path: /pages/count/{project+}
            Method: get  

The template creates an S3 bucket and two AWS IAM managed policies. The policies are applied to the AuthRole and UnauthRole created by Amplify. This allows users to upload images directly to the S3 bucket. To understand how Amplify works with Storage, see the documentation.

The template also sets an S3 event notification on the bucket for all object create events with a “.png” suffix. Whenever the frontend uploads an image to S3, the object create event invokes the ProcessDocument Lambda function.

The function parses the object key to get the project name, user, and page number. Amazon Textract then analyzes the text of the image. The object returned by Amazon Textract contains the detected text and detailed information, such as the positioning of text in the image. Only the raw lines of text are stored in the Pages table.

import os
import boto3
import urllib.parse

client = boto3.resource('dynamodb')
textract = boto3.client('textract')

tableName = os.environ.get('PAGES_TABLE_NAME')

def handler(event, context):

  table = client.Table(tableName)

  print(table.table_status)

  # The S3 object key encodes the user, project, and page number in its path
  # segments, so they can be recovered by splitting on "/".
  key = urllib.parse.unquote(event['Records'][0]['s3']['object']['key'])
  bucket = event['Records'][0]['s3']['bucket']['name']
  project = key.split('/')[3]
  page = key.split('/')[4].split('.')[0]
  user = key.split('/')[2]

  # Ask Amazon Textract to detect the text in the uploaded image.
  response = textract.detect_document_text(
    Document={
        'S3Object': {
            'Bucket': bucket,
            'Name': key
        }
    })

  # Keep only the raw LINE blocks and join them into a single string.
  fullText = ""

  for item in response["Blocks"]:
    if item["BlockType"] == "LINE":
        fullText = fullText + item["Text"] + '\n'

  print(fullText)

  # Store the detected text in the Pages table, keyed by user/project and page.
  table.put_item(Item= {
    'project': user + '/' + project,
    'page': int(page),
    'text': fullText
    })

  return

The GeneratePDF Lambda function retrieves the detected text for each page in a project from the Pages table. It combines the text into a PDF and returns it as a base64-encoded string for download. This function can be modified if your document structure differs.
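
The real implementation is in the repository, but a minimal sketch of the query it relies on, using the same partition key layout as the ProcessDocument function above, might look like this (the environment variable name is an assumption):

import os
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table(os.environ["PAGES_TABLE_NAME"])

def project_text(user, project_name):
    # Fetch every page for this user's project, in ascending page order,
    # then join the detected text ready for PDF generation.
    response = table.query(
        KeyConditionExpression=Key("project").eq(f"{user}/{project_name}"),
        ScanIndexForward=True,
    )
    return "\n\n".join(item["text"] for item in response["Items"])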

Understanding the frontend

In the GitHub repo, the folder amplify-frontend/src/ contains all the code for the frontend application. In main.js, the Amplify VueJS modules are configured to use the resources defined in aws-exports.js. It also configures the endpoint and S3 bucket of the serverless backend, defined in api-config.js.

In components/DocumentScanner.vue, the API module is imported and the API is defined.

API calls are defined as Vue methods that can be called by various other components and elements of the application.

In components/Project.vue, the frontend uses the Storage module for Amplify to upload images. For more information on how to use S3 in an Amplify project see the documentation.

Conclusion

This blog post shows how to create a multiuser application that analyzes text from images and generates PDF documents, in a secure and scalable way using a serverless approach. The example also shows an event-driven pattern for handling high-volume image processing using S3, Lambda, and Amazon Textract.

The Amplify Framework simplifies the process of implementing authentication, storage, and backend integration. Explore the full solution on GitHub to modify it for your next project or startup idea.

To learn more about AWS serverless and keep up to date on the latest features, subscribe to the YouTube channel.

#ServerlessForEveryone

Recreate Q*bert’s cube-hopping action | Wireframe #42

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/recreate-qberts-cube-hopping-action-wireframe-42/

Code the mechanics of an eighties arcade hit in Python and Pygame Zero. Mark Vanstone shows you how

Players must change the colour of every cube to complete the level.

Late in 1982, a funny little orange character with a big nose landed in arcades. The titular Q*bert’s task was to jump around a network of cubes arranged in a pyramid formation, changing the colours of each as they went. Once the cubes were all the same colour, it was on to the next level; to make things more interesting, there were enemies like Coily the snake, and objects which helped Q*bert: some froze enemies in their tracks, while floating discs provided a lift back to the top of the stage.

Q*bert was designed by Warren Davis and Jeff Lee at the American company Gottlieb, and soon became such a smash hit that, the following year, it was already being ported to most of the home computer platforms available at the time. New versions and remakes continued to appear for years afterwards, with a mobile phone version appearing in 2003. Q*bert was by far Gottlieb’s most popular game, and after several changes in company ownership, the firm is now part of Sony’s catalogue – Q*bert’s main character even made its way into the 2015 film, Pixels.

Q*bert uses isometric-style graphics to draw a pseudo-3D display – something we can easily replicate in Pygame Zero by using a single cube graphic with which we make a pyramid of Actor objects. Starting with seven cubes on the bottom row, we can create a simple double loop to create the pile of cubes. Our Q*bert character will be another Actor object which we’ll position at the top of the pile to start. The game screen can then be displayed in the draw() function by looping through our 28 cube Actors and then drawing Q*bert.

Our homage to Q*bert. Try not to fall into the terrifying void.
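
Here's a bare-bones sketch of that pyramid-building idea; image names, screen coordinates, and spacing are placeholders rather than Mark's exact values, and you run it with pgzrun.

WIDTH, HEIGHT = 800, 600

cubes = []
for row in range(7):                     # 7 + 6 + ... + 1 = 28 cubes
    for col in range(7 - row):
        cube = Actor("cube")             # one shared cube image
        cube.pos = (208 + col * 64 + row * 32, 500 - row * 48)
        cubes.append(cube)

qbert = Actor("qbert")
qbert.pos = (cubes[-1].x, cubes[-1].y - 48)   # start on the top cube

def draw():
    screen.clear()
    for cube in cubes:
        cube.draw()
    qbert.draw()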

We need to detect player input, and for this we use the built-in keyboard object and check the cursor keys in our update() function. We need to make Q*bert move from cube to cube so we can move the Actor 32 pixels on the x-axis and 48 pixels on the y-axis. If we do this in steps of 2 for x and 3 for y, we will have Q*bert on the next cube in 16 steps. We can also change his image to point in the right direction depending on the key pressed in our jump() function. If we use this linear movement in our move() function, we’ll see the Actor go in a straight line to the next block. To add a bit of bounce to Q*bert’s movement, we add or subtract (depending on the direction) the values in the bounce[] list. This will make a bit more of a curved movement to the animation.

Now that we have our long-nosed friend jumping around, we need to check where he’s landing. We can loop through the cube positions and check whether Q*bert is over each one. If he is, then we change the image of the cube to one with a yellow top. If we don’t detect a cube under Q*bert, then the critter’s jumped off the pyramid, and the game’s over. We can then do a quick loop through all the cube Actors, and if they’ve all been changed, then the player has completed the level. So those are the basic mechanics of jumping around on a pyramid of cubes. We just need some snakes and other baddies to annoy Q*bert – but we’ll leave those for you to add. Good luck!
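
Continuing the same sketch, the landing check described above could look something like this, called once Q*bert finishes a 16-step jump:

def check_landing():
    for cube in cubes:
        # Is Q*bert sitting on top of this cube?
        if abs(qbert.x - cube.x) < 16 and abs(qbert.y - (cube.y - 48)) < 16:
            cube.image = "cube-yellow"                # placeholder image name
            if all(c.image == "cube-yellow" for c in cubes):
                print("Level complete!")
            return
    print("Game over - our hero has leapt into the void")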

Here’s Mark’s code for a Q*bert-style, cube-hopping platform game. To get it running on your system, you’ll need to install Pygame Zero. And to download the full code and assets, head here.

Get your copy of Wireframe issue 42

You can read more features like this one in Wireframe issue 42, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 42 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate Q*bert’s cube-hopping action | Wireframe #42 appeared first on Raspberry Pi.

Raspberry Pi retro player

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/raspberry-pi-retro-player/

We found this project at TeCoEd and we loved the combination of an OLED display housed inside a retro Argus slide viewer. It uses a Raspberry Pi 3 with Python and OpenCV to pull out single frames from a video and write them to the display in real time.

TeCoEd names this creation the Raspberry Pi Retro Player, or RPRP, or – rather neatly – RP squared. The Argus viewer, he tells us, was a charity-shop find that cost just 50p.  It sat collecting dust for a few years until he came across an OLED setup guide on hackster.io, which inspired the birth of the RPRP.

Timelapse of the build and walk-through of the code

At the heart of the project is a Raspberry Pi 3 which is running a Python program that uses the OpenCV computer vision library.  The code takes a video clip and breaks it down into individual frames. Then it resizes each frame and converts it to black and white, before writing it to the OLED display. The viewer sees the video play in pleasingly retro monochrome on the slide viewer.

Tiny but cute, like us!
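
TeCoEd's full code is on GitHub, but as a rough sketch of that loop using the luma.oled library (which supports the SH1106 mentioned below), it might look something like this; the video file name and I2C address are assumptions.

import cv2
from PIL import Image
from luma.core.interface.serial import i2c
from luma.oled.device import sh1106

device = sh1106(i2c(port=1, address=0x3C))    # assumed I2C address for the OLED

video = cv2.VideoCapture("clip.mp4")          # assumed video file
while True:
    ok, frame = video.read()
    if not ok:
        break
    # Shrink the frame to the panel size and convert it to 1-bit monochrome.
    frame = cv2.resize(frame, (device.width, device.height))
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    device.display(Image.fromarray(gray).convert("1"))
video.release()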

TeCoEd ran into some frustrating problems with the OLED display, which, he discovered, uses the SH1106 driver rather than the standard SSD1306 driver that the Adafruit CircuitPython library expects. Many OLED displays use the SSD1306 driver, but it turns out that cheaper displays like the one in this project use the SH1106. He has made a video to spare other makers this particular throw-it-all-in-the-bin moment.

Tutorial for using the SH1106 driver for cheap OLED displays

If you’d like to try this build for yourself, here’s all the code and setup advice on GitHub.

Wiring diagram

TeCoEd is, as ever, our favourite kind of maker – the sharing kind! He has collated everything you’ll need to get to grips with OpenCV, connecting the SH1106 OLED screen over I2C, and more. He’s even told us where we can buy the OLED board.

The post Raspberry Pi retro player appeared first on Raspberry Pi.

Jump-starting your serverless development environment

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/jump-starting-your-serverless-development-environment/

Developers building serverless applications often wonder how they can jump-start their local development environment. This blog post provides a broad guide for those developers wanting to set up a development environment for building serverless applications.

AWS and open source tools for a serverless development environment.

To use AWS Lambda and other AWS services, create and activate an AWS account.

Command line tooling

Command line tools are scripts, programs, and libraries that enable rapid application development and interactions from within a command line shell.

The AWS CLI

The AWS Command Line Interface (AWS CLI) is an open source tool that enables developers to interact with AWS services using a command line shell. In many cases, the AWS CLI increases developer velocity for building cloud resources and enables automating repetitive tasks. It is an important piece of any serverless developer’s toolkit. Follow these instructions to install and configure the AWS CLI on your operating system.

AWS enables you to build infrastructure with code. This provides a single source of truth for AWS resources. It enables development teams to use version control and create deployment pipelines for their cloud infrastructure. AWS CloudFormation provides a common language to model and provision these application resources in your cloud environment.

AWS Serverless Application Model (AWS SAM CLI)

AWS Serverless Application Model (AWS SAM) is an extension for CloudFormation that further simplifies the process of building serverless application resources.

It provides shorthand syntax to define Lambda functions, APIs, databases, and event source mappings. During deployment, the AWS SAM syntax is transformed into AWS CloudFormation syntax, enabling you to build serverless applications faster.

The AWS SAM CLI is an open source command line tool used to locally build, test, debug, and deploy serverless applications defined with AWS SAM templates.

Install AWS SAM CLI on your operating system.

Test the installation by initializing a new quick start project with the following command:

$ sam init
  1. Choose 1 for the “Quick Start Templates”
  2. Choose 1 for the “Node.js runtime”
  3. Use the default name.

The generated /sam-app/template.yaml contains all the resource definitions for your serverless application. This includes a Lambda function with a REST API endpoint, along with the necessary IAM permissions.

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello-world/
      Handler: app.lambdaHandler
      Runtime: nodejs12.x
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get

Deploy this application using the AWS SAM CLI guided deploy:

$ sam deploy -g

Local testing with AWS SAM CLI

The AWS SAM CLI requires Docker containers to simulate the AWS Lambda runtime environment on your local development environment. To test locally, install Docker Engine and run the Lambda function with following command:

$ sam local invoke "HelloWorldFunction" -e events/event.json

The first time this function is invoked, Docker downloads the lambci/lambda:nodejs12.x container image. It then invokes the Lambda function with a pre-defined event JSON file.

Helper tools

There are a number of open source tools and packages available to help you monitor, author, and optimize your Lambda-based applications. Some of the most popular tools are described in the following sections.

Template validation tooling

CloudFormation Linter is a validation tool that helps with your CloudFormation development cycle. It analyzes CloudFormation YAML and JSON templates to resolve and validate intrinsic functions and resource properties. By analyzing your templates before deploying them, you can save valuable development time and build automated validation into your deployment release cycle.

Follow these instructions to install the tool.

Once installed, run the cfn-lint command with the path to your AWS SAM template provided as the first argument:

cfn-lint template.yaml
AWS SAM template validation with cfn-lint

The following example shows that the template is not valid because the !GettAtt function does not evaluate correctly.

IDE tooling

Use AWS IDE plugins to author and invoke Lambda functions from within your existing integrated development environment (IDE). AWS IDE toolkits are available for PyCharm, IntelliJ, and Visual Studio.

The AWS Toolkit for Visual Studio Code provides an integrated experience for developing serverless applications. It enables you to invoke Lambda functions, specify function configurations, locally debug, and deploy—all conveniently from within the editor. The toolkit supports Node.js, Python, and .NET.

The AWS Toolkit for Visual Studio Code

From Visual Studio Code, choose the Extensions icon on the Activity Bar. In the Search Extensions in Marketplace box, enter AWS Toolkit and then choose AWS Toolkit for Visual Studio Code as shown in the following example. This opens a new tab in the editor showing the toolkit’s installation page. Choose the Install button in the header to add the extension.

AWS Toolkit extension for Visual Studio Code

AWS Cloud9

Another option to build a development environment without having to install anything locally is to use AWS Cloud9. AWS Cloud9 is a cloud-based integrated development environment (IDE) for writing, running, and debugging code from within the browser.

It provides a seamless experience for developing serverless applications. It has a preconfigured development environment that includes AWS CLI, AWS SAM CLI, SDKs, code libraries, and many useful plugins. AWS Cloud9 also provides an environment for locally testing and debugging AWS Lambda functions. This eliminates the need to upload your code to the Lambda console. It allows developers to iterate on code directly, saving time, and improving code quality.

Follow this guide to set up AWS Cloud9 in your AWS environment.

Advanced tooling

Efficient configuration of Lambda functions is critical when expecting optimal cost and performance of your serverless applications. Lambda allows you to control the memory (RAM) allocation for each function.

Lambda charges based on the number of function requests and the duration, the time it takes for your code to run. The price for duration depends on the amount of RAM you allocate to your function. A smaller RAM allocation may reduce the performance of your application if your function is running compute-heavy workloads. If performance needs outweigh cost, you can increase the memory allocation.
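
To make the trade-off concrete, here is a rough back-of-the-envelope calculation; the prices are illustrative assumptions rather than current Lambda pricing.

PRICE_PER_GB_SECOND = 0.0000166667   # assumed duration price
PRICE_PER_REQUEST   = 0.0000002      # assumed per-request price

def monthly_cost(requests, avg_duration_ms, memory_mb):
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# Doubling the memory doubles the per-millisecond duration price, but if the
# extra RAM halves the execution time, the duration charge stays the same.
print(monthly_cost(1_000_000, 120, 512))    # ~ $1.20
print(monthly_cost(1_000_000, 60, 1024))    # ~ $1.20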

Cost and performance optimization tooling

AWS Lambda power tuner is an open source tool that uses an AWS Step Functions state machine to suggest cost and performance optimizations for your Lambda functions. It invokes a given function with multiple memory configurations. It analyzes the execution log results to determine and suggest power configurations that minimize cost and maximize performance.

To deploy the tool:

  1. Clone the repository as follows:
    $ git clone https://github.com/alexcasalboni/aws-lambda-power-tuning.git
  2. Create an Amazon S3 bucket and enter the deployment configurations in /scripts/deploy.sh:
    # config
    BUCKET_NAME=your-sam-templates-bucket
    STACK_NAME=lambda-power-tuning
    PowerValues='128,512,1024,1536,3008'
  3. Run the deploy.sh script from your terminal. This uses the AWS SAM CLI to deploy the application:
    $ bash scripts/deploy.sh
  4. Run the power tuning tool from the terminal using the AWS CLI:
    aws stepfunctions start-execution \
    --state-machine-arn arn:aws:states:us-east-1:0123456789:stateMachine:powerTuningStateMachine-Vywm3ozPB6Am \
    --input "{\"lambdaARN\": \"arn:aws:lambda:us-east-1:1234567890:function:testytest\", \"powerValues\":[128,256,512,1024,2048],\"num\":50,\"payload\":{},\"parallelInvocation\":true,\"strategy\":\"cost\"}" \
    --output json
  5. The Step Functions execution output produces a link to a visual summary of the suggested results:

    AWS Lambda power tuning results

Monitoring and debugging tooling

Sls-dev-tools is an open source tool that delivers serverless metrics directly to the terminal. It gives developers feedback on their serverless application’s metrics, plus key bindings to deploy, open, and manipulate stack resources. Bringing this data directly to your terminal or IDE reduces context switching between the developer environment and the web interfaces. This can increase application development speed and improve user experience.

Follow these instructions to install the tool onto your development environment.

To open the tool, run the following command:

$ sls-dev-tools

Follow the in-terminal interface to choose which stack to monitor or edit.

The following example shows how the tool can be used to invoke a Lambda function with a custom payload from within the IDE.

Invoke an AWS Lambda function with a custom payload using sls-dev-tools

Serverless database tooling

NoSQL Workbench for Amazon DynamoDB is a GUI application for modern database development and operations. It provides a visual IDE tool for data modeling and visualization with query development features to help build serverless applications with Amazon DynamoDB tables. Define data models using one or more tables and visualize the data model to see how it works in different scenarios. Run or simulate operations and generate the code for Python, JavaScript (Node.js), or Java.

Choose the correct operating system link to download and install NoSQL Workbench on your development machine.

The following example illustrates a connection to a DynamoDB table. A data scan is built using the GUI, with Node.js code generated for inclusion in a Lambda function:

Connecting to an Amazon DynamoDB table with NoSQL Workbench for Amazon DynamoDB

Generating query code with NoSQL Workbench for Amazon DynamoDB

Conclusion

Building serverless applications allows developers to focus on business logic instead of managing and operating infrastructure. This is achieved by using managed services. Developers often struggle with knowing which tools, libraries, and frameworks are available to help with this new approach to building applications. This post shows tools that builders can use to create a serverless developer environment to help accelerate software development.

This list represents AWS and open source tools but does not include our APN Partners. For partner offers, check here.

Read more to start building serverless applications.

Raspberry Pi + Furby = ‘Furlexa’ voice assistant

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/raspberry-pi-furby-furlexa-voice-assistant/

How can you turn a redundant, furry, slightly annoying tech pet into a useful home assistant? Zach took to howchoo to show you how to combine a Raspberry Pi Zero W with Amazon’s Alexa Voice Service software and a Furby to create Furlexa.

Furby was pretty impressive technology, considering that it’s over 20 years old. It could learn to speak English, sort of, by listening to humans. It communicated with other Furbies via infrared sensor. It even slept when its light sensor registered that it was dark.

Furby innards, exploded

Zach explains why Furby is so easy to hack:

Furby is comprised of a few primary components — a microprocessor, infrared and light sensors, microphone, speaker, and — most impressively — a single motor that uses an elaborate system of gears and cams to drive Furby’s ears, eyes, mouth and rocker. A cam position sensor (switch) tells the microprocessor what position the cam system is in. By driving the motor at varying speeds and directions and by tracking the cam position, the microprocessor can tell Furby to dance, sing, sleep, or whatever.

The original CPU and related circuitry were replaced with a Raspberry Pi Zero W

Zach continues: “Though the microprocessor isn’t worth messing around with (it’s buried inside a blob of resin to protect the IP), it would be easy to install a small Raspberry Pi computer inside of Furby, use it to run Alexa, and then track Alexa’s output to make Furby move.”

What you’ll need:

Harrowing

Running Alexa

The Raspberry Pi is running Alexa Voice Service (AVS) to provide full Amazon Echo functionality. Amazon AVS doesn’t officially support the tiny Raspberry Pi Zero, so lots of hacking was required. Point 10 on Zach’s original project walkthrough explains how to get AVS working with the Pimoroni Speaker pHAT.

Animating Furby

A small motor driver board is connected to the Raspberry Pi’s GPIO pins, and controls Furby’s original DC motor and gearbox: when Alexa speaks, so does Furby. The Raspberry Pi Zero can’t supply enough juice to power the motor, so instead, it’s powered by Furby’s original battery pack.
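
As a hedged illustration of that wiring (the pin numbers are assumptions, and Zach's actual script is linked in the list below), gpiozero makes the motor control itself very short:

from time import sleep
from gpiozero import Motor

motor = Motor(forward=23, backward=24)   # assumption: H-bridge inputs on GPIO 23 and 24

def chatter(seconds, speed=0.6):
    # Rock the motor back and forth so Furby's mouth and ears move while Alexa talks.
    remaining = seconds
    while remaining > 0:
        motor.forward(speed)
        sleep(0.15)
        motor.backward(speed)
        sleep(0.15)
        remaining -= 0.3
    motor.stop()

chatter(2)   # wiggle for roughly two seconds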

Software

There are three key pieces of software that make Furlexa possible:

  1. Amazon Alexa on Raspberry Pi – there are tonnes of tutorials showing you how to get Amazon Alexa up and running on your Raspberry Pi. Try this one on instructables.
  2. A script to control Furby’s motor – howchooer Tyler wrote the Python script that Zach is using to drive the motor, and you can copy and paste it from Zach’s howchoo walkthrough.
  3. A script that detects when Alexa is speaking and calls the motor program – Furby detects when Alexa is speaking by monitoring a file whose contents change when audio is being output. Zach has written a separate guide for driving a DC motor based on Linux sound output; a rough sketch of the idea follows this list.
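
Here is a rough sketch of the idea in point 3, paired with the chatter() helper from the motor sketch above; the ALSA status path is an assumption for illustration, and Zach's guide has the working details.

from time import sleep

STATUS_FILE = "/proc/asound/card0/pcm0p/sub0/status"   # assumed ALSA playback status file

def audio_playing():
    # The status file reports "state: RUNNING" while sound is being output.
    try:
        with open(STATUS_FILE) as f:
            return "RUNNING" in f.read()
    except OSError:
        return False

while True:
    if audio_playing():
        chatter(0.3)        # the motor helper from the sketch above
    sleep(0.1)
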
Teeny tiny living space

The real challenge was cramming the Raspberry Pi Zero plus the Speaker pHAT, the motor controller board, and all the wiring back inside Furby, where space is at a premium. Soldering wires directly to the GPIO saved a bit of room, and foam tape holds everything above together nice and tightly. It’s a squeeze!

Zach is a maker extraordinaire, so check out his projects page on howchoo.

The post Raspberry Pi + Furby = ‘Furlexa’ voice assistant appeared first on Raspberry Pi.