
More on NIST’s Post-Quantum Cryptography

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/more_on_nists_p.html

Back in July, NIST selected third-round algorithms for its post-quantum cryptography standard.

Recently, Daniel Apon of NIST gave a talk detailing the selection criteria. Interesting stuff.

NOTE: We’re in the process of moving this blog to WordPress. Comments will be disabled until the move is complete. The management thanks you for your cooperation and support.

Nandu’s lockdown Raspberry Pi robot project

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/nandus-lockdown-raspberry-pi-robot-project/

Nandu Vadakkath was inspired by a line-following robot built (literally) entirely from salvage materials that could wait patiently and purchase beer for its maker in Tamil Nadu, India. So he set about building his own, with the goal of making it capable of slightly more sophisticated tasks.

“Robot, can you play a song?”

Hardware

Robot comes when called, and recognises you as its special human

Software

Nandu had ambitious plans for his robot: navigation, speech and listening, recognition, and much more were on the list of things he wanted it to do. And in order to make it do everything he wanted, he incorporated a lot of software, including:

Robot shares Nandu’s astrological chart
  • Python 3
  • virtualenv, a tool for creating isolated virtual Python environments
  • the OpenCV open source computer vision library
  • the spaCy open source natural language processing library
  • the TensorFlow open source machine learning platform
  • Haar cascade algorithms for object detection
  • A ResNet neural network with the COCO dataset for object detection
  • DeepSpeech, an open source speech-to-text engine
  • eSpeak NG, an open source speech synthesiser
  • The MySQL database service

So how did Nandu go about trying to make the robot do some of the things on his wishlist?

Context and intents engine

The engine uses spaCy to analyse sentences, classify all the elements it identifies, and store all this information in a MySQL database. When the robot encounters a sentence with a series of possible corresponding actions, it weighs them to see what the most likely context is, based on sentences it has previously encountered.
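
Nandu's exact implementation isn't shown, but a minimal sketch of the idea, assuming spaCy's small English model and the mysql-connector-python package (the table and column names here are made up for illustration):

# A rough sketch, not Nandu's actual engine: spaCy extracts the verb and
# object of a command, and MySQL keeps a record of utterances for later weighing.
import spacy
import mysql.connector  # assumes the mysql-connector-python package

nlp = spacy.load("en_core_web_sm")

def classify(sentence):
    """Return the (action, target) pair found in a spoken sentence."""
    doc = nlp(sentence)
    action = next((t.lemma_ for t in doc if t.pos_ == "VERB"), None)
    target = next((t.text for t in doc if t.dep_ in ("dobj", "pobj")), None)
    return action, target

db = mysql.connector.connect(user="robot", password="...", database="intents")
cursor = db.cursor()
action, target = classify("Robot, can you play a song?")
cursor.execute(
    "INSERT INTO utterances (action, target) VALUES (%s, %s)",
    (action, target),
)
db.commit()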

Getting to know you

The robot has been trained to follow Nandu around but it can get to know other people too. When it meets a new person, it takes a series of photos and processes them in the background, so it learns to remember them.

Nandu's home made robot
There she blows!
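
The post doesn't include Nandu's recognition code, but a minimal sketch of the enrolment step could look like this, assuming OpenCV's bundled Haar face detector and the LBPH recogniser from the opencv-contrib-python package:

import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)

samples = []
while len(samples) < 30:  # take a series of photos of the new person
    ok, frame = camera.read()
    if not ok:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        samples.append(cv2.resize(gray[y:y + h, x:x + w], (200, 200)))

camera.release()
recogniser = cv2.face.LBPHFaceRecognizer_create()
recogniser.train(samples, np.array([1] * len(samples)))  # label 1 = this person
recogniser.write("new_person.yml")  # processed in the background, recalled later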

Speech

Nandu didn’t like the thought of a basic robotic voice, so he searched high and low until he came across the MBROLA UK English voice. Have a listen in the videos above!

Object and people detection

The robot has an excellent group photo function: it looks for a person, calculates the distance between the top of their head and the top of the frame, then tilts the camera until this distance is about 60 pixels. This is a lot more effort than some human photographers put into getting all of everyone’s heads into the frame.
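
The article doesn't show the code, but the logic described translates to something like this sketch, where tilt_camera() is a hypothetical stand-in for the robot's pan-tilt control:

TARGET_GAP = 60  # desired pixels between the top of the head and the frame top

def frame_group(faces, tolerance=5):
    """faces: (x, y, w, h) boxes from a detector; y is measured from frame top."""
    head_top = min(y for (x, y, w, h) in faces)  # the tallest person's head
    error = head_top - TARGET_GAP
    if abs(error) > tolerance:
        # tilting the camera down shifts subjects up in the frame, and vice versa
        tilt_camera(down=(error > 0))
        return False  # keep adjusting on the next frame
    return True  # framed nicely: take the photo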

Nandu has created a YouTube channel for his robot companion, so be sure to keep up with its progress!

The post Nandu’s lockdown Raspberry Pi robot project appeared first on Raspberry Pi.

Schneier.com is Moving

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/schneiercom_is_.html

I’m switching my website software from Movable Type to WordPress, and moving to a new host.

The migration is expected to last from approximately 3 AM EST Monday until 4 PM EST Tuesday. The site will still be visible during that time, but comments will be disabled. (This is to prevent any new comments from disappearing in the move.)

This is not a site redesign, so you shouldn’t notice many differences. Even the commenting system is pretty much the same, though you’ll be able to use Markdown instead of HTML if you want to.

The conversion to WordPress was done by Automattic, who did an amazing job of getting all of the site’s customizations and complexities — this website is 17 years old — to work on a new platform. Automattic is also providing the new hosting on their Pressable service. I’m not sure I could have done it without them.

Hopefully everything will work smoothly.

Troubleshooting Amazon API Gateway with enhanced observability variables

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/troubleshooting-amazon-api-gateway-with-enhanced-observability-variables/

Amazon API Gateway is often used for managing access to serverless applications. Additionally, it can help developers reduce code and increase security with features like AWS WAF integration and authorizers at the API level.

Because more is handled by API Gateway, developers tell us they would like to see more data points on the individual parts of the request. This data helps developers understand each phase of the API request and how it affects the request as a whole. In response, the API Gateway team has added new enhanced observability variables to the API Gateway access logs. With these new variables, developers can troubleshoot on a more granular level to quickly isolate and resolve request errors and latency issues.

The phases of an API request

API Gateway divides requests into phases, reflected by the variables that have been added. Depending upon the features configured for the application, an API request goes through multiple phases. The phases appear in a specific order as follows:

Phases of an API request

  • WAF: the WAF phase only appears when an AWS WAF web access control list (ACL) is configured for enhanced security. During this phase, WAF rules are evaluated and a decision is made on whether to continue or cancel the request.
  • Authenticate: the authenticate phase is only present when AWS Identity and Access Management (IAM) authorizers are used. During this phase, the credentials of the signed request are verified. Access is granted or denied based on the client’s right to assume the access role.
  • Authorizer: the authorizer phase is only present when a Lambda, JWT, or Amazon Cognito authorizer is used. During this phase, the authorizer logic is processed to verify the user’s right to access the resource.
  • Authorize: the authorize phase is only present when a Lambda or IAM authorizer is used. During this phase, the results from the authenticate and authorizer phase are evaluated and applied.
  • Integration: during this phase, the backend integration processes the request.

Each phase can add latency to the request, return a status, or raise an error. To capture this data, API Gateway now provides enhanced observability variables based on each phase. The variables are named according to the phase they occur in and follow the naming structure $context.phase.property. Therefore, you can get data about WAF latency by using $context.waf.latency.

Some existing variables have also been given aliases to match this naming schema. For example, $context.integrationErrorMessage has a new alias of $context.integration.error. The resulting list of variables is as follows:

Phases and variables for API Gateway requests

API Gateway provides status, latency, and error data for each phase. In the authorizer and integration phases, there are additional variables you can use in logs: $context.phase.requestId provides the request ID from that service, and $context.phase.integrationStatus provides the status code.

For example, when using an AWS Lambda function as the integration, API Gateway receives two status codes. The first, $context.integration.integrationStatus, is the status of the Lambda service itself. This is usually 200, unless there is a service or permissions error. The second, $context.integration.status, is the status of the Lambda function and reports on the success or failure of the code.

A full list of access log variables is in the documentation for REST APIs, WebSocket APIs, and HTTP APIs.

A troubleshooting example

In this example, an application is built using an API Gateway REST API with a Lambda function for the backend integration. The application uses an IAM authorizer to require AWS account credentials for application access. The application also uses an AWS WAF ACL to rate limit requests to 100 requests per IP, per five minutes. The demo application and deployment instructions can be found in the Sessions With SAM repository.

Because the application involves an AWS WAF and IAM authorizer for security, the request passes through four phases: waf, authenticate, authorize, and integration. The access log format is configured to capture all the data regarding these phases:

{
  "requestId":"$context.requestId",
  "waf-error":"$context.waf.error",
  "waf-status":"$context.waf.status",
  "waf-latency":"$context.waf.latency",
  "waf-response":"$context.wafResponseCode",
  "authenticate-error":"$context.authenticate.error",
  "authenticate-status":"$context.authenticate.status",
  "authenticate-latency":"$context.authenticate.latency",
  "authorize-error":"$context.authorize.error",
  "authorize-status":"$context.authorize.status",
  "authorize-latency":"$context.authorize.latency",
  "integration-error":"$context.integration.error",
  "integration-status":"$context.integration.status",
  "integration-latency":"$context.integration.latency",
  "integration-requestId":"$context.integration.requestId",
  "integration-integrationStatus":"$context.integration.integrationStatus",
  "response-latency":"$context.responseLatency",
  "status":"$context.status"
}

Once the application is deployed, use Postman to test the API with a sigV4 request.

Configuring Postman authorization
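
If you prefer a script to Postman, here's a minimal sketch of sending a SigV4-signed request in Python with botocore's signer (the URL is a placeholder for your API endpoint):

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

url = "https://abc123.execute-api.us-east-1.amazonaws.com/Prod/resource"  # placeholder

credentials = boto3.Session().get_credentials()
request = AWSRequest(method="GET", url=url)
SigV4Auth(credentials, "execute-api", "us-east-1").add_auth(request)  # sign it

response = requests.get(url, headers=dict(request.headers))
print(response.status_code, response.text)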

To show troubleshooting with the new enhanced observability variables, the first request sent through contains invalid credentials. The user receives a 403 Forbidden error.

Client response view with invalid tokens

The access log for this request is:

{
    "requestId": "70aa9606-26be-4396-991c-405a3671fd9a",
    "waf-error": "-",
    "waf-status": "200",
    "waf-latency": "8",
    "waf-response": "WAF_ALLOW",
    "authenticate-error": "-",
    "authenticate-status": "403",
    "authenticate-latency": "17",
    "authorize-error": "-",
    "authorize-status": "-",
    "authorize-latency": "-",
    "integration-error": "-",
    "integration-status": "-",
    "integration-latency": "-",
    "integration-requestId": "-",
    "integration-integrationStatus": "-",
    "response-latency": "48",
    "status": "403"
}

The request passed through the waf phase first. Since this is the first request and the rate limit has not been exceeded, the request is passed on to the next phase, authenticate. During the authenticate phase, the user’s credentials are verified. In this case, the credentials are invalid and the request is rejected with a 403 response before invoking the downstream phases.

To correct this, the next request uses valid credentials, but those credentials do not have access to invoke the API. Again, the user receives a 403 Forbidden error.

Client response view with unauthorized tokens

The access log for this request is:

{
  "requestId": "c16d9edc-037d-4f42-adf3-eaadf358db2d",
  "waf-error": "-",
  "waf-status": "200",
  "waf-latency": "7",
  "waf-response": "WAF_ALLOW",
  "authenticate-error": "-",
  "authenticate-status": "200",
  "authenticate-latency": "8",
  "authorize-error": "The client is not authorized to perform this operation.",
  "authorize-status": "403",
  "authorize-latency": "0",
  "integration-error": "-",
  "integration-status": "-",
  "integration-latency": "-",
  "integration-requestId": "-",
  "integration-integrationStatus": "-",
  "response-latency": "52",
  "status": "403"
}

This time, the access logs show that the authenticate phase returns a 200. This indicates that the user credentials are valid for this account. However, the authorize phase returns a 403 and states, “The client is not authorized to perform this operation”. Again, the request is rejected with a 403 response before invoking downstream phases.

The last request for the API contains valid credentials for a user that has rights to invoke this API. This time the user receives a 200 OK response and the requested data.

Client response view with valid request

The log for this request is:

{
  "requestId": "ac726ce5-91dd-4f1d-8f34-fcc4ae0bd622",
  "waf-error": "-",
  "waf-status": "200",
  "waf-latency": "7",
  "waf-response": "WAF_ALLOW",
  "authenticate-error": "-",
  "authenticate-status": "200",
  "authenticate-latency": "1",
  "authorize-error": "-",
  "authorize-status": "200",
  "authorize-latency": "0",
  "integration-error": "-",
  "integration-status": "200",
  "integration-latency": "16",
  "integration-requestId": "8dc58335-fa13-4d48-8f99-2b1c97f41a3e",
  "integration-integrationStatus": "200",
  "response-latency": "48",
  "status": "200"
}

This log contains a 200 status code from each of the phases and returns a 200 response to the user. Additionally, each of the phases reports latency. This request had a total of 48 ms of latency. The latency breaks down according to the following:

Request latency breakdown
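
Summing the phase latencies in the log above: 7 ms (waf) + 1 ms (authenticate) + 0 ms (authorize) + 16 ms (integration) = 24 ms. The remaining portion of the 48 ms responseLatency is presumably spent in API Gateway's own request and response processing.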

Developers can use this information to identify the cause of latency within the API request and adjust accordingly. While some phases like authenticate or authorize are immutable, optimizing the integration phase of this request could remove a large chunk of the latency involved.

Conclusion

This post covers the enhanced observability variables, the phases they occur in, and the order of those phases. With these new variables, developers can quickly isolate the problem and focus on resolving issues.

When configured with the proper access logging variables, API Gateway access logs can provide a detailed story of API performance. They can help developers to continually optimize that performance. To learn how to configure logging in API Gateway with AWS SAM, see the demonstration app for this blog.

#ServerlessForEveryone

Building a serverless document scanner using Amazon Textract and AWS Amplify

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-a-serverless-document-scanner-using-amazon-textract-and-aws-amplify/

This guide demonstrates creating and deploying a production-ready document scanning application. It allows users to manage projects, upload images, and generate a PDF from detected text. The sample can be used as a template for building expense tracking applications, handling forms and legal documents, or for digitizing books and notes.

The frontend application is written in Vue.js and uses the Amplify Framework. The backend is built using AWS serverless technologies and consists of an Amazon API Gateway REST API that invokes AWS Lambda functions. Amazon Textract analyzes the text in images uploaded to an Amazon S3 bucket. Detected text is stored in Amazon DynamoDB.

An architectural diagram of the application.

Prerequisites

You need the following to complete the project:

Deploy the application

The solution consists of two parts, the frontend application and the serverless backend. The Amplify CLI deploys the Amazon Cognito authentication and hosting resources for the frontend. The backend requires the Amazon Cognito user pool identifier to configure an authorizer on the API. This enables an authorization workflow, as shown in the following image.

A diagram showing how an Amazon Cognito authorization workflow works

First, configure the frontend. Complete the following steps using a terminal running on a computer or by using the AWS Cloud9 IDE. If using AWS Cloud9, create an instance using the default options.

From the terminal:

  1. Install the Amplify CLI by running this command.
    npm install -g @aws-amplify/cli
  2. Configure the Amplify CLI using this command. Follow the guided process to completion.
    amplify configure
  3. Clone the project from GitHub.
    git clone https://github.com/aws-samples/aws-serverless-document-scanner.git
  4. Navigate to the amplify-frontend directory and initialize the project using the Amplify CLI command. Follow the guided process to completion.
    cd aws-serverless-document-scanner/amplify-frontend
    
    amplify init
  5. Deploy all the frontend resources to the AWS Cloud using the Amplify CLI command.
    amplify push
  6. After the resources have finished deploying, make note of the StackName and UserPoolId properties in the amplify-frontend/amplify/backend/amplify-meta.json file. These are required when deploying the serverless backend.

Next, deploy the serverless backend. While it can be deployed using the AWS SAM CLI, you can also deploy from the AWS Management Console:

  1. Navigate to the document-scanner application in the AWS Serverless Application Repository.
  2. In Application settings, name the application and provide the StackName and UserPoolId from the frontend application for the UserPoolID and AmplifyStackName parameters. Provide a unique name for the BucketName parameter.
  3. Choose Deploy.
  4. Once complete, copy the API endpoint so that it can be configured on the frontend application in the next section.

Configure and run the frontend application

  1. Create a file, amplify-frontend/src/api-config.js, in the frontend application with the following content. Include the API endpoint and the unique BucketName from the previous step. The s3_region value must be the same as the Region where your serverless backend is deployed.
    const apiConfig = {
    	"endpoint": "<API ENDPOINT>",
    	"s3_bucket_name": "<BucketName>",
    	"s3_region": "<Bucket Region>"
    };
    
    export default apiConfig;
  2. In a terminal, navigate to the root directory of the frontend application and run it locally for testing.
    cd aws-serverless-document-scanner/amplify-frontend
    
    npm install
    
    npm run serve

    You should see an output like this:

  3. To publish the frontend application to cloud hosting, run the following command.
    amplify publish

    Once complete, a URL to the hosted application is provided.

Using the frontend application

Once the application is running locally or hosted in the cloud, navigating to it presents a user login interface with an option to register. The registration flow requires a code sent to the provided email for verification. Once verified, you're presented with the main application interface.

Once you create a project and choose it from the list, you are presented with an interface for uploading images by page number.

On mobile, it uses the device camera to capture images. On desktop, images are provided by the file system. You can replace an image, and the page selector lets you go back and change any page; the corresponding analyzed text is updated in DynamoDB as well.

Each time you upload an image, the page is incremented. Choosing “Generate PDF” calls the endpoint for the GeneratePDF Lambda function and returns a PDF in base64 format. The download begins automatically.

You can also open the PDF in another window, if viewing a preview in a desktop browser.

Understanding the serverless backend

An architecture diagram of the serverless backend.

In the GitHub project, the folder serverless-backend/ contains the AWS SAM template file and the Lambda functions. It creates an API Gateway endpoint, six Lambda functions, an S3 bucket, and two DynamoDB tables. The template also defines an Amazon Cognito authorizer for the API using the UserPoolID passed in as a parameter:

Parameters:
  UserPoolID:
    Type: String
    Description: (Required) The user pool ID created by the Amplify frontend.

  AmplifyStackName:
    Type: String
    Description: (Required) The stack name of the Amplify backend deployment. 

  BucketName:
    Type: String
    Default: "ds-userfilebucket"
    Description: (Required) A unique name for the user file bucket. Must be all lowercase.  


Globals:
  Api:
    Cors:
      AllowMethods: "'*'"
      AllowHeaders: "'*'"
      AllowOrigin: "'*'"

Resources:

  DocumentScannerAPI:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
      Auth:
        DefaultAuthorizer: CognitoAuthorizer
        Authorizers:
          CognitoAuthorizer:
            UserPoolArn: !Sub 'arn:aws:cognito-idp:${AWS::Region}:${AWS::AccountId}:userpool/${UserPoolID}'
            Identity:
              Header: Authorization
        AddDefaultAuthorizerToCorsPreflight: False

This only allows authenticated users of the frontend application to make requests with a JWT token containing their user name and email. The backend uses that information to fetch and store data in DynamoDB that corresponds to the user making the request.

Two DynamoDB tables are created. A Project table, which tracks all the project names by user, and a Pages table, which tracks pages by project and user. The DynamoDB tables are created by the AWS SAM template with the partition key and range key defined for each table. These are used by the Lambda functions to query and sort items. See the documentation to learn more about DynamoDB table key schema.

  ProjectsTable:
    Type: AWS::DynamoDB::Table
    Properties: 
      AttributeDefinitions: 
        - 
          AttributeName: "username"
          AttributeType: "S"
        - 
          AttributeName: "project_name"
          AttributeType: "S"
      KeySchema: 
        - AttributeName: username
          KeyType: HASH
        - AttributeName: project_name
          KeyType: RANGE
      ProvisionedThroughput: 
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"

  PagesTable:
    Type: AWS::DynamoDB::Table
    Properties: 
      AttributeDefinitions: 
        - 
          AttributeName: "project"
          AttributeType: "S"
        - 
          AttributeName: "page"
          AttributeType: "N"
      KeySchema: 
        - AttributeName: project
          KeyType: HASH
        - AttributeName: page
          KeyType: RANGE
      ProvisionedThroughput: 
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"

When an API Gateway endpoint is called, it passes the user credentials in the request context to a Lambda function. This is used by the CreateProject Lambda function, which also receives a project name in the request body, to create an item in the Project table and associate it with a user.

The endpoint for the FetchProjects Lambda function is called to retrieve the list of projects associated with a user. The DeleteProject Lambda function removes a specific project from the Project table and any associated pages in the Pages table. It also deletes the folder in the S3 bucket containing all images for the project.

When a user enters a Project, the API endpoint calls the FetchPageCount Lambda function. This returns the number of pages for a project to update the current page number in the upload selector. The project is retrieved from the path parameters, as defined in the AWS SAM template:

FetchPageCount:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.8
      CodeUri: lambda_functions/fetchPageCount/
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref PagesTable
      Environment:
        Variables:
          PAGES_TABLE_NAME: !Ref PagesTable
      Events:
        GetResource:
          Type: Api
          Properties:
            RestApiId: !Ref DocumentScannerAPI
            Path: /pages/count/{project+}
            Method: get  

The template creates an S3 bucket and two AWS IAM managed policies. The policies are applied to the AuthRole and UnauthRole created by Amplify. This allows users to upload images directly to the S3 bucket. To understand how Amplify works with Storage, see the documentation.

The template also sets an S3 event notification on the bucket for all object create events with a “.png” suffix. Whenever the frontend uploads an image to S3, the object create event invokes the ProcessDocument Lambda function.

The function parses the object key to get the project name, user, and page number. Amazon Textract then analyzes the text of the image. The object returned by Amazon Textract contains the detected text and detailed information, such as the positioning of text in the image. Only the raw lines of text are stored in the Pages table.

import os
import json, decimal
import boto3
import urllib.parse
from boto3.dynamodb.conditions import Key, Attr

client = boto3.resource('dynamodb')
textract = boto3.client('textract')

tableName = os.environ.get('PAGES_TABLE_NAME')

def handler(event, context):

  table = client.Table(tableName)

  print(table.table_status)
 
  # the S3 object key encodes the user, project, and page as path segments
  key = urllib.parse.unquote(event['Records'][0]['s3']['object']['key'])
  bucket = event['Records'][0]['s3']['bucket']['name']
  project = key.split('/')[3]
  page = key.split('/')[4].split('.')[0]  # file name (minus .png) is the page number
  user = key.split('/')[2]
  
  response = textract.detect_document_text(
    Document={
        'S3Object': {
            'Bucket': bucket,
            'Name': key
        }
    })
    
  fullText = ""
  
  for item in response["Blocks"]:
    if item["BlockType"] == "LINE":
        fullText = fullText + item["Text"] + '\n'
  
  print(fullText)

  table.put_item(Item= {
    'project': user + '/' + project,
    'page': int(page), 
    'text': fullText
    })

  # print(response)
  return

The GeneratePDF Lambda function retrieves the detected text for each page in a project from the Pages table. It combines the text into a PDF and returns it as a base64-encoded string for download. This function can be modified if your document structure differs.
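
As a flavour of the approach, a minimal sketch of the GeneratePDF idea, assuming the fpdf2 library (the table name is a placeholder; the deployed table name comes from the AWS SAM stack):

import base64
import boto3
from boto3.dynamodb.conditions import Key
from fpdf import FPDF

def generate_pdf(user, project):
    table = boto3.resource('dynamodb').Table('PagesTable')  # placeholder name
    # pages are stored under a "user/project" partition key, sorted by page number
    pages = table.query(
        KeyConditionExpression=Key('project').eq(f'{user}/{project}')
    )['Items']

    pdf = FPDF()
    pdf.set_font('helvetica', size=12)
    for page in sorted(pages, key=lambda p: p['page']):
        pdf.add_page()
        pdf.multi_cell(0, 8, page['text'])  # one scanned page per PDF page

    # return the document as base64 for the frontend to download
    return base64.b64encode(bytes(pdf.output())).decode('utf-8')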

Understanding the frontend

In the GitHub repo, the folder amplify-frontend/src/ contains all the code for the frontend application. In main.js, the Amplify VueJS modules are configured to use the resources defined in aws-exports.js. It also configures the endpoint and S3 bucket of the serverless backend, defined in api-config.js.

In components/DocumentScanner.vue, the API module is imported and the API is defined.

API calls are defined as Vue methods that can be called by various other components and elements of the application.

In components/Project.vue, the frontend uses the Storage module for Amplify to upload images. For more information on how to use S3 in an Amplify project see the documentation.

Conclusion

This blog post shows how to create a multiuser application that can analyze text from images and generate PDF documents. This guide demonstrates how to do so in a secure and scalable way using a serverless approach. The example also shows an event-driven pattern for handling high-volume image processing using S3, Lambda, and Amazon Textract.

The Amplify Framework simplifies the process of implementing authentication, storage, and backend integration. Explore the full solution on GitHub to modify it for your next project or startup idea.

To learn more about AWS serverless and keep up to date on the latest features, subscribe to the YouTube channel.

#ServerlessForEveryone

Recreate Q*bert’s cube-hopping action | Wireframe #42

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/recreate-qberts-cube-hopping-action-wireframe-42/

Code the mechanics of an eighties arcade hit in Python and Pygame Zero. Mark Vanstone shows you how

Players must change the colour of every cube to complete the level.

Late in 1982, a funny little orange character with a big nose landed in arcades. The titular Q*bert’s task was to jump around a network of cubes arranged in a pyramid formation, changing the colours of each as they went. Once the cubes were all the same colour, it was on to the next level; to make things more interesting, there were enemies like Coily the snake, and objects which helped Q*bert: some froze enemies in their tracks, while floating discs provided a lift back to the top of the stage.

Q*bert was designed by Warren Davis and Jeff Lee at the American company Gottlieb, and soon became such a smash hit that, the following year, it was already being ported to most of the home computer platforms available at the time. New versions and remakes continued to appear for years afterwards, with a mobile phone version appearing in 2003. Q*bert was by far Gottlieb’s most popular game, and after several changes in company ownership, the firm is now part of Sony’s catalogue – Q*bert’s main character even made its way into the 2015 film, Pixels.

Q*bert uses isometric-style graphics to draw a pseudo-3D display – something we can easily replicate in Pygame Zero by using a single cube graphic with which we make a pyramid of Actor objects. Starting with seven cubes on the bottom row, we can create a simple double loop to create the pile of cubes. Our Q*bert character will be another Actor object which we’ll position at the top of the pile to start. The game screen can then be displayed in the draw() function by looping through our 28 cube Actors and then drawing Q*bert.

Our homage to Q*bert. Try not to fall into the terrifying void.
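
Mark's full code is linked at the end of this article; as a taste of the approach, here's a minimal sketch of the pyramid setup in Pygame Zero (image names and pixel offsets are illustrative, not the magazine's exact values):

cubes = []
for row in range(7):  # 28 cubes: 1 on the top row, 7 on the bottom
    for col in range(row + 1):
        cube = Actor('cube')  # assumes a 'cube' image in your images folder
        cube.x = 400 - row * 32 + col * 64  # centre each row horizontally
        cube.y = 150 + row * 48             # step down 48 pixels per row
        cubes.append(cube)

qbert = Actor('qbert', (400, 110))  # start on top of the pile
qbert.step = 0                      # progress through the current jump

def draw():
    screen.clear()
    for cube in cubes:
        cube.draw()
    qbert.draw()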

We need to detect player input, and for this we use the built-in keyboard object and check the cursor keys in our update() function. We need to make Q*bert move from cube to cube so we can move the Actor 32 pixels on the x-axis and 48 pixels on the y-axis. If we do this in steps of 2 for x and 3 for y, we will have Q*bert on the next cube in 16 steps. We can also change his image to point in the right direction depending on the key pressed in our jump() function. If we use this linear movement in our move() function, we’ll see the Actor go in a straight line to the next block. To add a bit of bounce to Q*bert’s movement, we add or subtract (depending on the direction) the values in the bounce[] list. This will make a bit more of a curved movement to the animation.
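
In sketch form, the stepped jump described above might look like this (direction handling simplified; the bounce values are illustrative but sum to zero, so the hop lands level; check_landing() is sketched after the next paragraph):

# 16 steps of (±2, ±3) pixels moves Q*bert exactly 32 x 48 to the next cube
bounce = [-3, -3, -2, -2, -1, -1, 0, 0, 0, 0, 1, 1, 2, 2, 3, 3]

def move(dx, dy):
    qbert.x += dx                        # ±2 each step
    qbert.y += dy + bounce[qbert.step]   # ±3 each step, plus a curve
    qbert.step += 1
    if qbert.step == 16:                 # 16 steps later, we're on the next cube
        qbert.step = 0
        check_landing()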

Now that we have our long-nosed friend jumping around, we need to check where he’s landing. We can loop through the cube positions and check whether Q*bert is over each one. If he is, then we change the image of the cube to one with a yellow top. If we don’t detect a cube under Q*bert, then the critter’s jumped off the pyramid, and the game’s over. We can then do a quick loop through all the cube Actors, and if they’ve all been changed, then the player has completed the level. So those are the basic mechanics of jumping around on a pyramid of cubes. We just need some snakes and other baddies to annoy Q*bert – but we’ll leave those for you to add. Good luck!
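
And the landing check might be sketched like so (again illustrative, with level_complete() and game_over() as hypothetical stand-ins; grab Mark's full code for the real thing):

def check_landing():
    for cube in cubes:
        # is Q*bert sitting on top of this cube?
        if abs(qbert.x - cube.x) < 16 and abs(qbert.y - (cube.y - 48)) < 8:
            cube.image = 'cube_yellow'  # change the cube's colour
            if all(c.image == 'cube_yellow' for c in cubes):
                level_complete()  # hypothetical: advance to the next level
            return
    game_over()  # hypothetical: no cube underneath, Q*bert fell off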

Here’s Mark’s code for a Q*bert-style, cube-hopping platform game. To get it running on your system, you’ll need to install Pygame Zero. And to download the full code and assets, head here.

Get your copy of Wireframe issue 42

You can read more features like this one in Wireframe issue 42, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 42 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate Q*bert’s cube-hopping action | Wireframe #42 appeared first on Raspberry Pi.

Raspberry Pi retro player

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/raspberry-pi-retro-player/

We found this project at TeCoEd and we loved the combination of an OLED display housed inside a retro Argus slide viewer. It uses a Raspberry Pi 3 with Python and OpenCV to pull out single frames from a video and write them to the display in real time.

TeCoEd names this creation the Raspberry Pi Retro Player, or RPRP, or – rather neatly – RP squared. The Argus viewer, he tells us, was a charity-shop find that cost just 50p.  It sat collecting dust for a few years until he came across an OLED setup guide on hackster.io, which inspired the birth of the RPRP.

Timelapse of the build and walk-through of the code

At the heart of the project is a Raspberry Pi 3 which is running a Python program that uses the OpenCV computer vision library.  The code takes a video clip and breaks it down into individual frames. Then it resizes each frame and converts it to black and white, before writing it to the OLED display. The viewer sees the video play in pleasingly retro monochrome on the slide viewer.

Tiny but cute, like us!

TeCoEd ran into some frustrating problems with the OLED display, which, he discovered, uses the SH1106 driver rather than the SSD1306 driver that the Adafruit CircuitPython library expects. Many OLED displays use the SSD1306, but it turns out that cheaper displays like the one in this project use the SH1106. He has made a video to spare other makers this particular throw-it-all-in-the-bin moment.

Tutorial for using the SH1106 driver for cheap OLED displays
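
All of TeCoEd's code is on GitHub (linked below). For flavour, here's a minimal sketch of the frame-writing loop using the luma.oled library, which supports the SH1106 (an assumption on my part; TeCoEd's setup may differ):

import cv2
from PIL import Image
from luma.core.interface.serial import i2c
from luma.oled.device import sh1106

serial = i2c(port=1, address=0x3C)  # a typical I2C address for these displays
device = sh1106(serial)

cap = cv2.VideoCapture("clip.mp4")  # the source video
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of the clip
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)           # convert to black and white
    small = cv2.resize(gray, (device.width, device.height))  # fit the OLED
    device.display(Image.fromarray(small).convert("1"))      # write the frame
cap.release()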

If you’d like to try this build for yourself, here’s all the code and setup advice on GitHub.

Wiring diagram

TeCoEd is, as ever, our favourite kind of maker – the sharing kind! He has collated everything you’ll need to get to grips with OpenCV, connecting the SH1106 OLED screen over I2C, and more. He’s even told us where we can buy the OLED board.

The post Raspberry Pi retro player appeared first on Raspberry Pi.

Jump-starting your serverless development environment

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/jump-starting-your-serverless-development-environment/

Developers building serverless applications often wonder how they can jump-start their local development environment. This blog post provides a broad guide for those developers wanting to set up a development environment for building serverless applications.

AWS and open source tools for a serverless development environment.

To use AWS Lambda and other AWS services, create and activate an AWS account.

Command line tooling

Command line tools are scripts, programs, and libraries that enable rapid application development and interactions from within a command line shell.

The AWS CLI

The AWS Command Line Interface (AWS CLI) is an open source tool that enables developers to interact with AWS services using a command line shell. In many cases, the AWS CLI increases developer velocity for building cloud resources and enables automating repetitive tasks. It is an important piece of any serverless developer’s toolkit. Follow these instructions to install and configure the AWS CLI on your operating system.

AWS enables you to build infrastructure with code. This provides a single source of truth for AWS resources. It enables development teams to use version control and create deployment pipelines for their cloud infrastructure. AWS CloudFormation provides a common language to model and provision these application resources in your cloud environment.

AWS Serverless Application Model (AWS SAM CLI)

AWS Serverless Application Model (AWS SAM) is an extension for CloudFormation that further simplifies the process of building serverless application resources.

It provides shorthand syntax to define Lambda functions, APIs, databases, and event source mappings. During deployment, the AWS SAM syntax is transformed into AWS CloudFormation syntax, enabling you to build serverless applications faster.

The AWS SAM CLI is an open source command line tool used to locally build, test, debug, and deploy serverless applications defined with AWS SAM templates.

Install AWS SAM CLI on your operating system.

Test the installation by initializing a new quick start project with the following command:

$ sam init
  1. Choose 1 for the “Quick Start Templates”.
  2. Choose 1 for the “Node.js” runtime.
  3. Use the default name.

The generated /sam-app/template.yaml contains all the resource definitions for your serverless application. This includes a Lambda function with a REST API endpoint, along with the necessary IAM permissions.

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello-world/
      Handler: app.lambdaHandler
      Runtime: nodejs12.x
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get

Deploy this application using the AWS SAM CLI guided deploy:

$ sam deploy -g

Local testing with AWS SAM CLI

The AWS SAM CLI uses Docker containers to simulate the AWS Lambda runtime environment on your local machine. To test locally, install Docker Engine and run the Lambda function with the following command:

$ sam local invoke "HelloWorldFunction" -e events/event.json

The first time this function is invoked, Docker downloads the lambci/lambda:nodejs12.x container image. It then invokes the Lambda function with a pre-defined event JSON file.

Helper tools

There are a number of open source tools and packages available to help you monitor, author, and optimize your Lambda-based applications. Some of the most popular are described in the following sections.

Template validation tooling

CloudFormation Linter is a validation tool that helps with your CloudFormation development cycle. It analyzes CloudFormation YAML and JSON templates to resolve and validate intrinsic functions and resource properties. By analyzing your templates before deploying them, you can save valuable development time and build automated validation into your deployment release cycle.

Follow these instructions to install the tool.

Once installed, run the cfn-lint command with the path to your AWS SAM template provided as the first argument:

cfn-lint template.yaml
AWS SAM template validation with cfn-lint

The following example shows that the template is not valid because the !GettAtt function (a misspelling of !GetAtt) does not evaluate correctly.

IDE tooling

Use AWS IDE plugins to author and invoke Lambda functions from within your existing integrated development environment (IDE). AWS IDE toolkits are available for PyCharm, IntelliJ, and Visual Studio.

The AWS Toolkit for Visual Studio Code provides an integrated experience for developing serverless applications. It enables you to invoke Lambda functions, specify function configurations, locally debug, and deploy—all conveniently from within the editor. The toolkit supports Node.js, Python, and .NET.

The AWS Toolkit for Visual Studio Code

From Visual Studio Code, choose the Extensions icon on the Activity Bar. In the Search Extensions in Marketplace box, enter AWS Toolkit and then choose AWS Toolkit for Visual Studio Code as shown in the following example. This opens a new tab in the editor showing the toolkit’s installation page. Choose the Install button in the header to add the extension.

AWS Toolkit extension for Visual Studio Code

AWS Cloud9

Another option to build a development environment without having to install anything locally is to use AWS Cloud9. AWS Cloud9 is a cloud-based integrated development environment (IDE) for writing, running, and debugging code from within the browser.

It provides a seamless experience for developing serverless applications. It has a preconfigured development environment that includes AWS CLI, AWS SAM CLI, SDKs, code libraries, and many useful plugins. AWS Cloud9 also provides an environment for locally testing and debugging AWS Lambda functions. This eliminates the need to upload your code to the Lambda console. It allows developers to iterate on code directly, saving time and improving code quality.

Follow this guide to set up AWS Cloud9 in your AWS environment.

Advanced tooling

Efficient configuration of Lambda functions is critical to achieving optimal cost and performance for your serverless applications. Lambda allows you to control the memory (RAM) allocation for each function.

Lambda charges based on the number of function requests and the duration, the time it takes for your code to run. The price for duration depends on the amount of RAM you allocate to your function. A smaller RAM allocation may reduce the performance of your application if your function is running compute-heavy workloads. If performance needs outweigh cost, you can increase the memory allocation.
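
As a back-of-the-envelope illustration (the numbers here are illustrative; check current Lambda pricing), duration cost scales with allocated memory:

# illustrative only: Lambda bills duration in GB-seconds
memory_gb = 512 / 1024   # 512 MB allocated
duration_s = 0.120       # 120 ms average run time
invocations = 1_000_000

gb_seconds = memory_gb * duration_s * invocations
print(gb_seconds)  # 60,000 GB-s

# doubling the memory doubles the duration cost, unless the extra power
# also cuts the run time, which is exactly what the tuner below measures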

Cost and performance optimization tooling

AWS Lambda power tuner is an open source tool that uses an AWS Step Functions state machine to suggest cost and performance optimizations for your Lambda functions. It invokes a given function with multiple memory configurations. It analyzes the execution log results to determine and suggest power configurations that minimize cost and maximize performance.

To deploy the tool:

  1. Clone the repository as follows:
    $ git clone https://github.com/alexcasalboni/aws-lambda-power-tuning.git
  2. Create an Amazon S3 bucket and enter the deployment configurations in /scripts/deploy.sh:
    # config
    BUCKET_NAME=your-sam-templates-bucket
    STACK_NAME=lambda-power-tuning
    PowerValues='128,512,1024,1536,3008'
  3. Run the deploy.sh script from your terminal; this uses the AWS SAM CLI to deploy the application:
    $ bash scripts/deploy.sh
  4. Run the power tuning tool from the terminal using the AWS CLI:
    aws stepfunctions start-execution \
    --state-machine-arn arn:aws:states:us-east-1:0123456789:stateMachine:powerTuningStateMachine-Vywm3ozPB6Am \
    --input "{\"lambdaARN\": \"arn:aws:lambda:us-east-1:1234567890:function:testytest\", \"powerValues\":[128,256,512,1024,2048],\"num\":50,\"payload\":{},\"parallelInvocation\":true,\"strategy\":\"cost\"}" \
    --output json
  5. The Step Functions execution output produces a link to a visual summary of the suggested results:

    AWS Lambda power tuning results

Monitoring and debugging tooling

Sls-dev-tools is an open source serverless tool that delivers serverless metrics directly to the terminal. It provides developers with feedback on their serverless application's metrics, plus key bindings that deploy, open, and manipulate stack resources. Bringing this data directly to your terminal or IDE reduces context switching between the developer environment and the web interfaces. This can increase application development speed and improve user experience.

Follow these instructions to install the tool onto your development environment.

To open the tool, run the following command:

$ sls-dev-tools

Follow the in-terminal interface to choose which stack to monitor or edit.

The following example shows how the tool can be used to invoke a Lambda function with a custom payload from within the IDE.

Invoke an AWS Lambda function with a custom payload using sls-dev-tools

Serverless database tooling

NoSQL Workbench for Amazon DynamoDB is a GUI application for modern database development and operations. It provides a visual IDE tool for data modeling and visualization with query development features to help build serverless applications with Amazon DynamoDB tables. Define data models using one or more tables and visualize the data model to see how it works in different scenarios. Run or simulate operations and generate the code for Python, JavaScript (Node.js), or Java.

Choose the correct operating system link to download and install NoSQL Workbench on your development machine.

The following example illustrates a connection to a DynamoDB table. A data scan is built using the GUI, with Node.js code generated for inclusion in a Lambda function:

Connecting to an Amazon DynamoDB table with NoSQL Workbench for Amazon DynamoDB

Generating query code with NoSQL Workbench for Amazon DynamoDB
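
The screenshots show generated Node.js; for comparison, an equivalent scan in Python with boto3 looks something like this (the table and attribute names are hypothetical):

import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource('dynamodb').Table('Products')  # hypothetical table
response = table.scan(FilterExpression=Attr('category').eq('bikes'))
for item in response['Items']:
    print(item)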

Conclusion

Building serverless applications allows developers to focus on business logic instead of managing and operating infrastructure. This is achieved by using managed services. Developers often struggle with knowing which tools, libraries, and frameworks are available to help with this new approach to building applications. This post shows tools that builders can use to create a serverless developer environment to help accelerate software development.

This list represents AWS and open source tools but does not include our APN Partners. For partner offers, check here.

Read more to start building serverless applications.

Raspberry Pi + Furby = ‘Furlexa’ voice assistant

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/raspberry-pi-furby-furlexa-voice-assistant/

How can you turn a redundant, furry, slightly annoying tech pet into a useful home assistant? Zach took to howchoo to show you how to combine a Raspberry Pi Zero W with Amazon’s Alexa Voice Service software and a Furby to create Furlexa.

Furby was pretty impressive technology, considering that it’s over 20 years old. It could learn to speak English, sort of, by listening to humans. It communicated with other Furbies via infrared sensor. It even slept when its light sensor registered that it was dark.

Furby innards, exploded

Zach explains why Furby is so easy to hack:

Furby is comprised of a few primary components — a microprocessor, infrared and light sensors, microphone, speaker, and — most impressively — a single motor that uses an elaborate system of gears and cams to drive Furby’s ears, eyes, mouth and rocker. A cam position sensor (switch) tells the microprocessor what position the cam system is in. By driving the motor at varying speeds and directions and by tracking the cam position, the microprocessor can tell Furby to dance, sing, sleep, or whatever.

The original CPU and related circuitry were replaced with a Raspberry Pi Zero W

Zach continues: “Though the microprocessor isn’t worth messing around with (it’s buried inside a blob of resin to protect the IP), it would be easy to install a small Raspberry Pi computer inside of Furby, use it to run Alexa, and then track Alexa’s output to make Furby move.”

What you’ll need:

Harrowing

Running Alexa

The Raspberry Pi is running Alexa Voice Service (AVS) to provide full Amazon Echo functionality. Amazon AVS doesn’t officially support the tiny Raspberry Pi Zero, so lots of hacking was required. Point 10 on Zach’s original project walkthrough explains how to get AVS working with the Pimoroni Speaker pHAT.

Animating Furby

A small motor driver board is connected to the Raspberry Pi’s GPIO pins, and controls Furby’s original DC motor and gearbox: when Alexa speaks, so does Furby. The Raspberry Pi Zero can’t supply enough juice to power the motor, so instead, it’s powered by Furby’s original battery pack.

Software

There are three key pieces of software that make Furlexa possible:

  1. Amazon Alexa on Raspberry Pi – there are tonnes of tutorials showing you how to get Amazon Alexa up and running on your Raspberry Pi. Try this one on instructables.
  2. A script to control Furby’s motor – howchooer Tyler wrote the Python script that Zach is using to drive the motor, and you can copy and paste it from Zach’s howchoo walkthrough.
  3. A script that detects when Alexa is speaking and calls the motor program – Furby detects when Alexa is speaking by monitoring a file whose contents change when audio is being output. Zach has written a separate guide for driving a DC motor based on Linux sound output; a rough sketch of the idea follows this list.
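
A sketch of that detection loop might look like this (the status file path is typical for ALSA on a Raspberry Pi, and move_mouth() stands in for a call into the motor script):

import time

STATUS_FILE = "/proc/asound/card0/pcm0p/sub0/status"  # ALSA playback status

def audio_playing():
    with open(STATUS_FILE) as f:
        return "state: RUNNING" in f.read()  # RUNNING while sound is output

while True:
    if audio_playing():
        move_mouth()  # hypothetical: drive Furby's motor while Alexa talks
    time.sleep(0.1)
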
Teeny tiny living space

The real challenge was cramming the Raspberry Pi Zero plus the Speaker pHAT, the motor controller board, and all the wiring back inside Furby, where space is at a premium. Soldering wires directly to the GPIO saved a bit of room, and foam tape holds everything above together nice and tightly. It’s a squeeze!

Zach is a maker extraordinaire, so check out his projects page on howchoo.

The post Raspberry Pi + Furby = ‘Furlexa’ voice assistant appeared first on Raspberry Pi.

North Korea ATM Hack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/09/north_korea_atm.html

The US Cybersecurity and Infrastructure Security Agency (CISA) published a long and technical alert describing a North Korea hacking scheme against ATMs in a bunch of countries worldwide:

This joint advisory is the result of analytic efforts among the Cybersecurity and Infrastructure Security Agency (CISA), the Department of the Treasury (Treasury), the Federal Bureau of Investigation (FBI) and U.S. Cyber Command (USCYBERCOM). Working with U.S. government partners, CISA, Treasury, FBI, and USCYBERCOM identified malware and indicators of compromise (IOCs) used by the North Korean government in an automated teller machine (ATM) cash-out scheme — referred to by the U.S. Government as “FASTCash 2.0: North Korea’s BeagleBoyz Robbing Banks.”

The level of detail is impressive, as seems to be common in CISA’s alerts and analysis reports.

Modernizing and containerizing a legacy MVC .NET application with Entity Framework to .NET Core with Entity Framework Core: Part 2

Post Syndicated from Pratip Bagchi original https://aws.amazon.com/blogs/devops/modernizing-and-containerizing-a-legacy-mvc-net-application-with-entity-framework-to-net-core-with-entity-framework-core-part-2/

This is the second post in a two-part series in which you migrate and containerize a modernized enterprise application. In Part 1, we walked you through a step-by-step approach to re-architect a legacy ASP.NET MVC application and port it to the .NET Core framework. In this post, you deploy the previously re-architected application to Amazon Elastic Container Service (Amazon ECS) and run it as a task with AWS Fargate.

Overview of solution

In the first post, you ported the legacy MVC ASP.NET application to ASP.NET Core. In this post, you modernize the same application as a Docker container and host it in an ECS cluster.

The following diagram illustrates this architecture.

Architecture Diagram

 

You first launch a SQL Server Express RDS instance (1) and create a Cycle Store database on that instance, with tables for different categories and subcategories of bikes. You use the previously re-architected and modernized ASP.NET Core application as a starting point for this post; the application uses AWS Secrets Manager (2) to fetch database credentials for the Amazon RDS instance. In the next step, you build a Docker image of the application and push it to Amazon Elastic Container Registry (Amazon ECR) (3). After this you create an ECS cluster (4) to run the Docker image as an AWS Fargate task.

Prerequisites

For this walkthrough, you should have the following prerequisites:

This post implements the solution in Region us-east-1.

Source Code

Clone the source code from the GitHub repo. The source code folder contains the re-architected source code and the AWS CloudFormation template to launch the infrastructure, and Amazon ECS task definition.

Setting up the database server

To make sure that your database works out of the box, you use a CloudFormation template to create an instance of Microsoft SQL Server Express, AWS Secrets Manager secrets to store database credentials, and the security groups and IAM roles needed to access Amazon Relational Database Service (Amazon RDS) and Secrets Manager. This stack takes approximately 15 minutes to complete, with most of that time spent provisioning the services.

  1. On the AWS CloudFormation console, choose Create stack.
  2. For Prepare template, select Template is ready.
  3. For Template source, select Upload a template file.
  4. Upload SqlServerRDSFixedUidPwd.yaml, which is available in the GitHub repo.
  5. Choose Next.
    Create AWS CloudFormation stack
  6. For Stack name, enter SQLRDSEXStack.
  7. Choose Next.
  8. Keep the rest of the options at their default.
  9. Select I acknowledge that AWS CloudFormation might create IAM resources with custom names.
  10. Choose Create stack.
    Add IAM Capabilities
  11. When the status shows as CREATE_COMPLETE, choose the Outputs tab.
  12. Record the value for the SQLDatabaseEndpoint key.
    CloudFormation output
  13. Connect to the database from SQL Server Management Studio with the following credentials: User id: DBUser, Password: DBU$er2020

Setting up the CYCLE_STORE database

To set up your database, complete the following steps:

  1. In SQL Server Management Studio, connect to the DB instance using the ID and password you defined earlier.
  2. Under File, choose New.
  3. Choose Query with Current Connection. Alternatively, choose New Query from the toolbar.
    Run database restore script
  4. Open CYCLE_STORE_Schema_data.sql from the GitHub repository and run it.

This creates the CYCLE_STORE database with all the tables and data you need.

Setting up the ASP.NET MVC Core application

To set up your ASP.NET application, complete the following steps:

  1. Open the re-architected application code that you cloned from the GitHub repo. The Dockerfile added to the solution enables Docker support.
  2. Open the appsettings.Development.json file and replace the RDS endpoint in the ConnectionStrings section with the SQLDatabaseEndpoint value from the AWS CloudFormation stack output, without the port number (:1433 for SQL Server).

The ASP.NET application should now load with bike categories and subcategories. See the following screenshot.

Final run

Setting up Amazon ECR

To set up your repository in Amazon ECR, complete the following steps:

  1. On the Amazon ECR console, choose Repositories.
  2. Choose Create repository.
  3. For Repository name, enter coretoecsrepo.
  4. Choose Create repository.
    Create repository
  5. Copy the repository URI to use later.
  6. Select the repository you just created and choose View push commands.
    View push commands
  7. In the folder where you cloned the repo, navigate to the AdventureWorksMVCCore.Web folder.
  8. In the View push commands popup window, complete steps 1–4 to push your Docker image to Amazon ECR.
    Push to Amazon ECR

The following screenshot shows the completion of Steps 1 and 2. Ensure your working directory is set to AdventureWorksMVCCore.Web, as shown below.

Login
The following screenshot shows the completion of Steps 3 and 4.

Amazon ECR Push

Setting up Amazon ECS

To set up your ECS cluster, complete the following steps:

    1. On the Amazon ECS console, choose Clusters.
    2. Choose Create cluster.
    3. Choose the Networking only cluster template.
    4. Name your cluster cycle-store-cluster.
    5. Leave everything else as its default.
    6. Choose Create cluster.
    7. Select your cluster.
    8. Choose Task Definitions and choose Create new Task Definition.
    9. On the Select launch type compatibility page, choose FARGATE and click Next step.
    10. On the Configure task and container definitions page, scroll to the bottom of the page and choose Configure via JSON.
    11. In the text area, enter the task definition JSON (task-definition.json) provided in the GitHub repo. Make sure to replace [YOUR-AWS-ACCOUNT-NO] in task-definition.json with your AWS account number on lines 44, 68, and 71. The task definition file assumes that you named your repository coretoecsrepo; if you named it something else, modify the file accordingly. It also assumes us-east-1 as your default Region, so replace the Region on lines 15 and 44 if you are using a different one.
    Replace AWS Account number
    12. Choose Save.
    13. On the Task Definitions page, select cycle-store-td.
    14. From the Actions drop-down menu, choose Run Task.
    Run task
    15. Choose Launch type is equal to Fargate.
    16. Choose your default VPC as Cluster VPC.
    17. Select at least one Subnet.
    18. Choose Edit Security Groups and select ECSSecurityGroup (created by the AWS CloudFormation stack).
    Select security group
    19. Choose Run Task.
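
For orientation, the account-specific fields in task-definition.json look roughly like the fragment below. The account ID, Region, role name, and container name here are placeholders or assumptions; the full file in the repo is authoritative:

    {
      "family": "cycle-store-td",
      "requiresCompatibilities": ["FARGATE"],
      "executionRoleArn": "arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
      "containerDefinitions": [
        {
          "name": "cycle-store",
          "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/coretoecsrepo:latest"
        }
      ]
    }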

Running your application

Choose the link under the task and find the public IP. When you navigate to the URL http://your-public-ip, you should see the .NET Core Cycle Store web application user interface running in Amazon ECS.
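
You can also sanity-check the endpoint from a terminal; your-public-ip is the same placeholder used above:

    curl -I http://your-public-ip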


Cleaning up

To avoid incurring future charges, delete the stacks you created for this post.

  1. On the AWS CloudFormation console, choose Stacks.
  2. Select SQLRDSEXStack.
  3. In the Stack details pane, choose Delete.
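
If you prefer the AWS CLI, the same cleanup is a one-liner (assuming the stack name above):

    aws cloudformation delete-stack --stack-name SQLRDSEXStack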

Conclusion

This post concludes your journey toward modernizing a legacy enterprise ASP.NET MVC web application with .NET Core and containerizing it on Amazon ECS using the AWS Fargate compute engine with Linux containers. Porting to .NET Core lets you run enterprise workloads without any dependency on a Windows environment, and AWS Fargate lets you run containers directly, without managing EC2 instances, while keeping full control. AWS has also recently launched additional tools in this area.

About the Author

Saleha Haider is a Senior Partner Solution Architect with Amazon Web Services.
Pratip Bagchi is a Partner Solutions Architect with Amazon Web Services.


Self-driving trash can controlled by Raspberry Pi

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/self-driving-trash-can-controlled-by-raspberry-pi/

YouTuber extraordinaire Ahad Cove HATES taking out the rubbish, so he decided to hack a rubbish bin/trash can – let’s go with trash can from now on – to take itself out to be picked up.

Sounds simple enough? The catch is that Ahad wanted to create an AI that can see when the garbage truck is approaching his house and trigger the garage door to open, then tell the trash can to drive itself out and stop in the right place. This way, Ahad doesn’t need to wake up early enough to spot the truck and manually trigger the trash can to drive itself.

Hardware

The trash can’s original wheels weren’t enough on their own, so Ahad brought in an electronic scooter wheel with a hub motor, powered by a 36V lithium ion battery, to guide and pull them. Check out this part of the video to hear how tricky it was for Ahad to install a braking system using a very strong servo motor.

The new wheel sits at the front of the trash can and drags the original wheels at the back along with it.

An affordable driver board controls the speed, power, and braking system of the garbage can.

The driver board

Tying everything together is a Raspberry Pi 3B+. Ahad uses one of the GPIO pins on the Raspberry Pi to send the signal to the driver board. He started off the project with a Raspberry Pi Zero W, but found that it was too fiddly to get it to handle the crazy braking power needed to stop the garbage can on his sloped driveway.

The Raspberry Pi Zero W, which ended up getting replaced in an upgrade
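
Ahad doesn't publish his control code, but the idea reduces to pulse-width-modulating a couple of GPIO pins. Here's a minimal sketch in Python, assuming (hypothetically) the driver board's throttle input on GPIO 18 and the brake servo's signal wire on GPIO 17; the pin numbers and duty cycles are illustrative placeholders, not Ahad's actual wiring:

    import time
    import RPi.GPIO as GPIO  # standard Raspberry Pi GPIO library

    THROTTLE_PIN = 18  # placeholder: PWM throttle input on the driver board
    BRAKE_PIN = 17     # placeholder: signal wire of the brake servo

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(THROTTLE_PIN, GPIO.OUT)
    GPIO.setup(BRAKE_PIN, GPIO.OUT)

    throttle = GPIO.PWM(THROTTLE_PIN, 1000)  # 1 kHz PWM for motor speed
    brake = GPIO.PWM(BRAKE_PIN, 50)          # 50 Hz is typical for hobby servos

    throttle.start(0)   # 0% duty cycle = stopped
    brake.start(7.5)    # mid-position pulse = brake released (illustrative)

    try:
        throttle.ChangeDutyCycle(30)  # roll forward gently
        time.sleep(5)                 # drive for five seconds
        throttle.ChangeDutyCycle(0)   # cut motor power
        brake.ChangeDutyCycle(12.5)   # swing the servo to clamp the brake
        time.sleep(1)
    finally:
        throttle.stop()
        brake.stop()
        GPIO.cleanup()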

Everything is kept together and dry with a plastic snap-close food container Ahad lifted from his wife’s kitchen collection. Ssh, don’t tell.

Software

Ahad uses an object detection machine learning model to spot when the garbage truck passes his house. He handles this part of the project with an Nvidia Jetson Xavier NX board, connected to a webcam positioned to look out of the window watching for garbage trucks.

Object detected!
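
The video doesn't show Ahad's detection code, but a webcam loop built on OpenCV's DNN module with a COCO-trained SSD model captures the idea. In this sketch the model filenames are assumptions, and class ID 8 is "truck" in the COCO label map:

    import cv2  # OpenCV, including its dnn module

    # Hypothetical model files: a COCO-trained MobileNet-SSD frozen graph
    net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb",
                                        "ssd_mobilenet_v2_coco.pbtxt")
    TRUCK_CLASS_ID = 8  # "truck" in the COCO label map

    cap = cv2.VideoCapture(0)  # the webcam pointed out of the window
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        blob = cv2.dnn.blobFromImage(frame, size=(300, 300), swapRB=True)
        net.setInput(blob)
        detections = net.forward()  # shape (1, 1, N, 7) per SSD output
        for det in detections[0, 0]:
            class_id, score = int(det[1]), float(det[2])
            if class_id == TRUCK_CLASS_ID and score > 0.6:
                print("Garbage truck spotted - time to open the garage")
    cap.release()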

Opening the garage door

Ahad’s garage door has a wireless internet connection, so he connected the door to an app that communicates with his home assistant device. The app opens the garage door when the webcam and object detection software see the garbage truck turning into his street. All this works with the kit inside the trash can to get it to drive itself out to the end of Ahad’s driveway.

There she goes! (With her homemade paparazzi setup behind her)

Check out the end of Ahad’s YouTube video to see how human error managed to put a comical damper on the maiden voyage of this epic build.

The post Self-driving trash can controlled by Raspberry Pi appeared first on Raspberry Pi.

US Postal Service Files Blockchain Voting Patent

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/08/us_postal_servi.html

The US Postal Service has filed a patent on a blockchain voting method:

Abstract: A voting system can use the security of blockchain and the mail to provide a reliable voting system. A registered voter receives a computer readable code in the mail and confirms identity and confirms correct ballot information in an election. The system separates voter identification and votes to ensure vote anonymity, and stores votes on a distributed ledger in a blockchain.

I wasn’t going to bother blogging this, but I’ve received enough emails about it that I should comment.

As is pretty much always the case, blockchain adds nothing. The security of this system has nothing to do with blockchain, and would be better off without it. For voting in particular, blockchain adds to the insecurity. Matt Blaze is most succinct on that point:

Why is blockchain voting a dumb idea?

Glad you asked.

For starters:

  • It doesn’t solve any problems civil elections actually have.
  • It’s basically incompatible with “software independence”, considered an essential property.
  • It can make ballot secrecy difficult or impossible.

Both Ben Adida and Matthew Green have written longer pieces on blockchain and voting.

News articles.

Announcing a second Local Zone in Los Angeles

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/announcing-a-second-local-zone-in-los-angeles/

In December 2019, Jeff Barr published this post announcing the launch of a new Local Zone in Los Angeles, California. A Local Zone extends existing AWS regions closer to end-users, providing single-digit millisecond latency to a subset of AWS services in the zone. Local Zones are attached to parent regions – in this case US West (Oregon) – and access to services and resources is performed through the parent region’s endpoints. This makes Local Zones transparent to your applications and end-users. Applications running in a Local Zone have access to all AWS services, not just the subset in the zone, via Amazon’s redundant and very high bandwidth private network backbone to the parent region.

At the end of the post, Jeff wrote (and I quote) – “In the fullness of time (as Andy Jassy often says), there could very well be more than one Local Zone in any given geographic area. In 2020, we will open a second one in Los Angeles (us-west-2-lax-1b), and are giving consideration to other locations.”

Well – that time has come! I’m pleased to announce that following customer requests, AWS today launched a second Local Zone in Los Angeles to further help customers in the area (and Southern California generally) to achieve even greater high availability and fault tolerance for their applications, in conjunction with very low latency. These customers have workloads that require very low latency, such as artist workstations, local rendering, gaming, financial transaction processing, and more.

The new Los Angeles Local Zone contains a subset of services, such as Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), Amazon FSx for Windows File Server and Amazon FSx for Lustre, Elastic Load Balancing, Amazon Relational Database Service (RDS), and Amazon Virtual Private Cloud, and remember, applications running in the zone can access all AWS services and other resources through the zone’s association with the parent region.

Enabling Local Zones
Once you opt in to use one or more Local Zones in your account, they appear as additional Availability Zones for you to use when deploying your applications and resources. The original zone, launched last December, was us-west-2-lax-1a. The additional zone, available to all customers now, is us-west-2-lax-1b. Local Zones can be enabled using the new Settings section of the EC2 Management Console.

As noted in Jeff’s original post, you can also opt-in (or out) of access to Local Zones with the AWS Command Line Interface (CLI) (aws ec2 modify-availability-zone-group command), AWS Tools for PowerShell (Edit-EC2AvailabilityZoneGroup cmdlet), or by calling the ModifyAvailabilityZoneGroup API from one of the AWS SDKs.
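
For example, opting in from the CLI is a single command; this sketch assumes us-west-2-lax-1 as the zone group name covering the Los Angeles zones:

    aws ec2 modify-availability-zone-group --region us-west-2 --group-name us-west-2-lax-1 --opt-in-status opted-in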

Using Local Zones for Highly-Available, Low-Latency Applications
Local Zones can be used to further improve high availability for applications, as well as ultra-low latency. One scenario is for enterprise migrations using a hybrid architecture. These enterprises have workloads that currently run in existing on-premises data centers in the Los Angeles metro area and it can be daunting to migrate these application portfolios, many of them interdependent, to the cloud. By utilizing AWS Direct Connect in conjunction with a Local Zone, these customers can establish a hybrid environment that provides ultra-low latency communication between applications running in the Los Angeles Local Zone and the on-premises installations without needing a potentially expensive revamp of their architecture. As time progresses, the on-premises applications can be migrated to the cloud in an incremental fashion, simplifying the overall migration process. The diagram below illustrates this type of enterprise hybrid architecture.

Anecdotally, we have heard from customers using this type of hybrid architecture, combining Local Zones with AWS Direct Connect, that they are achieving sub-1.5ms latency communication between the applications hosted in the Local Zone and those in the on-premises data center in the LA metro area.

Virtual desktops for rendering and animation workloads are another scenario for Local Zones. For these workloads, latency is critical, and the addition of a second Local Zone for Los Angeles gives additional failover capability, without sacrificing latency, should the need arise.

As always, the teams are listening to your feedback – thank you! – and are working on adding Local Zones in other locations, along with the availability of additional services, including Amazon ECS, Amazon Elastic Kubernetes Service, Amazon ElastiCache, Amazon Elasticsearch Service, and Amazon Managed Streaming for Apache Kafka. If you’re an AWS customer based in Los Angeles or Southern California, with a requirement for even greater high availability and very low latency for your workloads, then we invite you to check out the new Local Zone. More details and pricing information can be found on the Local Zones page.

— Steve