Tag Archives: python

Code Hyper Sports’ shooting minigame | Wireframe #35

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-hyper-sports-shooting-minigame-wireframe-35/

Gun down the clay pigeons in our re-creation of a classic minigame from Konami’s Hyper Sports. Take it away, Mark Vanstone.

Hyper Sports’ Japanese release was tied in with the 1984 Summer Olympics.

Hyper Sports

The sequel to Konami’s 1983 arcade hit Track & Field, Hyper Sports offered seven games – or events – in which up to four players could participate. Skeet shooting was perhaps the most memorable game in the collection, and required just two buttons: fire left and fire right.

The display showed two target sights, and each moved up and down to come into line with the next clay disc’s trajectory. When the disc was inside the red target square, the player pressed the fire button, and if their timing was correct, the clay disc exploded. Points were awarded for being on target, and every now and then, a parrot flew across the screen, which could be gunned down for a bonus.

Making our game

To make a skeet shooting game with Pygame Zero, we need a few graphical elements. First, a static background of hills and grass, with a clay disc thrower on each side of the screen and a semicircle where our shooter stands – this is drawn first, each time our draw() function is called.

We can then draw our shooter (created as an Actor) in the centre, near the bottom of the screen. The shooter has three images: a central one for when no keys are pressed, and one each for when the player presses the left or right keys. We also need two square target sights to the left and right above the shooter, which we can create as Actors.

When the clay targets appear, the player uses the left and right buttons to shoot either the left or right target respectively.

To make the clay targets, we create an array to hold disc Actor objects. In our update() function we can trigger the creation of a new disc based on a random number, and once created, start an animation to move it across the screen in front of the shooter. We can add a shadow to the discs by tracking a path diagonally across the screen so that the shadow appears at the correct Y coordinate regardless of the disc’s height – this is a simple way of giving our game the illusion of depth. While we’re in the update() function, looping around our disc object list, we can calculate the distance of the disc to the nearest target sight frame, and from that, work out which is the closest.
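Here’s a rough sketch of how that spawning and animation might look in code – the image name, spawn odds, velocities, and shadow path below are illustrative stand-ins rather than Mark’s actual values:

import random

discs = []  # active clay disc Actors

def update():
    # Roughly one new disc a second at 60fps, launched from the left thrower
    if random.randint(0, 59) == 0:
        disc = Actor('disc', pos=(0, 400))
        disc.vx, disc.vy = 4, -3          # diagonal flight across the screen
        discs.append(disc)
    for disc in discs:
        disc.x += disc.vx
        disc.y += disc.vy
        # The shadow tracks a diagonal ground path, so its Y position stays
        # correct whatever the disc's height - the illusion of depth
        disc.shadow_y = 400 - disc.x * 0.1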

When we’ve calculated which disc is closest to the right-hand sight, we want to move the sight towards the disc so that their paths intersect. All we need to do is take the difference of the Y coordinates, divide by two, and apply that offset to the target sight. We also do the same for the left-hand sight. If the correct key (left or right arrows) is pressed at the moment a disc crosses the path of the sight frame, we register a hit and cycle the disc through a sequence of exploding frames. We can keep a score and display this with an overlay graphic so that the player knows how well they’ve done.
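In code, the tracking step is a single line per frame. A minimal sketch, assuming the sight Actors, the closest discs, and a score variable have already been set up:

def track(sight, closest_disc):
    # Halve the Y difference each frame so the sight eases onto
    # the disc's flight path and the two paths intersect
    sight.y += (closest_disc.y - sight.y) / 2

def on_key_down(key):
    global score
    if key == keys.RIGHT and right_sight.colliderect(closest_right_disc):
        closest_right_disc.status = 'exploding'  # start the explosion frames
        score += 10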

And that’s it! You may want to add multiple players and perhaps a parrot bonus, but we’ll leave that up to you.

Here’s Mark’s code snippet, which creates a simple shooting game in Python. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code and assets, go here.

Get your copy of Wireframe issue 35

You can read more features like this one in Wireframe issue 35, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 35 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code Hyper Sports’ shooting minigame | Wireframe #35 appeared first on Raspberry Pi.

Building a Raspberry Pi telepresence robot using serverless: Part 1

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-a-raspberry-pi-telepresence-robot-using-serverless-part-1/

A Pimoroni STS-Pi Robot Kit connected to AWS for remote control and viewing.

A telepresence robot allows you to explore remote environments from the comfort of your home through live stream video and remote control. These types of robots can improve the lives of the disabled, the elderly, or those who simply cannot be with their coworkers or loved ones in person. Some are used to explore off-world terrain, and others for search and rescue.

This guide walks through building a simple telepresence robot using a Pimoroni STS-Pi Raspberry Pi robot kit. A Raspberry Pi is a small, low-cost device that runs Linux. Add-on modules for Raspberry Pi are called “HATs”. You can substitute this kit with any mobile platform that uses two motors wired to an Adafruit Motor Hat or a Pimoroni Explorer Hat.

The sample serverless application uses AWS Lambda and Amazon API Gateway to create a REST API for driving the robot. A Python application running on the robot uses AWS IoT Core to receive drive commands and authenticate with Amazon Kinesis Video Streams with WebRTC using an IoT Credentials Provider. In the next blog post, I walk through deploying a web frontend to both view the live stream and control the robot via the API.

Prerequisites

You need the following to complete the project:

A Pimoroni STS-Pi robot kit, Explorer Hat, Raspberry Pi, camera, and battery.

Estimated Cost: $120

There are three major parts to this project. First, deploy the serverless backend using the AWS Serverless Application Repository. Then, assemble the robot and run an installer on the Raspberry Pi. Finally, configure and run the Python application on the robot to confirm it can be driven through the API and is streaming video.

Deploy the serverless application

In this section, use the Serverless Application Repository to deploy the backend resources for the robot. The resources to deploy are defined using the AWS Serverless Application Model (SAM), an open-source framework for building serverless applications using AWS CloudFormation. To understand how this application is built in more depth, look at the SAM template in the GitHub repository.

An architecture diagram of the AWS IoT and Amazon Kinesis Video Stream resources of the deployed application.

The Python application that runs on the robot requires permissions to connect as an IoT Thing and subscribe to messages sent to a specific topic on the AWS IoT Core message broker. The following policy is created in the SAM template:

RobotIoTPolicy:
      Type: "AWS::IoT::Policy"
      Properties:
        PolicyName: !Sub "${RobotName}Policy"
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - iot:Connect
                - iot:Subscribe
                - iot:Publish
                - iot:Receive
              Resource:
                - !Sub "arn:aws:iot:*:*:topicfilter/${RobotName}/action"
                - !Sub "arn:aws:iot:*:*:topic/${RobotName}/action"
                - !Sub "arn:aws:iot:*:*:topic/${RobotName}/telemetry"
                - !Sub "arn:aws:iot:*:*:client/${RobotName}"

To transmit video, the Python application runs the amazon-kinesis-video-streams-webrtc-sdk-c sample in a subprocess. Instead of using separate credentials to authenticate with Kinesis Video Streams, a Role Alias policy is created so that IoT credentials can be used.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "iot:Connect",
        "iot:AssumeRoleWithCertificate"
      ],
      "Resource": "arn:aws:iot:Region:AccountID:rolealias/robot-camera-streaming-role-alias",
      "Effect": "Allow"
    }
  ]
}

When the above policy is attached to a certificate associated with an IoT Thing, it can assume the following role:

 KVSCertificateBasedIAMRole:
      Type: 'AWS::IAM::Role'
      Properties:
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: 'Allow'
            Principal:
              Service: 'credentials.iot.amazonaws.com'
            Action: 'sts:AssumeRole'
        Policies:
        - PolicyName: !Sub "KVSIAMPolicy-${AWS::StackName}"
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
            - Effect: Allow
              Action:
                - kinesisvideo:ConnectAsMaster
                - kinesisvideo:GetSignalingChannelEndpoint
                - kinesisvideo:CreateSignalingChannel
                - kinesisvideo:GetIceServerConfig
                - kinesisvideo:DescribeSignalingChannel
              Resource: "arn:aws:kinesisvideo:*:*:channel/${credentials-iot:ThingName}/*"

This role grants access to connect and transmit video over WebRTC using the Kinesis Video Streams signaling channel deployed by the serverless application.

An architecture diagram of the API endpoint in the deployed application.

A deployed API Gateway endpoint, when called with valid JSON, invokes a Lambda function that publishes to an IoT message topic, RobotName/action. The Python application on the robot subscribes to this topic and drives the motors based on any received message that maps to a command.
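The Lambda function behind that endpoint can be very small. The following sketch is illustrative rather than the sample repository’s actual handler, and the ROBOT_NAME environment variable is an assumption:

import json
import os

import boto3

iot_data = boto3.client('iot-data')
ROBOT_NAME = os.environ.get('ROBOT_NAME', 'robot')  # assumed configuration

def handler(event, context):
    body = json.loads(event['body'])
    # Forward the drive command to the topic the robot subscribes to
    iot_data.publish(
        topic=f"{ROBOT_NAME}/action",
        qos=1,
        payload=json.dumps({'action': body['action']}))
    return {'statusCode': 200, 'body': json.dumps({'sent': body['action']})}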

  1. Navigate to the aws-serverless-telepresence-robot application in the Serverless Application Repository.
  2. Choose Deploy.
  3. On the next page, under Application Settings, fill out the parameter, RobotName.
  4. Choose Deploy.
  5. Once complete, choose View CloudFormation Stack.
  6. Select the Outputs tab. Copy the ApiURL and the EndpointURL for use when configuring the robot.

Create and download the AWS IoT device certificate

The robot requires an AWS IoT root CA (fetched by the install script), certificate, and private key to authenticate with AWS IoT Core. The certificate and private key are not created by the serverless application since they can only be downloaded on creation. Create a new certificate and attach the IoT policy and Role Alias policy deployed by the serverless application.

  1. Navigate to the AWS IoT Core console.
  2. Choose Manage, Things.
  3. Choose the Thing that corresponds with the name of the robot.
  4. Under Security, choose Create certificate.
  5. Choose Activate.
  6. Download the Private Key and Thing Certificate. Save these securely, as this is the only time you can download this certificate.
  7. Choose Attach Policy.
  8. Two policies are created and must be attached. From the list, select
    <RobotName>Policy
    AliasPolicy-<AppName>
  9. Choose Done.

Flash an operating system to an SD card

The Raspberry Pi single-board Linux computer uses an SD card as the main file system storage. Raspbian Buster Lite is an officially supported Debian Linux operating system that must be flashed to an SD card. Balena.io has created an application called balenaEtcher for the sole purpose of accomplishing this safely.

  1. Download the latest version of Raspbian Buster Lite.
  2. Download and install balenaEtcher.
  3. Insert the SD card into your computer and run balenaEtcher.
  4. Choose the Raspbian image. Choose Flash to burn the image to the SD card.
  5. When flashing is complete, balenaEtcher dismounts the SD card.

Configure Wi-Fi and SSH headless

Typically, a keyboard and monitor are used to configure Wi-Fi or to access the command line on a Raspberry Pi. Since it is on a mobile platform, configure the Raspberry Pi to connect to a Wi-Fi network and enable remote access headless by adding configuration files to the SD card.

  1. Re-insert the SD card into your computer so that it mounts as a volume named boot.
  2. Create a file in the boot volume of the SD card named wpa_supplicant.conf.
  3. Paste in the following contents, substituting your Wi-Fi credentials.
    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1
    country=<Insert country code here>

    network={
        ssid="<Name of your WiFi>"
        psk="<Password for your WiFi>"
    }

  4. Create an empty file named ssh, with no file extension, in the boot volume. At boot, the Raspbian operating system looks for this file and enables remote access if it exists. This can be done from a command line:
    cd path/to/volume/boot
    touch ssh

  5. Safely eject the SD card from your computer.

Assemble the robot

For this section, you can use the Pimoroni STS-Pi robot kit with a Pimoroni Explorer Hat, along with a Raspberry Pi Model 3 B+ or newer, and a camera module. Alternatively, you can use any two motor robot platform that uses the Explorer Hat or Adafruit Motor Hat.

  1. Follow the instructions in this video to assemble the Pimoroni STS-Pi robot kit.
  2. Place the SD card in the Raspberry Pi.
  3. Since the installation may take some time, power the Raspberry Pi using a USB 5V power supply connected to a wall plug rather than a battery.

Connect remotely using SSH

Use your computer to gain remote command-line access to the Raspberry Pi using SSH. Both devices must be on the same network.

  1. Open a terminal application with SSH installed. SSH is already built into Linux and macOS; to enable it on Windows, follow these instructions.
  2. Enter the following to begin a secure shell session as user pi on the default local hostname raspberrypi, which resolves to the IP address of the device using mDNS:
    ssh pi@raspberrypi.local
  3. If prompted to add an SSH key to the list of known hosts, type yes.
  4. When prompted for a password, type raspberry. This is the default password and can be changed using the raspi-config utility.
  5. Upon successful login, you now have shell access to your Raspberry Pi device.

Enable the camera using raspi-config

A built-in utility, raspi-config, provides an easy-to-use interface for configuring Raspbian. You must enable the camera module, along with I2C, a serial bus used for communicating with the motor driver.

  1. In an open SSH session, type the following to open the raspi-config utility:
    sudo raspi-config

  2. Using the arrows, choose Interfacing Options.
  3. Choose Camera. When prompted, choose Yes to enable the camera module.
  4. Repeat the process to enable the I2C interface.
  5. Select Finish and reboot.

Run the install script

An installer script is provided for building and installing the Kinesis Video Stream WebRTC producer, AWSIoTPythonSDK and Pimoroni Explorer Hat Python libraries. Upon completion, it creates a directory with the following structure:

├── /home/pi/Projects/robot
│  └── main.py // The main Python application
│  └── config.json // Parameters used by main.py
│  └── kvsWebrtcClientMasterGstSample //Kinesis Video Stream producer
│  └── /certs
│     └── cacert.pem // Amazon SFSRootCAG2 Certificate Authority
│     └── certificate.pem // AWS IoT certificate placeholder
│     └── private.pem.key // AWS IoT private key placeholder
  1. Open an SSH session on the Raspberry Pi.
  2. (Optional) If using the Adafruit Motor Hat, run this command, otherwise the script defaults to the Pimoroni Explorer Hat.
    export MOTOR_DRIVER=adafruit  

  3. Run the following command to fetch and execute the installer script.
    wget -O - https://raw.githubusercontent.com/aws-samples/aws-serverless-telepresence-robot/master/scripts/install.sh | bash

  4. While the script installs, proceed to the next section.

Configure the code

The Python application on the robot subscribes to AWS IoT Core to receive messages. It requires the certificate and private key created for the IoT thing to authenticate. These files must be copied to the directory where the Python application is stored on the Raspberry Pi.

It also requires that the IoT Credentials endpoint be added to the file config.json, so the robot can assume the permissions necessary to transmit video to Amazon Kinesis Video Streams.

  1. Open an SSH session on the Raspberry Pi.
  2. Open the certificate.pem file with the nano text editor and paste in the contents of the certificate downloaded earlier.
    cd /home/pi/Projects/robot/certs
    nano certificate.pem

  3. Press CTRL+X and then Y to save the file.
  4. Repeat the process with the private.pem.key file.
    nano private.pem.key

  5. Open the config.json file.
    cd /home/pi/Projects/robot
    nano config.json

  6. Provide the following information:
    IOT_THINGNAME: The name of your robot, as set in the serverless application.
    IOT_CORE_ENDPOINT: This is found under the Settings page in the AWS IoT Core console.
    IOT_GET_CREDENTIAL_ENDPOINT: Provided by the serverless application.
    ROLE_ALIAS: This is already set to match the Role Alias deployed by the serverless application.
    AWS_DEFAULT_REGION: Corresponds to the Region the application is deployed in.
  7. Save the file using CTRL+X and Y.
  8. To start the robot, run the command:
    python3 main.py

  9. To stop the script, press CTRL+C.

View the Kinesis video stream

The following steps create a WebRTC connection with the robot to view the live stream.

  1. Navigate to the Amazon Kinesis Video Streams console.
  2. Choose Signaling channels from the left menu.
  3. Choose the channel that corresponds with the name of your robot.
  4. Open the Media Playback card.
  5. After a moment, a WebRTC peer-to-peer connection is negotiated and live video is displayed.
    An animated gif demonstrating a live video stream from the robot.

Sending drive commands

The serverless backend includes an Amazon API Gateway REST endpoint that publishes JSON messages to the Python script on the robot.

The robot expects a JSON message of the form:

{ "action": "<direction>" }

Where <direction> can be "forward", "backwards", "left", or "right".

  1. While the Python script is running on the robot, open another terminal window.
  2. Run this command to tell the robot to drive forward. Replace <API-URL> using the endpoint listed under Outputs in the CloudFormation stack for the serverless application.
    curl -d '{"action":"forward"}' -H "Content-Type: application/json" -X POST https://<API-URL>/publish

    An animated gif demonstrating the robot being driven from a REST request.

Conclusion

In this post, I show how to build and program a telepresence robot with remote control and a live video feed in the cloud. I did this by installing a Python application on a Raspberry Pi robot and deploying a serverless application.

The Python application uses AWS IoT credentials to receive remote commands from the cloud and transmit live video using Kinesis Video Streams with WebRTC. The serverless application deploys a REST endpoint using API Gateway and a Lambda function. Any application that can connect to the endpoint can drive the robot.

In part two, I build on this project by deploying a web interface for the robot using AWS Amplify.

A preview of the web frontend built in the next blog.


Recreate Flappy Bird’s flight mechanic | Wireframe #29

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/recreate-flappy-birds-flight-mechanic-wireframe-29/

From last year’s issue 29 of Wireframe magazine: learn how to create your own version of the simple yet addictive side-scroller Flappy Bird. Raspberry Pi’s Rik Cross shows you how.

Flappy Bird: ridiculously big in 2014, at least for a while.

Flappy Bird was released by programmer Dong Nguyen in 2013, and made use of a straightforward game mechanic to create an addictive hit. Tapping the screen provided ‘lift’ to the main character, which is used strategically to navigate through a series of moving pipes. A point is scored for each pipe successfully passed. The idea proved so addictive that Nguyen eventually regretted his creation and removed it from the Google and Apple app stores. In this article, I’ll show you how to recreate this simple yet time-consuming game, using Python and Pygame Zero.

The player’s motion is very similar to that employed in a standard platformer: falling down towards the bottom of the screen under gravity. See the article, Super Mario-style jumping physics in Wireframe #7 for more on creating this type of movement. Pressing a button (in our case, the SPACE bar) gives the player some upward thrust by setting its velocity to a negative value (i.e. upwards) larger than the value of gravity acting downwards. I’ve adapted and used two different images for the sprite (made by Imaginary Perception and available on opengameart.org), so that it looks like it’s flapping its wings to generate lift and move upwards.
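The core of that mechanic fits in a few lines of Pygame Zero. This sketch isn’t Rik’s actual code – the constants, start position, and image names are placeholders to tune:

GRAVITY = 0.3
FLAP_VELOCITY = -7   # upward kick, larger in magnitude than one frame of gravity

player = Actor('bird_wings_down', pos=(75, 200))
player.velocity = 0

def update():
    player.velocity += GRAVITY     # accelerate downwards every frame
    player.y += player.velocity

def on_key_down(key):
    if key == keys.SPACE:
        player.velocity = FLAP_VELOCITY
        player.image = 'bird_wings_up'   # wing-up frame while flapping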

Pressing the SPACE bar gives the bird ‘lift’ against gravity, allowing it to navigate through moving pipes.

Pairs of pipes are spaced equally apart horizontally, and move slowly towards the player each frame of the game. These pipes are stored as two lists of rectangles, top_pipes and bottom_pipes, so that the player can attempt to fly through gaps between the top and bottom pipes. Once a pipe in the top_pipes list moves past the player’s position towards the left side of the screen, the score is incremented and the top and corresponding bottom pipes are removed from their respective lists. A new pair of pipes is created at the right edge of the screen, creating a continuous challenge for the player. The y-position of the gap between each newly created pair of pipes is decided randomly (between minimum and maximum limits), and is used to calculate the position and height of the new pipes.
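A sketch of that pipe bookkeeping follows – the pipe width, gap size, and speed are placeholder values, and it assumes the usual Pygame Zero WIDTH and HEIGHT are defined:

import random

PIPE_SPEED = 2
PIPE_GAP = 150
PIPE_WIDTH = 60
top_pipes, bottom_pipes = [], []
score = 0

def update_pipes():
    global score
    for pipe in top_pipes + bottom_pipes:
        pipe.x -= PIPE_SPEED                  # scroll towards the player
    if top_pipes and top_pipes[0].right < player.x:
        top_pipes.pop(0)                      # pipe passed: drop the pair,
        bottom_pipes.pop(0)
        score += 1                            # score, and queue a new pair
        gap_y = random.randint(100, HEIGHT - 100)
        top_pipes.append(Rect(WIDTH, 0, PIPE_WIDTH, gap_y - PIPE_GAP // 2))
        bottom_pipes.append(
            Rect(WIDTH, gap_y + PIPE_GAP // 2, PIPE_WIDTH, HEIGHT))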

The game stops and a ‘Game over’ message appears if the player collides with either a pipe or the ground. The collision detection in the game uses the player.colliderect() method, which checks whether two rectangles overlap. As the player sprite isn’t exactly rectangular, it means that the collision detection isn’t pixel-perfect, and improvements could be made by using a different approach. Changing the values for GRAVITY, PIPE_GAP, PIPE_SPEED, and player.flap_velocity through a process of trial and error will result in a game that has just the right amount of frustration! You could even change these values as the player’s score increases, to add another layer of challenge.

Here’s Rik’s code, which gets an homage to Flappy Bird running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

If you’d like to read older issues of Wireframe magazine, you can find the complete back catalogue as free PDF downloads.

The latest issue of Wireframe is available in print to buy online from the Raspberry Pi Press store, with older physical issues heavily discounted too. You can also find Wireframe at local newsagents, but we should all be staying home as much as possible right now, so why not get your copy online and save yourself the trip?

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. And subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate Flappy Bird’s flight mechanic | Wireframe #29 appeared first on Raspberry Pi.

Code a homage to Marble Madness | Wireframe #34

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-a-homage-to-marble-madness-wireframe-34/

Code the map and movement basics of the innovative marble-rolling arcade game. Mark Vanstone shows you how.

The original Marble Madness

Each of Marble Madness’ six levels got progressively harder to navigate and had to be completed within a time limit.

Marble Madness

Hitting arcades in 1984, Atari’s Marble Madness presented a rather different control mechanism than other games of the time. The original arcade cabinet provided players with a trackball controller rather than a conventional joystick, and the aim was to guide a marble through a three-dimensional course in the fastest possible time. This meant that a player could change the angle and speed of the marble as it rolled and avoid various obstacles and baddies.

During development, designer Mark Cerny had to shelve numerous ideas for Marble Madness, since the hardware just wasn’t able to achieve the level of detail and interaction he wanted. The groundbreaking 3D display was one idea that made it through to the finished game: its pre-rendered, ray-traced isometric levels.

Marble Madness was the first game to use Atari’s System 1 upgradeable hardware platform, and also boasted the first use of an FM sound chip produced by Yamaha to create its distinctive stereo music. The game was popular in arcades to start with, but interest appeared to drop off after a few months – something Cerny attributed to the fact that the game didn’t take long to play. Marble Madness’s popularity endured in the home market, though, with ports made for most computers and consoles of the time – although inevitably, most of these didn’t support the original’s trackball controls.

Our Python version of Marble Madness

In our sample level, you can control the movement of the marble using the left and right arrow keys.

Making our game

For our version of Marble Madness, we’re going to use a combination of a rendered background and a heightmap in Pygame Zero, and write some simple physics code to simulate the marble rolling over the terrain’s flats and slopes. We can produce the background graphic using a 3D modelling program such as Blender. The camera needs to be set to Orthographic to get the forced perspective look we’re after. The angle of the camera is also important, in that we need an X rotation of 54.7 degrees and a Y rotation of 45 degrees to get the lines of the terrain correct. The heightmap can be derived from an overhead view of the terrain, but you’ll probably want to draw the heights of the blocks in a drawing package such as GIMP to give you precise colour values on the map.

The ball rolling physics are calculated from the grey-shaded heightmap graphic. We’ve left a debug mode in the code; by changing the debug variable to True, you can see how the marble moves over the terrain from the overhead viewpoint of the heightmap. The player can move the marble left and right with the arrow keys – on a level surface it will gradually slow down if no keys are pressed. If the marble is on a gradient on the heightmap, it will increase speed in the direction of the gradient. If the marble hits a section of black on the heightmap, it falls out of play, and we stop the game.

That takes care of the movement of the marble in two dimensions, but now we have to translate this to the rendered background’s terrain. The way we do this is to translate the Y coordinate of the marble as if the landscape was all at the same level – we multiply it by 0.6 – and then move it down the screen according to the heightmap data, which in this case moves the marble down 1.25 pixels for each shade of colour. We can use an overlay for items the marble always rolls behind, such as the finish flag. And with that, we have the basics of a Marble Madness level.
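Expressed as code, the translation is only a couple of lines. A sketch, assuming the heightmap is a pygame Surface – note that the sign of the height offset depends on how the map encodes height:

def to_screen(map_x, map_y, heightmap):
    # Sample the grey shade (0-255) under the marble on the heightmap
    shade = heightmap.get_at((int(map_x), int(map_y)))[0]
    # Flatten the overhead Y by 0.6, then shift the marble down the screen
    # by 1.25 pixels per shade step of terrain height
    return map_x, map_y * 0.6 + shade * 1.25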

The code you'll need to make Marble Madness

Here’s Mark’s code snippet, which creates a Marble Madness level in Python. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code, go here.

Module Madness

We use the image module from Pygame to sample the colour of the pixel directly under the marble on the heightmap. We also take samples from the left diagonal and the right diagonal to see if there is a change of height. We are only checking for left and right movement, but this sample could be expanded to deal with the two other directions and moving up the gradients, too. Other obstacles and enemies can be added using the same heightmap translations used for the marble, and other overlay objects can be added to the overlay graphic.
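A sketch of that sampling, with illustrative pixel offsets for the two diagonals:

import pygame

heightmap = pygame.image.load('heightmap.png')  # grey-shaded overhead map

def sample_gradient(x, y):
    # Compare the shade under the marble with the shades on the left and
    # right diagonals; a difference means the marble sits on a slope
    here = heightmap.get_at((int(x), int(y)))[0]
    down_left = heightmap.get_at((int(x) - 4, int(y) + 4))[0]
    down_right = heightmap.get_at((int(x) + 4, int(y) + 4))[0]
    return down_left - here, down_right - here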

Get your copy of Wireframe issue 34

You can read more features like this one in Wireframe issue 34, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 34 for free in PDF format.

Wireframe #34

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code a homage to Marble Madness | Wireframe #34 appeared first on Raspberry Pi.

Code a Zaxxon-style axonometric level | Wireframe #33

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-a-zaxxon-style-axonometric-level-wireframe-33/

Fly through the space fortress in this 3D retro forced-scrolling arcade sample. Mark Vanstone has the details.

A shot from Sega's arcade hit, Zaxxon

Zaxxon was the first arcade game to use an axonometric viewpoint, which made it look very different from its 2D rivals.

Zaxxon

When Zaxxon was first released by Sega in 1982, it was hailed as a breakthrough thanks to its pseudo-3D graphics. This axonometric projection ensured that Zaxxon looked unlike any other shooter around in arcades.

Graphics aside, Zaxxon offered a subtly different twist on other shooting games of the time, like Defender and Scramble; the player flew over either open space or a huge fortress, where they had to avoid obstacles of varying heights. Players could tell how high they were flying with the aid of an altimeter, and also the shadow beneath their ship (shadows were another of Zaxxon’s innovations). The aim of the game was to get to the end of each level without running out of fuel or getting shot down; if the player did this, they’d encounter an area boss called Zaxxon. Points were awarded for destroying gun turrets and fuel silos, and extra lives could be gained as the player progressed through the levels.

A shot of our Pygame version of Zaxxon

Our Zaxxon homage running in Pygame Zero: fly the spaceship through the fortress walls and obstacles with your cursor keys.

Making our level

For this code sample, we can borrow some of the techniques used in a previous Source Code article about Ant Attack (see Wireframe issue 15) since it also used an isometric display. Although the way the map display is built up is very similar, we’ll use a JSON file to store the map data. If you’ve not come across JSON before, it’s well worth learning about, as a number of web and mobile apps use it, and it can be read by Python very easily. All we need to do is load the JSON file, and Python automatically puts the data into a Python dictionary object for us to use.
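Reading the map really is that short. A minimal sketch – the map key name here is an assumption, though blocktypes is the one used in the sample:

import json

with open('map.json') as f:
    level = json.load(f)          # the JSON becomes a plain Python dictionary

blocktypes = level['blocktypes']  # block definitions, indexed 0, 1, 2...
map_data = level['map']           # the grid of indices into blocktypes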

In the sample, there’s a short run of map data 40 squares long with blocks for the floor, some low walls, higher walls, and a handful of fuel silos. To add more block types, just add data to the blocktypes area of the JSON file. The codes used in the map data are the index numbers of the blocktypes, so the first block type is index 0, the next is index 1, and so on. Our drawMap() function takes care of rendering the data into visual form and blits blocks from the top right to the bottom left of the screen. When the draw loop gets to where the ship is, it draws first the shadow and then the ship a little higher up the screen, depending on the altitude of the ship. The equation to translate the ship’s screen coordinates to a block position on the map is a bit simplistic, but in this case, it does the job well enough.

Cursor keys guide the movement of the spaceship, which is limited by the width of the map and a height of 85 pixels. There’s some extra code to display the ship if it isn’t on the map – for example, at the start, before it reaches the map area. To make the code snippet into a true Zaxxon clone, you’ll have to add some laser fire and explosions, a fuel gauge, and a scoring system, but this code sample should provide the basis you’ll need to get started.

Code for our Zaxxon homage

Here’s Mark’s code snippet, which creates a Zaxxon-style level in Python. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 33

You can read more features like this one in Wireframe issue 33, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 33 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code a Zaxxon-style axonometric level | Wireframe #33 appeared first on Raspberry Pi.

Generating REST APIs from data classes in Python

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/generating-rest-apis-from-data-classes-in-python/

This post is courtesy of Robert Enyedi – Senior Research Engineer – AI Labs

Implementing and managing public APIs is greatly simplified by API Gateway. Among the various features of API Gateway, the ability to import API definitions in the Open API format is powerful.

In this post, I show how you can automatically generate REST APIs directly from Python data classes. This method includes a highly automated workflow for exposing Python services as public APIs using the API Gateway. Recent changes in the Python language open the door for full automation of API publishing directly from code.

Open API and API Gateway

The Open API specification is a popular mechanism to declare the structure of REST APIs. It’s language-independent and allows you to determine API operations and their data types. Previously called Swagger, it is a standardization effort with benefits for the service developer and service consumer. It reduces repetitive tasks, increases API quality, and removes the guesswork from calling a service.

Examples shown here use data classes, which are supported in Python 3.7 or higher. There are backports of data classes to Python 3.6 available but they are beyond the scope of this post.

Python standard type annotations

The type hints syntax, defined in PEP 484 and first implemented in Python 3.5, allows the declaration of a type for identifiers. This includes function and method parameters, return types, and class fields; PEP 526, implemented in Python 3.6, extended the syntax to variable annotations. Type hints improve the readability of the code and provide useful information for tools. This allows your IDE to be more effective at auto-completion, semantic error detection, and refactoring.
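For example, annotations can appear on parameters, return values, and variables:

from typing import List

def mean(values: List[float]) -> float:
    total: float = sum(values)    # a variable annotation (PEP 526 syntax)
    return total / len(values)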

Code checkers such as Mypy can better catch problems at build time. These are the typical advantages of statically typed languages. With Python, because type annotations are optional and a recent addition to the language, not all the project’s dependencies have types. That’s why tooling is less accurate in detecting all error conditions.

Python data classes

Data classes are an even more recent addition to the language. Described in PEP 557 and introduced in Python 3.7, they allow a simplified declaration of class data structures useful for storing state. Combined with type hints, they let you use the @dataclass decorator:

from dataclasses import dataclass

@dataclass
class Person:
  name: str
  age: int

Then the Python implementation can generate:

  1. The constructor:
    Person("Joe", 12)
  2. Comparator methods to allow operations such as:
    Person(name="Joe", age=12) == Person(name="Joe", age=12)
  3. The __repr__() implementation to pretty-print the object:
    Person(name='Joe', age=12)

Building an API using data classes

Data classes containing fields with type hints lend themselves to automation of API definitions. This solution uses data classes to generate Open API service definitions with AWS extensions and to create API Gateway configurations.

Similar solutions exist for strictly typed languages like Java, C# or Scala. In Python, this level of automation was not available until version 3.7. This code uses the Dataclasses JSON library to automate the serialization of data classes.

1. Start with the entity definition, in this case a person:

from dataclasses import dataclass
from dataclasses_json import dataclass_json

@dataclass
@dataclass_json
class Person:
  name: str
  age: int

2. Create one class for the request and another for the response to help payload serialization:

@dataclass
@dataclass_json
class CreatePersonRequest:
  person: Person

@dataclass
@dataclass_json
class CreatePersonResponse:
  person_id: int

3. Next, implement the route handler (this example uses the Flask Web framework):

import logging
from flask import Flask, request

app = Flask(__name__)
OPERATION_CREATE_PERSON: str = 'create-person'

@app.route(f'/{OPERATION_CREATE_PERSON}', methods=['POST'])
def create_person():
    payload = request.get_json()
    logging.info(f"Incoming payload for {OPERATION_CREATE_PERSON}: {payload}")
    person = CreatePersonRequest.from_dict(payload)  # from_dict() takes the parsed dict

The payload is deserialized transparently using the schema derived from the data class definition of Person.

4. To generate a corresponding API definition, enter:

from apispec import APISpec

# spec needs to be an apispec APISpec instance (a plain dict has no
# to_dict() method); the title and version values here are placeholders
spec = APISpec(title='person-api', version='1.0.0', openapi_version='3.0.2')

generate_operation(path=OPERATION_CREATE_PERSON,
                   request_schema=CreatePersonRequest.schema(),
                   request_schema_name=CreatePersonRequest.__name__,
                   response_schema=CreatePersonResponse.schema(),
                   response_schema_name=CreatePersonResponse.__name__,
                   spec=spec)

spec_dict = spec.to_dict()

The implementation of generate_operation() makes use of the apispec library to programmatically construct the Open API definition.

With spec_dict containing the Open API specification, it’s used to either create or update the API definition. You can also run any Open API tools on this definition, such as SDK generators, mock servers, or documentation generators. There’s a comprehensive catalog of tools maintained at https://openapi.tools/.

As a sensible default, the code generates API operations guarded by API keys supplied with the x-api-key header:

"securitySchemes": {
      "api_key": {
        "type": "apiKey",
        "name": "x-api-key",
        "in": "header"
      }
    }

The spec uses API Gateway extensions to include implementation-specific metadata. The most important is the one linking the API definition to the ECS backend:

"x-amazon-apigateway-integration": {
          "passthroughBehavior": "when_no_match",
          "type": "http_proxy",
          "httpMethod": "POST",
          "uri": "http://myecshost-1234567890.us-east-1.elb.amazonaws.com/create-person"
        }

You can use a similar pattern to connect the gateway to a different service, such as AWS Lambda:

"x-amazon-apigateway-integration": {
          "uri": "arn:aws:apigateway:...:lambda:path/.../functions/arn:aws:lambda:...:...:function:yourLambdaFunction/invocations",
          "responses": {
            "default": {
              "statusCode": "200"
            }
          },
          "passthroughBehavior": "when_no_match",
          "httpMethod": "POST",
          "contentHandling": "CONVERT_TO_TEXT",
          "type": "aws"
        }

For more information on the API Gateway extension to Open API, visit the AWS documentation.

Generating the API using API Gateway

This example uses the boto3 API Gateway API to expose a public API.

1. To create the API, enter the following:

api_definition = json.dumps(spec_dict, indent=2)
api_gateway_client.import_rest_api( body=api_definition )

2. To update the API, merge the changes into a manually modified API definition (mode='merge'), or completely overwrite the API (mode='overwrite'). It is often safer to merge the API, as follows:

api_gateway_client.put_rest_api(body=api_definition, mode='merge', restApiId=find_api_id(api_gateway_client, api_name))

The find_api_id() helper function looks up the API ID based on its name.
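The helper itself isn’t shown in this post. A plausible sketch using the API Gateway paginator, assuming API names are unique:

def find_api_id(api_gateway_client, api_name):
    paginator = api_gateway_client.get_paginator('get_rest_apis')
    for page in paginator.paginate():
        for api in page['items']:
            if api['name'] == api_name:
                return api['id']
    raise ValueError(f'No API named {api_name}')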

3. Check the API Gateway dashboard in the AWS Management Console for the new API definition. It shows the API and its resources:

API Gateway dashboard

Now you are ready to issue a test call to the external API to validate its security and functionality. The Open API definition of a manually created or modified API can be exported by various means, including from the stage editor.

Validate the API

The correct way to call the API is shown in test_get_dubbing_job_status_API() from test/ondemand_test_call_service.py:

response = _send_request(secure=True,
                         host='<<yourapi>>.execute-api.us-east-1.amazonaws.com',
                         service_port=80,
                         path='sample-generated-api',
                         operation=OPERATION_CREATE_PERSON,
                         request=CreatePersonRequest(Person(name='Jane Doe', age=40)),
                         api_key='<<yourapikey>>')

response_obj = CreatePersonResponse.from_json(response)

assert response_obj.person_id is not None

If you call the API without the api_key parameter, it returns an HTTP 403 code and the error message:

{"message":"Forbidden"}

Conclusion

This post shows how to automatically expose Python services as public APIs directly from the code. With the introduction of Python data classes, it is easy to automate JSON serialization.

Now you can fully automate the API generation and deployment tasks for API Gateway. Introducing a new entity is trivial, and adding a new field to your API requires only writing its definition. You can develop a fully functional API based upon these building blocks.

Learn more from this sample repository, and adapt the code for your projects to achieve a high level of automation for your public APIs.


Code a Kung-Fu Master style beat-’em-up | Wireframe #32

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-a-kung-fu-master-style-beat-em-up-wireframe-32/

Punch and kick your way through a rabble of bad dudes in a simple scrolling beat-’em-up. Mark Vanstone shows you how.

Although released to tie in with Jackie Chan’s Spartan X, Kung-Fu Master was originally inspired by the Bruce Lee film, Game of Death.

Kung-Fu Master

Kung-Fu Master hit arcades in 1984. Its side-scrolling action, punching and kicking through an army of knife-throwing goons, helped create the beat-’em-up genre. In fact, its designer, Takashi Nishiyama, would go on to kickstart the Street Fighter series at Capcom, and later start up the Fatal Fury franchise at SNK.

In true eighties arcade style, Kung-Fu Master distils the elements of a chop-socky action film to its essentials. Hero Thomas and his girlfriend are attacked, she’s kidnapped, and Thomas fights his way through successive levels of bad guys to rescue her. The screen scrolls from side to side, and Thomas must use his kicks and punches to get from one side of the level to the other and climb the stairs to the next floor of the building.

Our Kung-Fu Master homage features punches, kicks, and a host of goons to use them on.

Making our brawler

To recreate this classic with Pygame Zero, we’ll need quite a few frames of animation, both for the hero character and the enemies he’ll battle. For a reasonable walk cycle, we’ll need at least six frames in each direction. Any fewer than six won’t look convincing, but more frames can achieve a smoother effect. For this example, I’ve used the 3D package Poser, since it has a handy walk designer which makes generating sequences of animation much easier.

Once we have the animation frames for our characters, including a punch, kick, and any others you want to add, we need a background for the characters to walk along. The image we’re using is 2000×400 pixels, and we start the game by displaying the central part so our hero can walk either way. By detecting arrow key presses, the hero can ‘walk’ one way or the other by moving the background left and right, while cycling through the walk animation frames. Then if we detect a Q key press, we change the action string to kick; if it’s A, it’s punch. Then in our update() function, we use that action to set the Actor’s image to the indicated action frame.
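In Pygame Zero terms, that input handling might look like the sketch below – the frame names, scroll speed, and six-frame cycle are placeholders rather than Mark’s actual code:

SCROLL_SPEED = 4
hero = Actor('hero_stand', pos=(400, 300))   # placeholder image and position
state = {'bg_x': 0, 'frame': 0}

def update():
    if keyboard.left or keyboard.right:
        step = SCROLL_SPEED if keyboard.left else -SCROLL_SPEED
        state['bg_x'] += step                   # move the background, not the hero
        state['frame'] = (state['frame'] + 1) % 6
        hero.image = f"walk_{state['frame']}"   # cycle the walk frames
    if keyboard.q:
        hero.image = 'hero_kick'                # Q = kick
    elif keyboard.a:
        hero.image = 'hero_punch'               # A = punch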

Our enemy Actors will constantly walk towards the centre of the screen, and we can cycle through their walking frames the same way we do with the main hero. To give kicks and punches an effect, we put in collision checks. If the hero strikes while an enemy collides with him, we register a hit. This could be made more precise to require more skill, but once a strike’s registered, we can switch the enemy to a different status that will cause them to fall downwards and off the screen.

This sample is a starting point to demonstrate the basics of the beat-’em-up genre. With the addition of flying daggers, several levels, and a variety of bad guys, you’ll be on your way to creating a Pygame Zero version of this classic game.

The generation game

Because we’re moving the background when our hero walks left and right, we need to make sure we move our enemies with the background; otherwise, they’ll look as though they’re sliding in mid-air – this also applies to any other objects that aren’t part of the background. The number of enemies can be governed in several ways: in our code, we just have a random number deciding if a new enemy will appear during each update, but we could use a predefined pattern to make the generation a bit less random, or use a combination of patterns and random numbers, as in the sketch below.
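A sketch of that spawning and background coupling – the odds, positions, and walking speed are arbitrary, and it assumes WIDTH is defined as usual:

import random

enemies = []

def update_enemies(scroll_dx):
    # A small random chance of a new goon appearing each update
    if random.random() < 0.02:
        side = random.choice([-50, WIDTH + 50])
        enemies.append(Actor('enemy_walk_0', pos=(side, 300)))
    for enemy in enemies:
        enemy.x += scroll_dx                          # stay pinned to the background
        enemy.x += 2 if enemy.x < WIDTH // 2 else -2  # close in on the hero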

Here’s Mark’s code snippet, which creates a side-scrolling beat-’em-up in Python. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 32

You can read more features like this one in Wireframe issue 32, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 32 for free in PDF format.

Look how lovely and glowy it is.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code a Kung-Fu Master style beat-’em-up | Wireframe #32 appeared first on Raspberry Pi.

Make a Spy Hunter-style scrolling road | Wireframe #31

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/make-a-spy-hunter-style-scrolling-road-wireframe-31/

Raspberry Pi’s own Mac Bowley shows you how to make the beginnings of a top-down driving game inspired by 1983’s Spy Hunter.

Spy Hunter, an arcade game from 1983

Spy Hunter was one of the very first games with both driving and shooting.

Spy Hunter

The 1983 arcade classic Spy Hunter put players at the wheel of a fictitious Interceptor vehicle and challenged them to navigate a vertically scrolling road, destroying enemy vehicles.

Here, I’ll show you how you can recreate the game’s scrolling road to use in your own driving games. The road is created using the Rect class from Pygame, built up from stacked rectangles that are each two pixels high.

Making the scrolling road in Python

First, I create two lists: one to hold the pieces of road currently being drawn on screen, and another to hold a queue of pieces that will be added as the road scrolls. To create the scrolling road effect, each of the current pieces of road needs to move down the screen, while a new piece is added to the end of the list at position y = 0.

Pygame can schedule functions, which can then be called at set intervals – meaning I can scroll my road at a set frame rate. The scroll_road function will achieve this. First, I loop over each road piece, and move it down by two pixels. I then remove the first item in the queue list and append it to the end of the road. The Pygame clock is then set to call the function at intervals set by a frame_rate variable: mine is set to 1/60, meaning 60 frames per second.
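Here’s a minimal sketch of that scrolling step – the piece lists hold Rects, and the names are illustrative:

ROAD_STEP = 2        # each road piece is two pixels high
frame_rate = 1 / 60  # seconds between scroll steps

road = []    # pieces currently on screen
queue = []   # pieces waiting to enter at the top

def scroll_road():
    for piece in road:
        piece.y += ROAD_STEP       # slide every visible piece down
    if queue:
        new_piece = queue.pop(0)   # take the next queued piece...
        new_piece.y = 0            # ...and add it at the top of the screen
        road.append(new_piece)

clock.schedule_interval(scroll_road, frame_rate)   # 60 steps per second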

Our top-down rolling road in Python

Our code snippet provides a solid basis for your own top-down driving game. All you need now are weapons. And a few other cars.

My road can either turn left or right, a random choice made whenever the queue is populated. Whichever way the road turns, it has to start from the same spot as the last piece in my queue. I can grab the last item in a list using -1 as an index and then store the x position; building from here will make sure my road is continuous. I use a buffer of 50 pixels to keep the road from moving off the edge of my screen – each time a turn is made, I check that the road doesn’t go beyond this point.

I want the turn amount to be random, so I’m also setting a minimum turn of 200 pixels. If this amount takes my car closer than the buffer, I’ll instead set the turn amount so that it takes it up to the buffer but no further. I do this for both directions, as well as setting a modifier to apply to my turn amount (-1 to turn left and 1 to turn right), which will save me duplicating my code. I also want to randomly choose how many pieces will be involved in my turn. Each piece is a step in the scroll, so the more pieces, the longer my turn will take. This will make sure I have a good mix of sharp and elongated turns in my road, keeping the player engaged.
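That clamping reduces to a few comparisons. A sketch using the 50-pixel buffer and 200-pixel minimum turn from the text (the upper bound on the turn is arbitrary, and WIDTH is assumed to be defined):

import random

BUFFER = 50
MIN_TURN = 200

def choose_turn(last_x):
    modifier = random.choice([-1, 1])        # -1 turns left, 1 turns right
    amount = random.randint(MIN_TURN, MIN_TURN * 2)
    if modifier == -1 and last_x - amount < BUFFER:
        amount = last_x - BUFFER             # stop at the left buffer
    elif modifier == 1 and last_x + amount > WIDTH - BUFFER:
        amount = WIDTH - BUFFER - last_x     # stop at the right buffer
    return modifier * amount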

Our rolling road Python code

Here’s Mac’s code snippet, which creates a winding road worthy of Spy Hunter in Python. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code, go here.

Speeding up the game

To make things more exciting, the game can also be sped up by decreasing the frame_rate variable. You could even decrease it gradually over time, making the game feel more frantic the further you get.

Another improvement would be to make the turns more curvy, but make sure you’re comfortable with algebra before you do this!

Get your copy of Wireframe issue 31

You can read more features like this one in Wireframe issue 31, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 31 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Make a Spy Hunter-style scrolling road | Wireframe #31 appeared first on Raspberry Pi.

Code a Boulder Dash mining game | Wireframe #30

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-a-boulder-dash-mining-game-wireframe-30/

Learn how to code a simple Boulder Dash homage in Python and Pygame. Mark Vanstone shows you how. 

The original Boulder Dash was marked out by some devious level design, which threatened to squash the player at every turn.

Boulder Dash

Boulder Dash first appeared in 1984 for the Commodore 64, Apple II, and the Atari 400/800. It featured an energetic gem collector called Rockford who, thanks to some rather low-resolution graphics, looked a bit like an alien. His mission was to tunnel his way through a series of caves to find gems while avoiding falling rocks dislodged by his digging. Deadly creatures also inhabited the caves which, if destroyed by dropping rocks on them, turned into gems for Rockford to collect.

The ingenious level designs were what made Boulder Dash so addictive. Gems had to be collected within a time limit to unlock the exit, but some were positioned in places that would need planning to get to, often using the physics of falling boulders to block or clear areas. Of course, the puzzles got increasingly tough as the levels progressed.

Written by Peter Liepa and Chris Gray, Boulder Dash was published by First Star Software, which still puts out new versions of the game to this day. Due to its original success, Boulder Dash was ported to all kinds of platforms, and the years since have seen no fewer than 20 new iterations of Boulder Dash, and a fair few clones, too.

Our homage to Boulder Dash running in Pygame Zero. Dig through the caves to find gems – while avoiding death from above.

Making Boulder Dash in Python

We’re going to have a look at the boulder physics aspect of the game, and make a simple level where Rockford can dig out some gems and hopefully not get flattened under an avalanche of rocks. Writing our code in Pygame Zero, we’ll automatically create an 800 by 600-size window to work with. We can make our game screen by defining a two-dimensional list, which, in this case, we will fill with soil squares and randomly position the rocks and gems.

Each location in the list matrix will have a name: either wall for the outside boundary, soil for the diggable stuff, rock for a round, moveable boulder, gem for a collectable item, and finally, rockford to symbolise our hero. We can also define an Actor for Rockford, as this will make things like switching images and tracking other properties easier.
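Building that matrix takes a couple of loops. A sketch – the grid size and the soil/rock/gem mix are arbitrary choices, not Mark’s actual values:

import random

GRID_W, GRID_H = 20, 15
grid = []
for y in range(GRID_H):
    row = []
    for x in range(GRID_W):
        if x in (0, GRID_W - 1) or y in (0, GRID_H - 1):
            row.append('wall')    # the outside boundary
        else:
            # mostly soil, with rocks and gems scattered through it
            row.append(random.choice(['soil'] * 7 + ['rock', 'gem']))
    grid.append(row)

rockford = Actor('rockford')
grid[1][1] = 'rockford'           # our hero's starting square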

Here’s Mark’s code, which gets an homage to Boulder Dash running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Our draw() function is just a nested loop to iterate through the list matrix and blit to the screen whatever is indicated in each square. The Rockford Actor is then drawn over the top. We can also keep a count of how many gems have been collected and provide a congratulatory message if all of them are found. In the update() function, there are only two things we really need to worry about: the first being to check for keypresses from the player and move Rockford accordingly, and the second to check rocks to see if they need to move.

Rockford is quite easy to test for movement, as he can only move onto an empty square – a soil square or a gem square. It’s also possible for him to push a boulder if there’s an empty space on the other side. For each boulder, we first test if there’s an empty space below it, and if so, the boulder must move downwards. We also test to see if a boulder is on top of another boulder – if it is, the top boulder can roll off and down onto a space either to the left or the right of the one beneath.

There’s not much to add to this snippet of code to turn it into a playable game of Boulder Dash. See if you can add a timer, some monsters, and, of course, some puzzles for players to solve on each level.

Testing for movement

An important thing to notice about the process of scanning through the list matrix to test for boulder movement is that we need to read the list from the bottom upwards; otherwise, because the boulders move downwards, we may end up testing a boulder multiple times if we test from the beginning to the end of the list. Similarly, if we read the list matrix from the top down, we may end up moving a boulder down and then when reading the next row, coming across the same one again, and moving it a second time.
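In code, that simply means the outer loop runs in reverse. A sketch, assuming dug-out squares are marked 'empty' (a name the article doesn’t specify):

def update_rocks():
    # Scan the bottom row first: a rock that falls lands in a row we've
    # already processed, so it can't be tested and moved twice in one pass
    for y in range(GRID_H - 2, 0, -1):
        for x in range(1, GRID_W - 1):
            if grid[y][x] == 'rock' and grid[y + 1][x] == 'empty':
                grid[y][x], grid[y + 1][x] = 'empty', 'rock'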

Get your copy of Wireframe issue 30

You can read more features like this one in Wireframe issue 30, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 30 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code a Boulder Dash mining game | Wireframe #30 appeared first on Raspberry Pi.

Upcoming changes to the Python SDK in AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/upcoming-changes-to-the-python-sdk-in-aws-lambda/

This blog post describes an upcoming change to the AWS SDK that affects Python developers using the requests module in Botocore. This post explains why the changes are happening, and describes what Python developers must do to continue using the requests library.

The upcoming changes to AWS SDK

Botocore is a low-level interface to many services in the AWS Cloud. The package is the foundation for the AWS CLI and also Boto3, which is the AWS SDK for Python. In August 2018, Botocore was refactored to allow pluggable HTTP clients.

One of the main changes is that the requests library was replaced with urllib3. Additionally, the requests dependency was also unvendored, meaning Botocore can now support a range of versions of urllib3, instead of relying on a specific version. From version 1.13.0, the requests module is no longer part of the AWS SDK for Python. These changes create additional flexibility for Python developers, and can result in performance improvements for applications using Botocore.

Although the SDK has removed the requests module, the Lambda service continues to bundle the requests module in the AWS SDK until March 30, 2020. This is so builders have additional time to decide on the best course of action for their Python Lambda functions that rely on the requests module.

Best practices for using the AWS SDK

For convenience, the Lambda service includes the AWS SDK in its execution environment. This allows Lambda console users to easily use the SDK functionality within the integrated code window. It also allows Lambda to interoperate with the growing number of AWS services and features released to customers.

The best practice for Lambda development is to bundle all dependencies used by your Lambda function, including the AWS SDK. By doing this, your code uses the bundled version and is not affected when the version in the execution environment is upgraded. This is preferable to using the included version of the SDK, since this version can change, and in rare cases might affect compatibility with your code.

If you are currently bundling a version of the SDK with your Lambda function, you do not need to take any further action. Your code continues to use the bundled version and the upcoming changes do not affect you.

Importing the requests module directly into your code

If you are using the AWS SDK in the Lambda execution environment and do not want to bundle a version into your zipped deployment package, you have a couple of additional options.

First, you can install the requests module into your Python environment and import the module directly. Currently, you may be importing the library from Botocore in your Lambda function using this code:

from botocore.vendored import requests

To install the requests module, enter the following in a terminal window:

pip install requests -t ./

After the installation, update the import statement in your code as shown below:

import requests

This updates your dependency from the Botocore vendored version to a locally packaged version of the module. As a result, your code is unaffected after the requests module is no longer available via the AWS SDK in the execution environment.

For more information on bundling libraries, read how to build an AWS Lambda deployment package for Python.

Using AWS Lambda Layers

AWS has also published Lambda layers containing version 1.12.221 of the AWS SDK, which includes the requests module in Botocore. To use this, first identify the layer ARN you need, using the Python runtime version and AWS Region.

To add this layer to your function in the Lambda console:

  1. Select your function from the Functions list.
  2. Locate the function’s ARN near the top of the display. In the ARN, note the AWS Region code after arn:aws:lambda:. In the Function code panel, note the Python version in the Runtime dropdown.
  3. Using both the AWS Region code and Python version, find the ARN from the lists below. For Python versions 2.7, 3.6 and 3.7, locate the layer ARN for your Region:

    Region            ARN
    ap-northeast-1    arn:aws:lambda:ap-northeast-1:249908578461:layer:AWSLambda-Python-AWS-SDK:4
    us-east-1         arn:aws:lambda:us-east-1:668099181075:layer:AWSLambda-Python-AWS-SDK:4
    ap-southeast-1    arn:aws:lambda:ap-southeast-1:468957933125:layer:AWSLambda-Python-AWS-SDK:4
    eu-west-1         arn:aws:lambda:eu-west-1:399891621064:layer:AWSLambda-Python-AWS-SDK:4
    us-west-1         arn:aws:lambda:us-west-1:325793726646:layer:AWSLambda-Python-AWS-SDK:4
    ap-east-1         arn:aws:lambda:ap-east-1:118857876118:layer:AWSLambda-Python-AWS-SDK:4
    ap-northeast-2    arn:aws:lambda:ap-northeast-2:296580773974:layer:AWSLambda-Python-AWS-SDK:4
    ap-northeast-3    arn:aws:lambda:ap-northeast-3:961244031340:layer:AWSLambda-Python-AWS-SDK:4
    ap-south-1        arn:aws:lambda:ap-south-1:631267018583:layer:AWSLambda-Python-AWS-SDK:4
    ap-southeast-2    arn:aws:lambda:ap-southeast-2:817496625479:layer:AWSLambda-Python-AWS-SDK:4
    ca-central-1      arn:aws:lambda:ca-central-1:778625758767:layer:AWSLambda-Python-AWS-SDK:4
    eu-central-1      arn:aws:lambda:eu-central-1:292169987271:layer:AWSLambda-Python-AWS-SDK:4
    eu-north-1        arn:aws:lambda:eu-north-1:642425348156:layer:AWSLambda-Python-AWS-SDK:4
    eu-west-2         arn:aws:lambda:eu-west-2:142628438157:layer:AWSLambda-Python-AWS-SDK:4
    eu-west-3         arn:aws:lambda:eu-west-3:959311844005:layer:AWSLambda-Python-AWS-SDK:4
    sa-east-1         arn:aws:lambda:sa-east-1:640010853179:layer:AWSLambda-Python-AWS-SDK:4
    us-east-2         arn:aws:lambda:us-east-2:259788987135:layer:AWSLambda-Python-AWS-SDK:4
    us-west-2         arn:aws:lambda:us-west-2:420165488524:layer:AWSLambda-Python-AWS-SDK:5
    cn-north-1        arn:aws-cn:lambda:cn-north-1:683298794825:layer:AWSLambda-Python-AWS-SDK:4
    cn-northwest-1    arn:aws-cn:lambda:cn-northwest-1:382066503313:layer:AWSLambda-Python-AWS-SDK:4
    us-gov-west-1     arn:aws-us-gov:lambda:us-gov-west-1:556739011827:layer:AWSLambda-Python-AWS-SDK:4
    us-gov-east-1     arn:aws-us-gov:lambda:us-gov-east-1:138526772879:layer:AWSLambda-Python-AWS-SDK:4

    For Python version 3.8, locate the layer ARN for your Region:

    Region            ARN
    ap-northeast-1    arn:aws:lambda:ap-northeast-1:249908578461:layer:AWSLambda-Python-AWS-SDK:5
    us-east-1         arn:aws:lambda:us-east-1:668099181075:layer:AWSLambda-Python-AWS-SDK:5
    ap-southeast-1    arn:aws:lambda:ap-southeast-1:468957933125:layer:AWSLambda-Python-AWS-SDK:5
    eu-west-1         arn:aws:lambda:eu-west-1:399891621064:layer:AWSLambda-Python-AWS-SDK:5
    us-west-1         arn:aws:lambda:us-west-1:325793726646:layer:AWSLambda-Python-AWS-SDK:5
    ap-east-1         arn:aws:lambda:ap-east-1:118857876118:layer:AWSLambda-Python-AWS-SDK:5
    ap-northeast-2    arn:aws:lambda:ap-northeast-2:296580773974:layer:AWSLambda-Python-AWS-SDK:5
    ap-northeast-3    arn:aws:lambda:ap-northeast-3:961244031340:layer:AWSLambda-Python-AWS-SDK:5
    ap-south-1        arn:aws:lambda:ap-south-1:631267018583:layer:AWSLambda-Python-AWS-SDK:5
    ap-southeast-2    arn:aws:lambda:ap-southeast-2:817496625479:layer:AWSLambda-Python-AWS-SDK:5
    ca-central-1      arn:aws:lambda:ca-central-1:778625758767:layer:AWSLambda-Python-AWS-SDK:5
    eu-central-1      arn:aws:lambda:eu-central-1:292169987271:layer:AWSLambda-Python-AWS-SDK:5
    eu-north-1        arn:aws:lambda:eu-north-1:642425348156:layer:AWSLambda-Python-AWS-SDK:5
    eu-west-2         arn:aws:lambda:eu-west-2:142628438157:layer:AWSLambda-Python-AWS-SDK:5
    eu-west-3         arn:aws:lambda:eu-west-3:959311844005:layer:AWSLambda-Python-AWS-SDK:5
    sa-east-1         arn:aws:lambda:sa-east-1:640010853179:layer:AWSLambda-Python-AWS-SDK:5
    us-east-2         arn:aws:lambda:us-east-2:259788987135:layer:AWSLambda-Python-AWS-SDK:5
    us-west-2         arn:aws:lambda:us-west-2:420165488524:layer:AWSLambda-Python-AWS-SDK:6
    cn-north-1        arn:aws-cn:lambda:cn-north-1:683298794825:layer:AWSLambda-Python-AWS-SDK:5
    cn-northwest-1    arn:aws-cn:lambda:cn-northwest-1:382066503313:layer:AWSLambda-Python-AWS-SDK:5
    us-gov-west-1     arn:aws-us-gov:lambda:us-gov-west-1:556739011827:layer:AWSLambda-Python-AWS-SDK:5
    us-gov-east-1     arn:aws-us-gov:lambda:us-gov-east-1:138526772879:layer:AWSLambda-Python-AWS-SDK:5

    In this example, using Python 3.6 in us-west-2, the layer ARN is arn:aws:lambda:us-west-2:420165488524:layer:AWSLambda-Python-AWS-SDK:5.

  4. Select the Layers view in the Designer panel, then choose Add a layer.
  5. Select the Provide a layer version ARN option, and then paste the relevant ARN from the list above into the Layer version ARN field. Choose Add.

The Lambda function is now using the layer.

Alternatively, you can use the CLI to add a Lambda Layer to your existing Lambda function. To do this, first identify the correct layer ARN using the procedure above. Next, execute the following command from a terminal window, replacing the layer parameter with the ARN:

aws lambda update-function-configuration --function-name yourFunctionName --layers arn:aws:lambda:xxxxxxxxxxxxxxxxxxxx

If you are using the AWS Serverless Application Model, you can add the layer to an existing function using the following YAML in the SAM template, replacing arn with the ARN:

Layers:
  - arn
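
For context, a hypothetical function resource with the layer attached might look like this, using the us-west-2 ARN for Python 3.7 from the first table (the handler, runtime, and code location are placeholders):

Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.7
      CodeUri: src/
      Layers:
        - arn:aws:lambda:us-west-2:420165488524:layer:AWSLambda-Python-AWS-SDK:5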

To learn more, see our documentation describing how to add layers to Lambda functions.

Conclusion

The change to Botocore helps improve flexibility and performance for the AWS SDK. If you are bundling a fixed AWS SDK version with your Python version, you do not need to take any action. If you are using the AWS SDK included in the execution environment and want to continue using the requests module, you can include the Lambda Layer with the appropriate AWS SDK version, or include the requests module directly in your application package.

Continued support for Python 2.7 on AWS Lambda

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/continued-support-for-python-2-7-on-aws-lambda/

The Python Software Foundation (PSF), which is the Python language governing body, ended support for Python 2.7 on January 1, 2020. Additionally, many popular open source software packages also ended their support for Python 2.7 then.

What is AWS Lambda doing?

We recognize that Python 2 and 3 differ across several core language aspects, and the application binary interface is not always compatible. We also recognize that these differences can make migration challenging. To allow you additional time to prepare, AWS Lambda will continue to provide critical security patches for the Python 2.7 runtime until at least December 31, 2020. Lambda’s scope of support includes the Python interpreter and standard library, but does not extend to third-party packages.

What do I need to do?

There is no action required to continue using your existing Python 2.7 Lambda functions. Creating and updating Python 2.7 functions will continue to work as normal. However, we highly recommend that you migrate your Lambda functions to Python 3. AWS Lambda supports Python 3.6, 3.7, and 3.8. As always, you should test your functions for Python 3 language compatibility before applying changes to your production functions.

The Python community offers helpful guides and tools, such as the official porting guide and the 2to3 utility, to help you port Python 2 code to Python 3.
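
For example, the 2to3 tool that ships with Python prints the changes a file needs, and applies them in place when given the -w flag (the filename here is a placeholder):

2to3 -w my_script.py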

What if I have issues or need help?

Please contact us through AWS Support or the AWS Developer Forums with any questions.

Happy coding!

How to run a script at start-up on a Raspberry Pi using crontab

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/how-to-run-a-script-at-start-up-on-a-raspberry-pi-using-crontab/

Do you need to run a script whenever your Raspberry Pi turns on? Here’s Estefannie to explain how to edit crontab to do exactly that.

How to start a script at start-up on a Raspberry Pi // LEARN SOMETHING

Do you want your Raspberry Pi to automatically run your code when it is connected to power? Then you are in the right place. In this new #LEARNSOMETHING video I show you how to make your Raspberry Pi run your script automatically when it is connected to a power source.

Running script on startup

While there are many ways of asking your Raspberry Pi to run a script on start-up, crontab -e is definitely one of the easiest.

AND, as Estefannie explains (in part thanks to me bugging her to do so), if you create a run folder on your desktop, you can switch out the Python scripts you want to run at start-up whenever you like and will never have to edit crontab again!
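
In sketch form, a crontab entry using cron’s @reboot directive could look like this (the path to the script is just an example):

@reboot python3 /home/pi/Desktop/run/main.py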

Weeeeee!

Now go write some wonderful and inspiring festive scripts while I take a well-earned nap. I just got off a plane yet here I am, writing blog posts for y’all because I love you THAT DARN MUCH!

A fluffy cat

This is Teddy. Teddy is also in the video.

And don’t forget to like and subscribe for more Estefannie Explains it All goodness!

The post How to run a script at start-up on a Raspberry Pi using crontab appeared first on Raspberry Pi.

Create a turn-based combat system | Wireframe #28

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/create-a-turn-based-combat-system-wireframe-28/

Learn how to create the turn-based combat system found in games like Pokémon, Final Fantasy, and Undertale. Raspberry Pi’s Rik Cross shows you how.

With their emphasis on trading and collecting as well as turn-based combat, the Pokémon games helped bring RPG concepts to the masses.

In the late 1970s, high school student Richard Garriott made a little game called Akalabeth. Programmed in Applesoft BASIC, it helped set the template for the role-playing genre on computers. Even today, turn-based combat is still a common sight in games, with this autumn’s Pokémon Sword and Shield revolving around a battle system which sees opponents take turns to plan and execute attacks or defensive moves.

The turn-based combat system in this article is text-only, and works by allowing players to choose to defend against or attack their opponent in turn. The battle ends when only one player has some health remaining.

Each Player taking part in the battle is added to the static players list as it’s created. Players have a name, a health value (initially set to 100) and a Boolean defending value (initially set to False) to indicate whether a player is using their shield. Players also have an inputmethod attribute, which is the function used for getting player input for making various choices in the game. This function is passed to the object when created, and means that we can have human players that give their input through the keyboard, as well as computer players that make choices (in our case simply by making a random choice between the available options).
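
Here’s a minimal sketch of such a Player class; the attribute names follow the description above, but the two input functions are our own illustrations:

import random

class Player:
    players = []  # static list of everyone taking part in the battle

    def __init__(self, name, inputmethod):
        self.name = name
        self.health = 100
        self.defending = False
        self.inputmethod = inputmethod  # function used to make choices
        Player.players.append(self)

def keyboard_input(options):
    # Human player: type the number of the chosen option
    for i, option in enumerate(options):
        print(i, option)
    return options[int(input('> '))]

def computer_input(options):
    # Computer player: simply make a random choice
    return random.choice(options)

hero = Player('Hero', keyboard_input)
villain = Player('Villain', computer_input)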

Richard Garriott’s Akalabeth laid the groundwork for Ultima, and was one of the earliest CRPGs.

A base Action class specifies an action owner and an opponent, as well as an execute() method which has no effect on the game. Subclasses of the base class override this execute() method to specify the effect the action has on the owner and/or the opponent of the action. As a basic example, two actions have been created: Defend, which sets the owner’s defending attribute to True, and Attack, which sets the owner’s defending attribute to False, and lowers the opponent’s health by a random amount depending on whether or not they are defending.
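
A sketch of those classes might look like this (the damage ranges are assumptions):

import random

class Action:
    def __init__(self, owner, opponent):
        self.owner = owner
        self.opponent = opponent

    def execute(self):
        pass  # the base action has no effect on the game

class Defend(Action):
    def execute(self):
        self.owner.defending = True

class Attack(Action):
    def execute(self):
        self.owner.defending = False
        if self.opponent.defending:
            self.opponent.health -= random.randint(1, 10)   # shield absorbs most damage
        else:
            self.opponent.health -= random.randint(10, 25)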

Players take turns to choose a single action to perform in the battle, starting with the human ‘Hero’ player. The choose_action() method is used to decide what to do next (in this case, either attack or defend), as well as an opponent if the player has chosen to attack. A player can only be selected as an opponent if they have a health value greater than 0, and are therefore still in the game. This choose_action() method returns an Action, which is then executed using its execute() method. A few time.sleep() commands have also been thrown in here to ramp up the suspense!

After each player has had their turn, a check is done to make sure that at least two players still have a health value greater than 0, and therefore that the battle can continue. If so, the static get_next_player() method finds the next player still in the game to take their turn in the battle, otherwise, the game ends and the winner is announced.
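
Put together, the battle loop might be sketched like this (the choose_action() and get_next_player() signatures are assumptions):

import time

current = hero  # the human 'Hero' player takes the first turn
while len([p for p in Player.players if p.health > 0]) > 1:
    action = current.choose_action()        # returns an Action to perform
    action.execute()
    time.sleep(1)                           # ramp up the suspense!
    current = Player.get_next_player(current)

winner = [p for p in Player.players if p.health > 0][0]
print(winner.name, 'wins the battle!')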

Our example battle can be easily extended in lots of interesting ways. The AI for choosing an action could also be made more sophisticated, by looking at opponents’ health or defending attributes before choosing an action. You could also give each action a ‘cost’, and give players a number of action ‘points’ per turn. Chosen actions would be added to a list, until all of the points have been used. These actions would then be executed one after the other, before moving on to the next player’s turn.

Here’s Rik’s code, which gets a simple turn-based combat system running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

You can read more features like this one in Wireframe issue 28, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 28 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Create a turn-based combat system | Wireframe #28 appeared first on Raspberry Pi.

Open-Sourcing Metaflow, a Human-Centric Framework for Data Science

Post Syndicated from Netflix Technology Blog original https://medium.com/netflix-techblog/open-sourcing-metaflow-a-human-centric-framework-for-data-science-fa72e04a5d9?source=rss----2615bd06b42e---4

by David Berg, Ravi Kiran Chirravuri, Romain Cledat, Savin Goyal, Ferras Hamad, Ville Tuulos

tl;dr Metaflow is now open-source! Get started at metaflow.org.

Netflix applies data science to hundreds of use cases across the company, including optimizing content delivery and video encoding. Data scientists at Netflix relish our culture that empowers them to work autonomously and use their judgment to solve problems independently. We want our data scientists to be curious and take smart risks that have the potential for high business impact.

About two years ago, we at our newly formed Machine Learning Infrastructure team started asking our data scientists a question: “What is the hardest thing for you as a data scientist at Netflix?” We were expecting to hear answers related to large-scale data and models, and maybe issues related to modern GPUs. Instead, we heard stories about projects where getting the first version to production took surprisingly long — mainly because of mundane reasons related to software engineering. We heard many stories about difficulties related to data access and basic data processing. We sat in meetings where data scientists discussed with their stakeholders how best to manage different versions of their models without impacting production. We saw how excited data scientists were about modern off-the-shelf machine learning libraries, but we also witnessed various issues caused by these libraries when they were casually included as dependencies in production workflows.

We realized that nearly everything that data scientists wanted to do was already doable technically, but nothing was easy enough. Our job as a Machine Learning Infrastructure team would therefore not be mainly about enabling new technical feats. Instead, we should make common operations so easy that data scientists would not even realize that they were difficult before. We would focus our energy solely on improving data scientist productivity by being fanatically human-centric.

How could we improve the quality of life for data scientists? The following picture started emerging:

Our data scientists love the freedom of being able to choose the best modeling approach for their project. They know that feature engineering is critical for many models, so they want to stay in control of model inputs and feature engineering logic. In many cases, data scientists are quite eager to own their own models in production, since it allows them to troubleshoot and iterate the models faster.

On the other hand, very few data scientists feel strongly about the nature of the data warehouse, the compute platform that trains and scores their models, or the workflow scheduler. Preferably, from their point of view, these foundational components should “just work”. If, and when they fail, the error messages should be clear and understandable in the context of their work.

A key observation was that most of our data scientists had nothing against writing Python code. In fact, plain-and-simple Python is quickly becoming the lingua franca of data science, so using Python is preferable to domain specific languages. Data scientists want to retain their freedom to use arbitrary, idiomatic Python code to express their business logic — like they would do in a Jupyter notebook. However, they don’t want to spend too much time thinking about object hierarchies, packaging issues, or dealing with obscure APIs unrelated to their work. The infrastructure should allow them to exercise their freedom as data scientists but it should provide enough guardrails and scaffolding, so they don’t have to worry about software architecture too much.

Introducing Metaflow

These observations motivated Metaflow, our human-centric framework for data science. Over the past two years, Metaflow has been used internally at Netflix to build and manage hundreds of data-science projects from natural language processing to operations research.

By design, Metaflow is a deceptively simple Python library:

Data scientists can structure their workflow as a Directed Acyclic Graph of steps, as depicted above. The steps can be arbitrary Python code. In this hypothetical example, the flow trains two versions of a model in parallel and chooses the one with the highest score.

On the surface, this doesn’t seem like much. There are many existing frameworks, such as Apache Airflow or Luigi, which allow execution of DAGs consisting of arbitrary Python code. The devil is in the many carefully designed details of Metaflow: for instance, note how in the above example data and models are stored as normal Python instance variables. They work even if the code is executed on a distributed compute platform, which Metaflow supports by default, thanks to Metaflow’s built-in content-addressed artifact store. In many other frameworks, loading and storing of artifacts is left as an exercise for the user, which forces them to decide what should and should not be persisted. Metaflow removes this cognitive overhead.
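
As an illustration, the hypothetical two-model flow described above might be written roughly like this (the helper functions load_data, train_variant_a, train_variant_b, and score are placeholders):

from metaflow import FlowSpec, step

class TrainFlow(FlowSpec):

    @step
    def start(self):
        self.data = load_data()                  # placeholder helper
        self.next(self.train_a, self.train_b)    # fan out into two branches

    @step
    def train_a(self):
        self.model = train_variant_a(self.data)  # placeholder helper
        self.next(self.join)

    @step
    def train_b(self):
        self.model = train_variant_b(self.data)  # placeholder helper
        self.next(self.join)

    @step
    def join(self, inputs):
        # Artifacts like self.model are ordinary instance variables,
        # persisted automatically by Metaflow's artifact store
        self.model = max((i.model for i in inputs), key=score)
        self.next(self.end)

    @step
    def end(self):
        pass

if __name__ == '__main__':
    TrainFlow()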

Metaflow is packed with human-centric details like this, all of which aim at boosting data scientist productivity. For a comprehensive overview of all features of Metaflow, take a look at our documentation at docs.metaflow.org.

Metaflow on Amazon Web Services

Netflix’s data warehouse contains hundreds of petabytes of data. While a typical machine learning workflow running on Metaflow touches only a small shard of this warehouse, it can still process terabytes of data.

Metaflow is a cloud-native framework. It leverages elasticity of the cloud by design — both for compute and storage. Netflix has been one of the largest users of Amazon Web Services (AWS) for many years and we have accumulated plenty of operational experience and expertise in dealing with the cloud, AWS in particular. For the open-source release, we partnered with AWS to provide a seamless integration between Metaflow and various AWS services.

Metaflow comes with built-in capability to snapshot all code and data in Amazon S3 automatically, which is a key value proposition of our internal Metaflow setup. This provides us with a comprehensive solution for versioning and experiment tracking without any user intervention, which is core to any production-grade machine learning infrastructure.

In addition, Metaflow comes bundled with a high-performance S3 client, which can load data up to 10Gbps. This client has been massively popular amongst our users, who can now load data into their workflows an order of magnitude faster than before, enabling faster iteration cycles.

For general purpose data processing, Metaflow integrates with AWS Batch, which is a managed, container-based compute platform provided by AWS. The user can benefit from infinitely scalable compute clusters by adding a single line in their code: @batch. For training machine learning models, besides writing their own functions, the user has the choice to use AWS Sagemaker, which provides high-performance implementations of various models, many of which support distributed training.

Metaflow supports all common off-the-shelf machine learning frameworks through our @conda decorator, which allows the user to specify external dependencies for their steps safely. The @conda decorator freezes the execution environment, providing good guarantees of reproducibility, both when executed locally as well as in the cloud.

For more details, read this page about Metaflow’s integration with AWS.

From Prototype To Production

Out of the box, Metaflow provides a first-class local development experience. It allows data scientists to develop and test code quickly on their laptops, much like any Python script. If a workflow supports parallelism, Metaflow takes advantage of all the CPU cores available on the development machine.

We encourage our users to deploy their workflows to production as soon as possible. In our case, “production” means a highly available, centralized DAG scheduler, Meson, where users can export their Metaflow runs for execution with a single command. This allows them to start testing their workflow with regularly updating data quickly, which is a highly effective way to surface bugs and issues in the model. Since Meson is not available in open-source, we are working on providing a similar integration to AWS Step Functions, which is a highly available workflow scheduler.

In a complex business environment like Netflix’s, there are many ways to consume the results of a data science workflow. Often, the final results are written to a table, to be consumed by a dashboard. Sometimes, the resulting model is deployed as a microservice to support real-time inferencing. It is also common to chain workflows so that the results of a workflow are consumed by another. Metaflow supports all these modalities, although some of these features are not yet available in the open-source version.

When it comes to inspecting the results, Metaflow comes with a notebook-friendly client API. Most of our data scientists are heavy users of Jupyter notebooks, so we decided to focus our UI efforts on a seamless integration with notebooks, instead of providing a one-size-fits-all Metaflow UI. Our data scientists can build custom model UIs in notebooks, fetching artifacts from Metaflow, which provide just the right information about each model. A similar experience is available with AWS Sagemaker notebooks with open-source Metaflow.

Get Started With Metaflow

Metaflow has been eagerly adopted inside of Netflix, and today, we are making Metaflow available as an open-source project.

We hope that our vision of data scientist autonomy and productivity resonates outside Netflix as well. We welcome you to try Metaflow, start using it in your organization, and participate in its development.

You can find the project home page at metaflow.org and the code at github.com/Netflix/metaflow. Metaflow is comprehensively documented at docs.metaflow.org. The quickest way to get started is to follow our tutorial. If you want to learn more before getting your hands dirty, you can watch presentations about Metaflow at the high level or dig deeper into the internals of Metaflow.

If you have any questions, thoughts, or comments about Metaflow, you can find us at Metaflow chat room or you can reach us by email at [email protected]. We are eager to hear from you!


Open-Sourcing Metaflow, a Human-Centric Framework for Data Science was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Code a Frogger-style road-crossing game | Wireframe #27

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-a-frogger-style-road-crossing-game-wireframe-27/

Guide a frog across busy roads and rivers. Mark Vanstone shows you how to code a simple remake of Konami’s arcade game, Frogger.

Konami’s original Frogger: so iconic, it even featured in a 1998 episode of Seinfeld.

Frogger

Why did the frog cross the road? Because Frogger would be a pretty boring game if it didn’t. Released in 1981 by Konami, the game appeared in assorted bars, sports halls, and arcades across the world, and became an instant hit. The concept was simple: players used the joystick to move a succession of frogs from the bottom of the screen to the top, avoiding a variety of hazards – cars, lorries, and later, the occasional crocodile. Each frog had to be safely manoeuvred to one of five alcoves within a time limit, while extra points were awarded for eating flies along the way.

Before Frogger, Konami mainly focused on churning out clones of other hit arcade games like Space Invaders and Breakout; Frogger was one of its earliest original ideas, and the simplicity of its concept saw it ported to just about every home system available at the time. (Ironically, Konami’s game would fall victim to repeated cloning by other developers.) Decades later, developers still take inspiration from it; Hipster Whale’s Crossy Road turned Frogger into an endless running game; earlier this year, Konami returned to the creative well with Frogger in Toy Town, released on Apple Arcade.

Code your own Konami Frogger

We can recreate much of Frogger’s gameplay in just a few lines of Pygame Zero code. The key elements are the frog’s movement, which uses the arrow keys, vehicles that move across the screen, and floating objects – logs and turtles – moving in opposite directions. Our background graphic will provide the road, river, and grass for our frog to move over. The frog’s movement will be triggered from an on_key_down() function, and as the frog moves, we switch to a second frame with legs outstretched, reverting to a sitting position after a short delay. We can use the inbuilt Actor properties to change the image and set the angle of rotation.
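
A sketch of that movement handler might look like this (the image names, step size, and delay are assumptions):

GRID = 40  # one jump moves the frog a whole grid square

def on_key_down(key):
    if key == keys.UP:
        frog.angle, frog.y = 0, frog.y - GRID
    elif key == keys.DOWN:
        frog.angle, frog.y = 180, frog.y + GRID
    elif key == keys.LEFT:
        frog.angle, frog.x = 90, frog.x - GRID
    elif key == keys.RIGHT:
        frog.angle, frog.x = 270, frog.x + GRID
    frog.image = 'frog_jump'           # legs outstretched
    clock.schedule(sit, 0.2)           # revert after a short delay

def sit():
    frog.image = 'frog_sit'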

In our Frogger homage, we move the frog with the arrow keys to avoid the traffic, and jump onto the floating logs and turtles.

For all the other moving elements, we can also use Pygame Zero Actors; we just need to make an array for our cars with different graphics for the various rows, and an array for our floating objects in the same way.
In our update() function, we need to move each Actor according to which row it’s in, and when an Actor disappears off the screen, set the x coordinate so that it reappears on the opposite side.
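
In sketch form (the speed attribute and off-screen margins are assumptions):

def update():
    for car in cars:
        car.x += car.speed  # negative speed for rows moving right to left
        if car.speed > 0 and car.x > WIDTH + 50:
            car.x = -50                 # reappear on the left
        elif car.speed < 0 and car.x < -50:
            car.x = WIDTH + 50          # reappear on the right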

Handling the logic of the frog moving across the road is quite easy; we just check for collision with each of the cars, and if the frog hits a car, then we have a squashed frog. The river crossing is a little more complicated. Each time the frog moves on the river, we need to make sure that it’s on a floating Actor. We therefore check to make sure that the frog is in collision with one of the floating elements, otherwise it’s game over.
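
The checks themselves might be sketched like this (the river test and game-over handler are hypothetical):

def check_frog():
    for car in cars:
        if frog.colliderect(car):
            game_over()                 # squashed frog
            return
    if in_river(frog.y):                # hypothetical row test
        if not any(frog.colliderect(f) for f in floaters):
            game_over()                 # nothing to stand on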

There are lots of other elements you could add to the example shown here: the original arcade game provided several frogs to guide to their alcoves on the other side of the river, while crocodiles also popped up from time to time to add a bit more danger. Pygame Zero has all the tools you need to make a fully functional version of Konami’s hit.

Here’s Mark’s code, which gets a Frogger homage running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 27

You can read more features like this one in Wireframe issue 27, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 27 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code a Frogger-style road-crossing game | Wireframe #27 appeared first on Raspberry Pi.

Introducing Flan Scan: Cloudflare’s Lightweight Network Vulnerability Scanner

Post Syndicated from Nadin El-Yabroudi original https://blog.cloudflare.com/introducing-flan-scan/

Today, we’re excited to open source Flan Scan, Cloudflare’s in-house lightweight network vulnerability scanner. Flan Scan is a thin wrapper around Nmap that converts this popular open source tool into a vulnerability scanner with the added benefit of easy deployment.

We created Flan Scan after two unsuccessful attempts at using “industry standard” scanners for our compliance scans. A little over a year ago, we were paying a big vendor for their scanner until we realized it was one of our highest security costs and many of its features were not relevant to our setup. It became clear we were not getting our money’s worth. Soon after, we switched to an open source scanner and took on the task of managing its complicated setup. That made it difficult to deploy to our entire fleet of more than 190 data centers.

We had a deadline at the end of Q3 to complete an internal scan for our compliance requirements but no tool that met our needs. Given our history with existing scanners, we decided to set off on our own and build a scanner that worked for our setup. To design Flan Scan, we worked closely with our auditors to understand the requirements of such a tool. We needed a scanner that could accurately detect the services on our network and then look up those services in a database of CVEs to find vulnerabilities relevant to our services. Additionally, unlike other scanners we had tried, our tool had to be easy to deploy across our entire network.

We chose Nmap as our base scanner because, unlike other network scanners which sacrifice accuracy for speed, it prioritizes detecting services, thereby reducing false positives. We also liked Nmap because of the Nmap Scripting Engine (NSE), which allows scripts to be run against the scan results. We found that the “vulners” script, available on NSE, mapped the detected services to relevant CVEs from a database, which is exactly what we needed.

The next step was to make the scanner easy to deploy while ensuring it outputted actionable and valuable results. We added three features to Flan Scan which helped package up Nmap into a user-friendly scanner that can be deployed across a large network.

  • Easy Deployment and Configuration – To create a lightweight scanner with easy configuration, we chose to run Flan Scan inside a Docker container. As a result, Flan Scan can be built and pushed to a Docker registry and maintains the flexibility to be configured at runtime. Flan Scan also includes sample Kubernetes configuration and deployment files with a few placeholders so you can get up and scanning quickly.
  • Pushing Results to the Cloud – Flan Scan adds support for pushing results to a Google Cloud Storage Bucket or an S3 bucket. All you need to do is set a few environment variables and Flan Scan will do the rest. This makes it possible to run many scans across a large network and collect the results in one central location for processing.
  • Actionable Reports – Flan Scan generates actionable reports from Nmap’s output so you can quickly identify vulnerable services on your network, the applicable CVEs, and the IP addresses and ports where these services were found. The reports are useful for engineers following up on the results of the scan as well as auditors looking for evidence of compliance scans.

Sample run of Flan Scan from start to finish. 

How has Flan Scan improved Cloudflare’s network security?

By the end of Q3, not only had we completed our compliance scans, we also used Flan Scan to tangibly improve the security of our network. At Cloudflare, we pin the software version of some services in production because it allows us to prioritize upgrades by weighing the operational cost of upgrading against the improvements of the latest version. Flan Scan’s results revealed that our FreeIPA nodes, used to manage Linux users and hosts, were running an outdated version of Apache with several medium severity vulnerabilities. As a result, we prioritized their update. Flan Scan also found a vulnerable instance of PostgreSQL leftover from a performance dashboard that no longer exists.

Flan Scan is part of a larger effort to expand our vulnerability management program. We recently deployed osquery to our entire network to perform host-based vulnerability tracking. By complementing osquery’s findings with Flan Scan’s network scans we are working towards comprehensive visibility of the services running at our edge and their vulnerabilities. With two vulnerability trackers in place, we decided to build a tool to manage the increasing number of vulnerability sources. Our tool sends alerts on new vulnerabilities, filters out false positives, and tracks remediated vulnerabilities. Flan Scan’s valuable security insights were a major impetus for creating this vulnerability tracking tool.

How does Flan Scan work?

The first step of Flan Scan is running an Nmap scan with service detection. Flan Scan’s default Nmap scan runs the following scans:

  1. ICMP ping scan – Nmap determines which of the IP addresses given are online.
  2. SYN scan – Nmap scans the 1000 most common ports of the IP addresses which responded to the ICMP ping. Nmap marks ports as open, closed, or filtered.
  3. Service detection scan – To detect which services are running on open ports, Nmap performs TCP handshake and banner grabbing scans.

Other types of scanning such as UDP scanning and IPv6 addresses are also possible with Nmap. Flan Scan allows users to run these and any other extended features of Nmap by passing in Nmap flags at runtime.

Sample Nmap output

Flan Scan adds the “vulners” script tag in its default Nmap command to include in the output a list of vulnerabilities applicable to the services detected. The vulners script works by making API calls to a service run by vulners.com which returns any known vulnerabilities for the given service.
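
As a rough approximation (not the exact command Flan Scan runs), the default scan plus the vulners script resembles an Nmap invocation like this, with a placeholder target range:

nmap -sS -sV --script=vulners -oX report.xml 192.0.2.0/24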

Sample Nmap output with Vulners script

The next step of Flan Scan uses a Python script to convert the structured XML of Nmap’s output into an actionable report. The reports of the previous scanner we used listed each of the IP addresses scanned and presented the vulnerabilities applicable to that location. Since we had multiple IP addresses running the same service, the report would repeat the same list of vulnerabilities under each of these IP addresses. This meant scrolling back and forth through documents hundreds of pages long to obtain a list of all IP addresses with the same vulnerabilities. The results were impossible to digest.

Flan Scan’s results are structured around services. The report enumerates each vulnerable service, with a list beneath it of the relevant vulnerabilities and all the IP addresses running that service. This structure makes the report shorter and more actionable, since the services that need to be remediated can be clearly identified. Flan Scan reports are made using LaTeX, because who doesn’t like nicely formatted reports that can be generated with a script? The raw LaTeX file that Flan Scan outputs can be converted to a beautiful PDF by using tools like pdflatex or TeXShop.

Sample Flan Scan report

What’s next?

Cloudflare’s mission is to help build a better Internet for everyone, not just Internet giants who can afford to buy expensive tools. We’re open sourcing Flan Scan because we believe it shouldn’t cost tons of money to have strong network security.

You can get started running a vulnerability scan on your network in a few minutes by following the instructions on the README. We welcome contributions and suggestions from the community.

Python 3.8 runtime now available in AWS Lambda

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/python-3-8-runtime-now-available-in-aws-lambda/

You can now develop your AWS Lambda functions using the Python 3.8 runtime. Start using this runtime today by specifying a runtime parameter value of python3.8 when creating or updating Lambda functions.

New Python runtime features

Python 3.8 is a stable release and brings several new features, including assignment expressions, positional-only arguments, and vectorcall.

Assignment expressions

Assignment expressions provide a way to assign values to variables within expressions using the new notation NAME := expr. This is informally known as the “walrus operator,” as it looks like the eyes and tusks of a walrus on its side.

Before:

>>> walrus = True
>>> print(walrus)
True

After:

>>> print(walrus := True)
True

Previously, assignment was only available in statement form. With Python 3.8, it is available in list comprehensions and other expression contexts.

For examples, usage, and limitations, see PEP-572.

Positional-only arguments

Positional-only parameters give more control to library authors by introducing a new function parameter syntax `/` to indicate that some function parameters must be specified positionally and cannot be used as keyword arguments.

When describing APIs, they can be used to better express the intended usage and allow the API to evolve in a safe, backward-compatible way. They also make the Python language more consistent with existing documentation and the behavior of various “builtin” and standard library functions.
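
For example, in this sketch the first two parameters can only be supplied positionally:

def distance(x, y, /, precision=2):
    return round((x ** 2 + y ** 2) ** 0.5, precision)

distance(3, 4)          # fine: returns 5.0
distance(x=3, y=4)      # TypeError: x and y are positional-only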

For a full specification, including syntax and examples, see PEP-570 and PEP-457.

f-strings support

An f-string is a formatted string literal that can enclose variables, and even expressions, inside curly braces. These are evaluated at runtime and included in the string; an f-string is recognized by the leading f:

>>> city = "London"
>>> f"Living in {city} is amazing"
'Living in London is amazing'

Python 3.8 adds the ability to use assignment expressions inside f-strings, which are evaluated from left to right.

>>> import math
>>> radius = 3.8

>>> f"With a diameter of {( diameter := 2 * radius)}, the circumference is {math.pi * diameter:.2f}"
' With a diameter of 7.6 the circumference is 23.88'

Vectorcall

Vectorcall is a new protocol and calling convention based on the “fastcall” convention, which was previously used internally by CPython. The new features can be used by any user-defined extension class. Vectorcall generalizes the calling convention used internally for Python and builtin functions so that all calls can benefit from better performance. It is designed to remove the overhead of temporary object creation and multiple indirections.

For additional information on vectorcall, see PEP-590.

Amazon Linux 2

Python 3.8, like Node.js 10 and 12, and Java 11, is based on an Amazon Linux 2 execution environment. Amazon Linux 2 provides a secure, stable, and high-performance execution environment to develop and run cloud and enterprise applications.

Next steps

Get started building with Python 3.8 today by specifying a runtime parameter value of python3.8 when creating or updating your Lambda functions.

Enjoy, go build with Python 3.8!

New Automation Features In AWS Systems Manager

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/new-automation-features-in-aws-systems-manager/

Today we are announcing additional automation features inside of AWS Systems Manager. If you haven’t used Systems Manager yet, it’s a service that provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources.

With this new release, it just got even more powerful. We have added additional capabilities to AWS Systems Manager that enables you to build, run, and share automations with others on your team or inside your organisation — making managing your infrastructure more repeatable and less error-prone.

Inside the AWS Systems Manager console, on the navigation menu, there is an item called Automation. If I click this menu item, I will see the Execute automation button.

When I click on this I am asked what document I want to run. AWS provides a library of documents that I could choose from, however today, I am going to build my own so I will click on the Create document button.

This takes me to a new screen that allows me to create a document (sometimes referred to as an automation playbook) that, amongst other things, executes Python or PowerShell scripts.

The console gives me two options for editing a document: A YAML editor or the “Builder” tool that provides a guided, step-by-step user interface with the ability to include documentation for each workflow step.

So, let’s take a look by building and running a simple automation. When I create a document using the Builder tool, the first thing required is a document name.

Next, I need to provide a description. As you can see below, I’m able to use Markdown to format the description. The description is an excellent opportunity to describe what your document does, this is valuable since most users will want to share these documents with others on their team and build a library of documents to solve everyday problems.

Optionally, I am asked to provide parameters for my document. These parameters can be used in all of the scripts that you will create later. In my example, I have created three parameters: imageId, tagValue, and instanceType. When I come to execute this document, I will have the opportunity to provide values for these parameters that will override any defaults that I set.

When someone executes my document, the scripts that are executed will interact with AWS services. A document runs with the user permissions for most of its actions along with the option of providing an Assume Role. However, for documents with the Run a Script action, the role is required when the script is calling any AWS API.

You can set the Assume role globally in the builder tool; however, I like to add a parameter called assumeRole to my document, this gives anyone that is executing it the ability to provide a different one.

You then wire this parameter up to the global Assume role by using the {{assumeRole}} syntax in the Assume role property textbox (I have called my parameter assumeRole, but you could call it what you like; just make sure that the name you give the parameter is what you put in the double parentheses syntax, e.g. {{yourParamName}}).

Once my document is set up, I then need to create the first step of my document. Your document can contain one or more steps, and you can create sophisticated workflows with branching, for example, based on a parameter or the failure of a step. Still, in this example, I am going to create three steps that execute one after another. Again, you need to give the step a name and a description, and this description can also include markdown. You need to select an Action Type; for this example, I will choose Run a script.

With the ‘Run a script’ action type, I get to run a script in Python or PowerShell without requiring any infrastructure to run the script. It’s important to realise that this script will not be running on one of your EC2 instances; the scripts run in a managed compute environment. You can configure an Amazon CloudWatch log group on the preferences page to send outputs to a CloudWatch log group of your choice.

In this demo, I write some Python that creates an EC2 instance. You will notice that this script is using the AWS SDK for Python. I create an instance based upon an image_id, tag_value, and instance_type that are passed in as parameters to the script.

To pass parameters into the script, in the Additional Inputs section, I select InputPayload as the input type. I then use a particular YAML format in the Input Value text box to wire up the global parameters to the parameters that I am going to use in the script. You will notice that again I have used the double parentheses syntax to reference the global parameters e.g. {{imageId}}

In the Outputs section, I also wire up an output parameter that can be used by subsequent steps.
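
A minimal sketch of that first script might look like this (the parameter names, tag key, and return shape are assumptions based on the description):

def launch_instance(events, context):
    import boto3

    ec2 = boto3.client('ec2')

    # Parameters wired in from the document via the InputPayload mapping
    image_id = events['image_id']
    tag_value = events['tag_value']
    instance_type = events['instance_type']

    res = ec2.run_instances(
        ImageId=image_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            'ResourceType': 'instance',
            'Tags': [{'Key': 'LaunchedBySsmAutomation', 'Value': tag_value}]
        }]
    )

    instance_id = res['Instances'][0]['InstanceId']
    return {'InstanceId': instance_id}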

Next, I will add a second step to my document. This time, I will poll the instance to see if its status has switched to ok. The exciting thing about this code is that the InstanceId is passed into the script from a previous step. This is an example of how the execution steps can be chained together to use the outputs of earlier steps.

def poll_instance(events, context):
    import boto3
    import time

    ec2 = boto3.client('ec2')

    instance_id = events['InstanceId']

    print('[INFO] Waiting for instance to enter Status: Ok', instance_id)

    instance_status = "null"

    while True:
        res = ec2.describe_instance_status(InstanceIds=[instance_id])

        if len(res['InstanceStatuses']) == 0:
            print("Instance Status Info is not available yet")
            time.sleep(5)
            continue

        instance_status = res['InstanceStatuses'][0]['InstanceStatus']['Status']

        print('[INFO] Polling status of the instance', instance_status)

        if instance_status == 'ok':
            break

        time.sleep(10)

    return {'Status': instance_status, 'InstanceId': instance_id}

To pass the parameters into the second step, notice that I use the double parentheses syntax to reference the output of a previous step. The value in the Input value textbox {{launchEc2Instance.payload}} is the name of the step launchEc2Instance and then the name of the output parameter payload.

Lastly, I will add a final step. This step will run a PowerShell script and use the AWS Tools for PowerShell. I’ve added this step purely to show that you can use PowerShell as an alternative to Python.

You will note on the first line that I have to install the AWSPowerShell.NetCore module and use the -Force switch before I can start interacting with AWS services.

All this step does is take the InstanceId output from the LaunchEc2Instance step and use it to return the InstanceType of the EC2 instance.

It’s important to note that I have to pass the parameters from LaunchEc2Instance step to this step by configuring the Additional inputs in the same way I did earlier.

Now that our document is created, we can execute it. I go to the Actions & Change section of the menu and select Automation. From this screen, I click on the Execute automation button. I then get to choose the document I want to execute. Since this is a document I created, I can find it on the Owned by me tab.

If I click the LaunchInstance document that I created earlier, I get a document details screen that shows me the description I added. This nicely formatted description allows me to generate documentation for my document and enable others to understand what it is trying to achieve.

When I click Next, I am asked to provide any Input parameters for my document. I add the imageId and ARN for the role that I want to use when executing this automation. It’s important to remember that this role will need to have permissions to call any of the services that are requested by the scripts. In my example, that means it needs to be able to create EC2 instances.

Once the document executes, I am taken to a screen that shows the steps of the document and gives me details about how long each step took and respective success or failure of each step. I can also drill down into each step and examine the logs. As you can see, all three steps of my document completed successfully, and if I go to the Amazon Elastic Compute Cloud (EC2) console, I will now have an EC2 instance that I created with tag LaunchedBySsmAutomation.

These new features can be found today in all regions inside the AWS Systems Manager console so you can start using them straight away.

Happy Automating!

— Martin;

Code a Phoenix-style mothership battle | Wireframe #26

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-a-phoenix-style-mothership-battle-wireframe-26/

It was one of gaming’s first boss battles. Mark Vanstone shows you how to recreate the mothership from the 1980 arcade game, Phoenix.

Phoenix’s fifth stage offered a unique challenge in 1980: one of gaming’s first-ever boss battles.

First released in 1980, Phoenix was something of an arcade pioneer. The game was the kind of post-Space Invaders fixed-screen shooter that was ubiquitous at the time: players moved their ship from side to side, shooting at a variety of alien birds of different sizes and attack patterns. The enemies moved swiftly, and the player’s only defence was a temporary shield which could be activated when the birds swooped and strafed the lone defender. But besides all that, Phoenix had a few new ideas of its own: not only did it offer five distinct stages, but it also featured one of the earliest examples of a boss battle – its heavily armoured alien mothership, which required accurate shots to its shields before its weak spot could be exposed.

To recreate Phoenix’s boss, all we need is Pygame Zero. We can get a portrait style window with the WIDTH and HEIGHT variables and throw in some parallax stars (an improvement on the original’s static backdrop) with some blitting in the draw() function. The parallax effect is created by having a static background of stars with a second (repeated) layer of stars moving down the screen.

The mothership itself is made up of several Actor objects which move together down the screen towards the player’s spacecraft, which can be moved right and left using the mouse. There’s the main body of the mothership, in the centre is the alien that we want to shoot, and then we have two sets of moving shields.

Like the original Phoenix, our mothership boss battle has multiple shields that need to be taken out to expose the alien at the core.

In this example, rather than have all the graphics dimensions in multiples of eight (as we always did in the old days), we will make all our shield blocks 20 by 20 pixels, because computers simply don’t need to work in multiples of eight any more. The first set of shields is the purple rotating bar around the middle of the ship. This is made up of 14 Actor blocks which shift one place to the right each time they move. Every other block has a couple of portal windows which makes the rotation obvious, and when a block moves off the right-hand side, it is placed on the far left of the bar.
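
A sketch of that rotation might look like this (the bar’s left edge is an assumption):

BAR_LEFT, BLOCK = 80, 20   # left edge of the bar and block width, assumed

def update_shield_bar():
    for block in shield_bar:                    # 14 Actor blocks
        block.x += BLOCK                        # shift one place to the right
        if block.x >= BAR_LEFT + BLOCK * len(shield_bar):
            block.x -= BLOCK * len(shield_bar)  # wrap to the far left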

The second set of shields are in three yellow rows (you may want to add more), the first with 14 blocks, the second with ten blocks, and the last with four. These shield blocks are fixed in place but share a behaviour with the purple bar shields, in that when they are hit by a bullet, they change to a damaged version. There are four levels of damage before they are destroyed and the bullets can pass through. When enough shields have been destroyed for a bullet to reach the alien, the mothership is destroyed (in this version, the alien flashes).

Bullets can be fired by clicking the mouse button. Again, the original game had alien birds flying around the mothership and dive-bombing the player, making it harder to get a good shot in, but this is something you could try adding to the code yourself.

To really bring home that eighties Phoenix arcade experience, you could also add in some atmospheric shooting effects and, to round the whole thing off, have an 8-bit rendition of Beethoven’s Für Elise playing in the background.

Here’s Mark’s code, which gets a simple mothership battle running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 26

You can read more features like this one in Wireframe issue 26, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 26 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code a Phoenix-style mothership battle | Wireframe #26 appeared first on Raspberry Pi.

Make a Columns-style tile-matching game | Wireframe #25

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/make-a-columns-style-tile-matching-game-wireframe-25/

Raspberry Pi’s own Rik Cross shows you how to code your own Columns-style tile-matching puzzle game in Python and Pygame Zero.

Created by Hewlett-Packard engineer Jay Geertsen, Columns was Sega’s sparkly rival to Nintendo’s all-conquering Tetris.

Columns and tile-matching

Tile-matching games began with Tetris in 1984 and the less famous Chain Shot! the following year. The genre gradually evolved through games like Dr. Mario, Columns, Puyo Puyo, and Candy Crush Saga. Although their mechanics differ, the goals are the same: to organise a board of different-coloured tiles by moving them around until they match.

Here, I’ll show how you can create a simple tile-matching game using Python and Pygame Zero. In it, any tile can be swapped with the tile to its right, the aim being to make matches of three or more tiles of the same colour. Making a match causes those tiles to disappear from the board, with the tiles above dropping down to fill the gaps.

At the start of a new game, a board of randomly generated tiles is created. This is made as an (initially empty) two-dimensional array, whose size is determined by the values of rows and columns. A specific tile on the board is referenced by its row and column number.

We want to start with a truly random board, but we also want to avoid having any matching tiles. Random tiles are therefore added to each board position, but a tile is replaced if it’s the same as the one above it or to its left (where such a tile exists).
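A minimal sketch of that board-building step in plain Python (the colour names are just placeholders):

import random

ROWS, COLUMNS = 12, 8
COLOURS = ['red', 'green', 'blue', 'yellow']

def new_board():
    board = [[None] * COLUMNS for _ in range(ROWS)]
    for r in range(ROWS):
        for c in range(COLUMNS):
            tile = random.choice(COLOURS)
            # re-roll while the tile matches the one above or to its left
            while (r > 0 and board[r - 1][c] == tile) or \
                  (c > 0 and board[r][c - 1] == tile):
                tile = random.choice(COLOURS)
            board[r][c] = tile
    return board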

Our board consists of 12 rows and 8 columns of tiles. Pressing SPACE will swap the two selected tiles (outlined in white) and, in this case, create a vertical match of red tiles.

In our game, two tiles are ‘selected’ at any one time, with the player pressing the arrow keys to change those tiles. A selected variable keeps track of the row and column of the left-most selected tile, with the other tile being one column to the right of the left-most tile. Pressing SPACE swaps the two selected tiles, checks for matches, clears any matched tiles, and fills any gaps with new tiles.
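Something like this handles the selection and the swap (the clamping at the board edges is our own choice; Rik’s code may differ):

board = new_board()
selected = [0, 0]   # row and column of the left-most selected tile

def on_key_down(key):
    if key == keys.LEFT:
        selected[1] = max(0, selected[1] - 1)
    elif key == keys.RIGHT:
        selected[1] = min(COLUMNS - 2, selected[1] + 1)   # leave room for the pair
    elif key == keys.UP:
        selected[0] = max(0, selected[0] - 1)
    elif key == keys.DOWN:
        selected[0] = min(ROWS - 1, selected[0] + 1)
    elif key == keys.SPACE:
        r, c = selected
        board[r][c], board[r][c + 1] = board[r][c + 1], board[r][c]
        # ...then find matches, clear them, and refill (sketched below)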

A basic ‘match-three’ algorithm would simply check whether any tiles on the board have a matching colour tile on either side, horizontally or vertically. I’ve opted for something a little more involved, though, as it allows us to check for matches of any length, as well as track multiple, separate matches. A currentmatch list keeps track of the positions of a run of matching tiles: whenever this list is empty, the next tile to check is added to it, and the process repeats until the next tile is a different colour.

If the currentmatch list contains three or more tiles at this point, then the list is added to the overall matches list (a list of lists of matches!) and the currentmatch list is reset. To clear matched tiles, the matched tile positions are set to None, which indicates the absence of a tile at that position. To fill the board, tiles in each column are moved down by one row whenever an empty board position is found, with a new tile being added to the top row of the board.
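Here’s one way the horizontal pass and the clear-and-fill step might look (a sketch, not Rik’s exact code; the vertical pass is the same with rows and columns swapped):

def row_matches(board):
    matches = []
    for r in range(ROWS):
        currentmatch = []
        for c in range(COLUMNS):
            # a different colour ends the current run
            if currentmatch and board[r][c] != board[r][currentmatch[-1][1]]:
                if len(currentmatch) >= 3:
                    matches.append(currentmatch)
                currentmatch = []
            currentmatch.append((r, c))
        if len(currentmatch) >= 3:
            matches.append(currentmatch)
    return matches

def clear_and_fill(board, matches):
    for match in matches:
        for r, c in match:
            board[r][c] = None   # clear the matched tiles
    for c in range(COLUMNS):
        # keep the surviving tiles in order, then top up with new ones above
        column = [board[r][c] for r in range(ROWS) if board[r][c] is not None]
        new = [random.choice(COLOURS) for _ in range(ROWS - len(column))]
        for r in range(ROWS):
            board[r][c] = (new + column)[r]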

The code provided here is just a starting point, and there are lots of ways to develop the game, including adding a scoring system and animation to liven up your tiles.

Here’s Rik’s code, which gets a simple tile-match game running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 25

You can read more features like this one in Wireframe issue 25, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 25 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Make a Columns-style tile-matching game | Wireframe #25 appeared first on Raspberry Pi.