Tag Archives: How-to

Code a Boulder Dash mining game | Wireframe #30

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-a-boulder-dash-mining-game-wireframe-30/

Learn how to code a simple Boulder Dash homage in Python and Pygame. Mark Vanstone shows you how. 

The original Boulder Dash was marked out by some devious level design, which threatened to squash the player at every turn.

Boulder Dash

Boulder Dash first appeared in 1984 for the Commodore 64, Apple II, and the Atari 400/800. It featured an energetic gem collector called Rockford who, thanks to some rather low-resolution graphics, looked a bit like an alien. His mission was to tunnel his way through a series of caves to find gems while avoiding falling rocks dislodged by his digging. Deadly creatures also inhabited the caves which, if destroyed by dropping rocks on them, turned into gems for Rockford to collect.

The ingenious level designs were what made Boulder Dash so addictive. Gems had to be collected within a time limit to unlock the exit, but some were positioned in places that would need planning to get to, often using the physics of falling boulders to block or clear areas. Of course, the puzzles got increasingly tough as the levels progressed.

Written by Peter Liepa and Chris Gray, Boulder Dash was published by First Star Software, which still puts out new versions of the game to this day. Due to its original success, Boulder Dash was ported to all kinds of platforms, and the years since have seen no fewer than 20 new iterations of Boulder Dash, and a fair few clones, too.

Our homage to Boulder Dash running in Pygame Zero. Dig through the caves to find gems – while avoiding death from above.

Making Boulder Dash in Python

We’re going to have a look at the boulder physics aspect of the game, and make a simple level where Rockford can dig out some gems and hopefully not get flattened under an avalanche of rocks. Writing our code in Pygame Zero, we’ll automatically create an 800 by 600-size window to work with. We can make our game screen by defining a two-dimensional list, which, in this case, we will fill with soil squares and randomly position the rocks and gems.

Each location in the list matrix will have a name: wall for the outside boundary, soil for the diggable stuff, rock for a round, moveable boulder, gem for a collectable item, and finally rockford to symbolise our hero. We can also define an Actor for Rockford, as this will make things like switching images and tracking other properties easier.
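
Before looking at Mark’s full listing, here’s a minimal sketch of how such a grid might be built when run under Pygame Zero (the grid dimensions, image names, and random counts here are illustrative, not taken from the actual code):

import random

COLS, ROWS = 20, 15

def make_grid():
    grid = []
    for y in range(ROWS):
        row = []
        for x in range(COLS):
            if x in (0, COLS - 1) or y in (0, ROWS - 1):
                row.append("wall")   # outside boundary
            else:
                row.append("soil")   # diggable stuff
        grid.append(row)
    # scatter a few rocks and gems over the soil
    for _ in range(30):
        grid[random.randint(1, ROWS - 2)][random.randint(1, COLS - 2)] = "rock"
    for _ in range(10):
        grid[random.randint(1, ROWS - 2)][random.randint(1, COLS - 2)] = "gem"
    return grid

game_map = make_grid()
rockford = Actor("rockford", anchor=("left", "top"))   # our hero as a Pygame Zero Actor
game_map[ROWS - 2][1] = "rockford"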

Here’s Mark’s code, which gets an homage to Boulder Dash running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Our draw() function is just a nested loop to iterate through the list matrix and blit to the screen whatever is indicated in each square. The Rockford Actor is then drawn over the top. We can also keep a count of how many gems have been collected and provide a congratulatory message if all of them are found. In the update() function, there are only two things we really need to worry about: the first being to check for keypresses from the player and move Rockford accordingly, and the second to check rocks to see if they need to move.
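
As a rough illustration of the draw() half of that (assuming the game_map list and rockford Actor sketched above, 40-pixel squares, and collected/total_gems counters – all placeholder names rather than Mark’s exact ones):

def draw():
    screen.clear()
    for y in range(ROWS):
        for x in range(COLS):
            square = game_map[y][x]
            if square != "rockford":
                screen.blit(square, (x * 40, y * 40))   # one image per square name
    rockford.draw()                                      # hero drawn over the top
    if collected == total_gems:
        screen.draw.text("You found all the gems!", center=(400, 300), fontsize=40)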

Rockford is quite easy to test for movement, as he can only move onto an empty square – a soil square or a gem square. It’s also possible for him to push a boulder if there’s an empty space on the other side. For the boulders, we need to first test if there’s an empty space below it, and if so, the boulder must move downwards. We also test to see if a boulder is on top of another boulder – if it is, the top boulder can roll off and down onto a space either to the left or the right of the one beneath.
There’s not much to add to this snippet of code to turn it into a playable game of Boulder Dash. See if you can add a timer, some monsters, and, of course, some puzzles for players to solve on each level.

Testing for movement

An important thing to notice about scanning through the list matrix to test for boulder movement is that we need to read the list from the bottom upwards. Because boulders move downwards, reading the matrix from the top down could move a boulder onto the row below and then, when that row is scanned, find the same boulder again and move it a second time.
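
A sketch of that bottom-up pass, called from update(), might look like this (using "blank" for an empty square left behind by digging; names again follow the illustrative grid above rather than Mark’s code):

def update_rocks():
    for y in range(ROWS - 2, 0, -1):          # bottom row first, working upwards
        for x in range(1, COLS - 1):
            if game_map[y][x] != "rock":
                continue
            below = game_map[y + 1][x]
            if below == "blank":
                game_map[y + 1][x] = "rock"   # fall straight down
                game_map[y][x] = "blank"
            elif below == "rock":
                # roll off the boulder beneath, left or right, if there's room
                for dx in (-1, 1):
                    if game_map[y][x + dx] == "blank" and game_map[y + 1][x + dx] == "blank":
                        game_map[y + 1][x + dx] = "rock"
                        game_map[y][x] = "blank"
                        break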

Get your copy of Wireframe issue 30

You can read more features like this one in Wireframe issue 30, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 30 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code a Boulder Dash mining game | Wireframe #30 appeared first on Raspberry Pi.

Building an AWS IoT Core device using AWS Serverless and an ESP32

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-an-aws-iot-core-device-using-aws-serverless-and-an-esp32/

Using a simple Arduino sketch, an AWS Serverless Application Repository application, and a microcontroller, you can build a basic serverless workflow for communicating with an AWS IoT Core device.

A microcontroller is a programmable chip and acts as the brain of an electronic device. It has input and output pins for reading and writing on digital or analog components. Those components could be sensors, relays, actuators, or various other devices. It can be used to build remote sensors, home automation products, robots, and much more. The ESP32 is a powerful low-cost microcontroller with Wi-Fi and Bluetooth built in, and is used in this walkthrough.

The Arduino IDE, a lightweight development environment for hardware, now includes support for the ESP32. There is a large collection of community and officially supported libraries, from addressable LED strips to spectral light analysis.

The following walkthrough demonstrates connecting an ESP32 to AWS IoT Core to allow it to publish and subscribe to topics. This means that the device can send any arbitrary information, such as sensor values, into AWS IoT Core while also being able to receive commands.

Solution overview

This post walks through deploying an application from the AWS Serverless Application Repository. This allows an AWS IoT device to be messaged using a REST endpoint powered by Amazon API Gateway and AWS Lambda. The AWS SAR application also configures an AWS IoT rule that forwards any messages published by the device to a Lambda function that updates an Amazon DynamoDB table, demonstrating basic bidirectional communication.

The last section explores how to build an IoT project with real-world application. By connecting a thermal printer module and modifying a few lines of code in the example firmware, the ESP32 device becomes an AWS IoT–connected printer.

All of this can be accomplished within the AWS Free Tier. An AWS account is necessary for the following instructions.

An example of an AWS IoT project using an ESP32, AWS IoT Core, and an Arduino thermal printer.

Required steps

To complete the walkthrough, follow these steps:

  • Create an AWS IoT device.
  • Install and configure the Arduino IDE.
  • Configure and flash an ESP32 IoT device.
  • Deploy the lambda-iot-rule AWS SAR application.
  • Monitor and test.
  • Create an IoT thermal printer.

Creating an AWS IoT device

To communicate with the ESP32 device, it must connect to AWS IoT Core with device credentials. You must also specify the topics it has permission to publish and subscribe to.

  1. In the AWS IoT console, choose Register a new thing, Create a single thing.
  2. Name the new thing. Use this exact name later when configuring the ESP32 IoT device. Leave the remaining fields set to their defaults. Choose Next.
  3.  Choose Create certificate. Only the thing cert, private key, and Amazon Root CA 1 downloads are necessary for the ESP32 to connect. Download and save them somewhere secure, as they are used when programming the ESP32 device.
  4. Choose Activate, Attach a policy.
  5. Skip adding a policy, and choose Register Thing.
  6. In the AWS IoT console side menu, choose Secure, Policies, Create a policy.
  7. Name the policy Esp32Policy. Choose the Advanced tab.
  8. Paste in the following policy template.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "iot:Connect",
          "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:client/THINGNAME"
        },
        {
          "Effect": "Allow",
          "Action": "iot:Subscribe",
          "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:topicfilter/esp32/sub"
        },
    	{
          "Effect": "Allow",
          "Action": "iot:Receive",
          "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:topic/esp32/sub"
        },
        {
          "Effect": "Allow",
          "Action": "iot:Publish",
          "Resource": "arn:aws:iot:REGION:ACCOUNT_ID:topic/esp32/pub"
        }
      ]
    }
  9. Replace REGION with the matching AWS Region you’re currently operating in. This can be found on the top right corner of the AWS console window.
  10.  Replace ACCOUNT_ID with your own, which can be found in Account Settings.
  11. Replace THINGNAME with the name of your device.
  12. Choose Create.
  13. In the AWS IoT console, choose Secure, Certificates. Select the one created for your device and choose Actions, Attach policy.
  14. Choose Esp32Policy, Attach.

Your AWS IoT device is now configured to have permission to connect to AWS IoT Core. It can also publish to the topic esp32/pub and subscribe to the topic esp32/sub. For more information on securing devices, see AWS IoT Policies.

Installing and configuring the Arduino IDE

The Arduino IDE is an open-source development environment for programming microcontrollers. It supports a continuously growing number of platforms including most ESP32-based modules. It must be installed along with the ESP32 board definitions, MQTT library, and ArduinoJson library.

  1. Download the Arduino installer for the desired operating system.
  2. Start Arduino and open the Preferences window.
  3. For Additional Board Manager URLs, add
    https://dl.espressif.com/dl/package_esp32_index.json.
  4. Choose Tools, Board, Boards Manager.
  5. Search esp32 and install the latest version.
  6. Choose Sketch, Include Library, Manage Libraries.
  7. Search MQTT, and install the latest version by Joel Gaehwiler.
  8. Repeat the library installation process for ArduinoJson.

The Arduino IDE is now installed and configured with all the board definitions and libraries needed for this walkthrough.

Configuring and flashing an ESP32 IoT device

A collection of various ESP32 development boards.

For this section, you need an ESP32 device. To check if your board is compatible with the Arduino IDE, see the boards.txt file. The following code connects to AWS IoT Core securely using MQTT, a publish and subscribe messaging protocol.

This project has been tested on the following devices:

  1. Install the required serial drivers for your device. Some boards use different USB/FTDI chips for interfacing. Here are the most commonly used with links to drivers.
  2. Open the Arduino IDE and choose File, New to create a new sketch.
  3. Add a new tab and name it secrets.h.
  4. Paste the following into the secrets file.
    #include <pgmspace.h>
    
    #define SECRET
    #define THINGNAME ""
    
    const char WIFI_SSID[] = "";
    const char WIFI_PASSWORD[] = "";
    const char AWS_IOT_ENDPOINT[] = "xxxxx.amazonaws.com";
    
    // Amazon Root CA 1
    static const char AWS_CERT_CA[] PROGMEM = R"EOF(
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
    )EOF";
    
    // Device Certificate
    static const char AWS_CERT_CRT[] PROGMEM = R"KEY(
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
    )KEY";
    
    // Device Private Key
    static const char AWS_CERT_PRIVATE[] PROGMEM = R"KEY(
    -----BEGIN RSA PRIVATE KEY-----
    -----END RSA PRIVATE KEY-----
    )KEY";
  5. Enter the name of your AWS IoT thing, as it is in the console, in the field THINGNAME.
  6. To connect to Wi-Fi, add the SSID and PASSWORD of the desired network. Note: The network name should not include spaces or special characters.
  7. The AWS_IOT_ENDPOINT can be found from the Settings page in the AWS IoT console.
  8. Copy the Amazon Root CA 1, Device Certificate, and Device Private Key to their respective locations in the secrets.h file.
  9. Choose the tab for the main sketch file, and paste the following.
    #include "secrets.h"
    #include <WiFiClientSecure.h>
    #include <MQTTClient.h>
    #include <ArduinoJson.h>
    #include "WiFi.h"
    
    // The MQTT topics that this device should publish/subscribe
    #define AWS_IOT_PUBLISH_TOPIC   "esp32/pub"
    #define AWS_IOT_SUBSCRIBE_TOPIC "esp32/sub"
    
    WiFiClientSecure net = WiFiClientSecure();
    MQTTClient client = MQTTClient(256);
    
    void connectAWS()
    {
      WiFi.mode(WIFI_STA);
      WiFi.begin(WIFI_SSID, WIFI_PASSWORD);
    
      Serial.println("Connecting to Wi-Fi");
    
      while (WiFi.status() != WL_CONNECTED){
        delay(500);
        Serial.print(".");
      }
    
      // Configure WiFiClientSecure to use the AWS IoT device credentials
      net.setCACert(AWS_CERT_CA);
      net.setCertificate(AWS_CERT_CRT);
      net.setPrivateKey(AWS_CERT_PRIVATE);
    
      // Connect to the MQTT broker on the AWS endpoint we defined earlier
      client.begin(AWS_IOT_ENDPOINT, 8883, net);
    
      // Create a message handler
      client.onMessage(messageHandler);
    
      Serial.print("Connecting to AWS IOT");
    
      while (!client.connect(THINGNAME)) {
        Serial.print(".");
        delay(100);
      }
    
      if(!client.connected()){
        Serial.println("AWS IoT Timeout!");
        return;
      }
    
      // Subscribe to a topic
      client.subscribe(AWS_IOT_SUBSCRIBE_TOPIC);
    
      Serial.println("AWS IoT Connected!");
    }
    
    void publishMessage()
    {
      StaticJsonDocument<200> doc;
      doc["time"] = millis();
      doc["sensor_a0"] = analogRead(0);
      char jsonBuffer[512];
      serializeJson(doc, jsonBuffer); // print to client
    
      client.publish(AWS_IOT_PUBLISH_TOPIC, jsonBuffer);
    }
    
    void messageHandler(String &topic, String &payload) {
      Serial.println("incoming: " + topic + " - " + payload);
    
    //  StaticJsonDocument<200> doc;
    //  deserializeJson(doc, payload);
    //  const char* message = doc["message"];
    }
    
    void setup() {
      Serial.begin(9600);
      connectAWS();
    }
    
    void loop() {
      publishMessage();
      client.loop();
      delay(1000);
    }
  10. Choose File, Save, and give your project a name.

Flashing the ESP32

  1. Plug the ESP32 board into a USB port on the computer running the Arduino IDE.
  2. Choose Tools, Board, and then select the matching type of ESP32 module. In this case, a Sparkfun ESP32 Thing was used.
  3. Choose Tools, Port, and then select the matching port for your device.
  4. Choose Upload. Arduino reads Done uploading when the upload is successful.
  5. Choose the magnifying lens icon to open the Serial Monitor. Set the baud rate to 9600.

Keep the Serial Monitor open. When connected to Wi-Fi and then AWS IoT Core, any messages received on the topic esp32/sub are logged to this console. The device is also now publishing to the topic esp32/pub.

The topics are set at the top of the sketch. When changing or adding topics, remember to add permissions in the device policy.

// The MQTT topics that this device should publish/subscribe
#define AWS_IOT_PUBLISH_TOPIC   "esp32/pub"
#define AWS_IOT_SUBSCRIBE_TOPIC "esp32/sub"

Within this sketch, the relevant functions are publishMessage() and messageHandler().

The publishMessage() function creates a JSON object with the current time in milliseconds and the analog value of pin A0 on the device. It then publishes this JSON object to the topic esp32/pub.

void publishMessage()
{
  StaticJsonDocument<200> doc;
  doc["time"] = millis();
  doc["sensor_a0"] = analogRead(0);
  char jsonBuffer[512];
  serializeJson(doc, jsonBuffer); // print to client

  client.publish(AWS_IOT_PUBLISH_TOPIC, jsonBuffer);
}

The messageHandler() function prints out the topic and payload of any message from a subscribed topic. To see all the ways to parse JSON messages in Arduino, see the deserializeJson() example.

void messageHandler(String &topic, String &payload) {
  Serial.println("incoming: " + topic + " - " + payload);

//  StaticJsonDocument<200> doc;
//  deserializeJson(doc, payload);
//  const char* message = doc["message"];
}

Additional topic subscriptions can be added within the connectAWS() function by adding another line similar to the following.

// Subscribe to a topic
  client.subscribe(AWS_IOT_SUBSCRIBE_TOPIC);

  Serial.println("AWS IoT Connected!");

Deploying the lambda-iot-rule AWS SAR application

Now that an ESP32 device has been connected to AWS IoT, the following steps walk through deploying an AWS Serverless Application Repository application. This is a base for building serverless integration with a physical device.

  1. On the lambda-iot-rule AWS Serverless Application Repository application page, make sure that the Region is the same as the AWS IoT device.
  2. Choose Deploy.
  3. Under Application settings, for PublishTopic, enter esp32/sub. This is the topic to which the ESP32 device is subscribed. It receives messages published to this topic. Likewise, set SubscribeTopic to esp32/pub, the topic on which the device publishes.
  4. Choose Deploy.
  5. When creation of the application is complete, choose Test app to navigate to the application page. Keep this page open for the next section.

Monitoring and testing

At this stage, two Lambda functions, a DynamoDB table, and an AWS IoT rule have been deployed. The IoT rule forwards messages on topic esp32/pub to TopicSubscriber, a Lambda function, which inserts the messages on to the DynamoDB table.

  1. On the application page, under Resources, choose MyTable. This is the DynamoDB table that the TopicSubscriber Lambda function updates.
  2. Choose Items. If the ESP32 device is still active and connected, messages that it has published appear here.

The TopicPublisher Lambda function is invoked by the API Gateway endpoint and publishes to the AWS IoT topic esp32/sub.

  1. On the application page, find the Application endpoint.
  2. To test that the TopicPublisher function is working, enter the following into a terminal or command-line utility, replacing ENDPOINT with the URL from above.

curl -d '{"text":"Hello world!"}' -H "Content-Type: application/json" -X POST https://ENDPOINT/publish

Upon success, the request returns a copy of the message.
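
If you’d rather script the test than use curl, here’s a rough equivalent in Python using the requests library (the endpoint URL below is a placeholder for your own Application endpoint):

import requests

# Placeholder: replace with the Application endpoint from the AWS SAR application page
ENDPOINT = "https://example.execute-api.us-east-1.amazonaws.com/Prod"

# Ask the TopicPublisher function to publish this payload to esp32/sub
response = requests.post(f"{ENDPOINT}/publish",
                         json={"text": "Hello world!"},
                         timeout=10)

print(response.status_code)
print(response.text)   # on success, a copy of the message is returned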

Back in the Serial Monitor, the message published to the topic esp32/sub prints out.

Creating an IoT thermal printer

With the completion of the previous steps, the ESP32 device currently logs incoming messages to the serial console.

The following steps demonstrate how the code can be modified to use incoming messages to interact with a peripheral component. This is done by wiring a thermal printer to the ESP32 in order to physically print messages. The REST endpoint from the previous section can be used as a webhook in third-party applications to interact with this device.

A wiring diagram depicting an ESP32 connected to a thermal printer.

  1. Follow the product instructions for powering, wiring, and installing the correct Arduino library.
  2. Ensure that the thermal printer is working by holding the power button on the printer while connecting the power. A sample receipt prints. On that receipt, the default baud rate is specified as either 9600 or 19200.
  3. In the Arduino code from earlier, include the following lines at the top of the main sketch file. The second line defines what interface the thermal printer is connected to. &Serial2 is used to set the third hardware serial interface on the ESP32. For this example, the pins on the Sparkfun ESP32 Thing, GPIO16/GPIO17, are used for RX/TX respectively.
    #include "Adafruit_Thermal.h"
    
    Adafruit_Thermal printer(&Serial2);
  4. Replace the setup() function with the following to initialize the printer on device bootup. Change the baud rate of Serial2.begin() to match what is specified in the test print. The default is 19200.
    void setup() {
      Serial.begin(9600);
    
      // Start the thermal printer
      Serial2.begin(19200);
      printer.begin();
      printer.setSize('S');
    
      connectAWS();
    }
    
  5. Replace the messageHandler() function with the following. On any incoming message, it parses the JSON and prints the message on the thermal printer.
    void messageHandler(String &topic, String &payload) {
      Serial.println("incoming: " + topic + " - " + payload);
    
      // deserialize json
      StaticJsonDocument<200> doc;
      deserializeJson(doc, payload);
      String message = doc["message"];
    
      // Print the message on the thermal printer
      printer.println(message);
      printer.feed(2);
    }
  6. Choose Upload.
  7. After the firmware has successfully uploaded, open the Serial Monitor to confirm that the board has connected to AWS IoT.
  8. Enter the following into a command-line utility, replacing ENDPOINT, as in the previous section.
    curl -d '{"message": "Hello World!"}' -H "Content-Type: application/json" -X POST https://ENDPOINT/publish

If successful, the device prints out the message “Hello World!” from the attached thermal printer. This is a fully serverless IoT printer that can be triggered remotely from a webhook. As an example, this can be used with GitHub Webhooks to print a physical readout of events.

Conclusion

Using a simple Arduino sketch, an AWS Serverless Application Repository application, and a microcontroller, this post demonstrated how to build a basic serverless workflow for communicating with a physical device. It also showed how to expand that into an IoT thermal printer with real-world applications.

With the use of AWS serverless, advanced compute and extensibility can be added to an IoT device, from machine learning to translation services and beyond. By using the Arduino programming environment, the vast collection of open-source libraries, projects, and code examples opens up a world of possibilities. The next step is to explore what can be done with an Arduino and the capabilities of AWS serverless. The sample Arduino code for this project and more can be found at this GitHub repository.

Create a turn-based combat system | Wireframe #28

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/create-a-turn-based-combat-system-wireframe-28/

Learn how to create the turn-based combat system found in games like Pokémon, Final Fantasy, and Undertale. Raspberry Pi’s Rik Cross shows you how.

With their emphasis on trading and collecting as well as turn-based combat, the Pokémon games helped bring RPG concepts to the masses.

In the late 1970s, high school student Richard Garriott made a little game called Akalabeth. Programmed in Applesoft BASIC, it helped set the template for the role-playing genre on computers. Even today, turn-based combat is still a common sight in games, with this autumn’s Pokémon Sword and Shield revolving around a battle system which sees opponents take turns to plan and execute attacks or defensive moves.

The turn-based combat system in this article is text-only, and works by allowing players to choose to defend against or attack their opponent in turn. The battle ends when only one player has some health remaining.

Each Player taking part in the battle is added to the static players list as it’s created. Players have a name, a health value (initially set to 100) and a Boolean defending value (initially set to False) to indicate whether a player is using their shield. Players also have an inputmethod attribute, which is the function used for getting player input for making various choices in the game. This function is passed to the object when created, and means that we can have human players that give their input through the keyboard, as well as computer players that make choices (in our case simply by making a random choice between the available options).
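
As a rough sketch (not Rik’s exact listing), the Player class might look something like this, with one input function for human players and one for computer players:

import random

class Player:
    # static list of every player taking part in the battle
    players = []

    def __init__(self, name, inputmethod):
        self.name = name
        self.health = 100          # initial health
        self.defending = False     # is the player using their shield?
        self.inputmethod = inputmethod
        Player.players.append(self)

def keyboard_choice(options):
    # human players choose via the keyboard
    choice = ""
    while choice not in options:
        choice = input("Choose one of " + ", ".join(options) + ": ")
    return choice

def computer_choice(options):
    # computer players simply pick at random
    return random.choice(options)

hero = Player("Hero", keyboard_choice)
goblin = Player("Goblin", computer_choice)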

Richard Garriott’s Akalabeth laid the groundwork for Ultima, and was one of the earliest CRPGs.

A base Action class specifies an action owner and an opponent, as well as an execute() method which has no effect on the game. Subclasses of the base class override this execute() method to specify the effect the action has on the owner and/or the opponent of the action. As a basic example, two actions have been created: Defend, which sets the owner’s defending attribute to True, and Attack, which sets the owner’s defending attribute to False, and lowers the opponent’s health by a random amount depending on whether or not they are defending.
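
A sketch of those action classes might look like this (the damage ranges are assumptions for illustration, not values taken from the article’s code):

import random

class Action:
    def __init__(self, owner, opponent):
        self.owner = owner
        self.opponent = opponent

    def execute(self):
        pass   # the base action has no effect on the game

class Defend(Action):
    def execute(self):
        self.owner.defending = True
        print(self.owner.name, "raises their shield")

class Attack(Action):
    def execute(self):
        self.owner.defending = False
        # a defending opponent takes less damage (ranges are illustrative)
        if self.opponent.defending:
            damage = random.randint(1, 10)
        else:
            damage = random.randint(10, 25)
        self.opponent.health = max(0, self.opponent.health - damage)
        print(self.owner.name, "hits", self.opponent.name, "for", damage)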

Players take turns to choose a single action to perform in the battle, starting with the human ‘Hero’ player. The choose_action() method is used to decide what to do next (in this case either attack or defend), as well as an opponent if the player has chosen to attack. A player can only be selected as an opponent if they have a health value greater than 0, and are therefore still in the game. This choose_action() method returns an Action, which is then executed using its execute() method. A few time.sleep() commands have also been thrown in here  to ramp up the suspense!

After each player has had their turn, a check is done to make sure that at least two players still have a health value greater than 0, and therefore that the battle can continue. If so, the static get_next_player() method finds the next player still in the game to take their turn in the battle; otherwise, the game ends and the winner is announced.
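
Putting those pieces together, the overall battle loop might look something like the following sketch (it reuses the Player and Action classes sketched above and simplifies opponent selection to a random pick; it isn’t the article’s actual listing):

import random
import time

def get_next_player(current):
    # find the next player after 'current' who still has health remaining
    players = Player.players
    index = players.index(current)
    for offset in range(1, len(players) + 1):
        candidate = players[(index + offset) % len(players)]
        if candidate.health > 0:
            return candidate

def choose_action(player):
    # ask the player's inputmethod whether to attack or defend
    if player.inputmethod(["attack", "defend"]) == "defend":
        return Defend(player, None)
    opponents = [p for p in Player.players if p is not player and p.health > 0]
    return Attack(player, random.choice(opponents))

current = Player.players[0]          # the human 'Hero' player goes first
while True:
    choose_action(current).execute()
    time.sleep(1)                    # a short pause to ramp up the suspense

    alive = [p for p in Player.players if p.health > 0]
    if len(alive) < 2:
        print(alive[0].name, "wins the battle!")
        break
    current = get_next_player(current)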

Our example battle can be easily extended in lots of interesting ways. The AI for choosing an action could also be made more sophisticated, by looking at opponents’ health or defending attributes before choosing an action. You could also give each action a ‘cost’, and give players a number of action ‘points’ per turn. Chosen actions would be added to a list, until all of the points have been used. These actions would then be executed one after the other, before moving on to the next player’s turn.

Here’s Rik’s code, which gets a simple turn-based combat system running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

You can read more features like this one in Wireframe issue 28, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 28 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Create a turn-based combat system | Wireframe #28 appeared first on Raspberry Pi.

Decoupled Serverless Scheduler To Run HPC Applications At Scale on EC2

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/decoupled-serverless-scheduler-to-run-hpc-applications-at-scale-on-ec2/

This post was written by Ludvig Nordstrom and Mark Duffield on November 27, 2019.

In this blog post, we dive into a cloud native approach for running HPC applications at scale on EC2 Spot Instances, using a decoupled serverless scheduler. This architecture is ideal for many workloads in the HPC and EDA industries, and can be used for any batch job workload.

At the end of this blog post, you will have two takeaways.

  1. A highly scalable environment that can run on hundreds of thousands of cores across EC2 Spot Instances.
  2. A fully serverless architecture for job orchestration.

We discuss deploying and running a pre-built serverless job scheduler that can run both Windows and Linux applications using any executable file format for your application. This environment provides high performance, scalability, cost efficiency, and fault tolerance. We introduce best practices and benefits of creating this environment, and cover the architecture, running jobs, and integration into existing environments.

A quick note about the term cloud native: we use the term loosely in this blog. Here, cloud native means we use AWS services (including serverless and microservices) to build out our compute environment, instead of a traditional lift-and-shift method.

Let’s get started!

 

Solution overview

This blog goes over the deployment process, which leverages AWS CloudFormation. This allows you to use infrastructure as code to automatically build out your environment. There are two parts to the solution: the Serverless Scheduler and Resource Automation. Below are quick summaries of each part of the solution.

Part 1 – The serverless scheduler

This first part of the blog builds out a serverless workflow to get jobs from SQS and run them across EC2 instances. The CloudFormation template being used for Part 1 is serverless-scheduler-app.template, and here is the Reference Architecture:

 

    Figure 1: Serverless Scheduler Reference Architecture (grayed-out area is covered in Part 2).

Read the GitHub repo if you want to look at the Step Functions workflow shown in the preceding image. The walkthrough explains how the serverless application retrieves and runs jobs on its worker, updates the DynamoDB job monitoring table, and manages the worker for its lifetime.

 

Part 2 – Resource automation with serverless scheduler


This part of the solution relies on the serverless scheduler built in Part 1 to run jobs on EC2. Part 2 simplifies submitting and monitoring jobs, and retrieving results for users. Jobs are spread across our cost-optimized Spot Instances. Auto Scaling automatically scales up the compute resources when jobs are submitted, then terminates them when jobs are finished. Both of these save you money.

The CloudFormation template used in Part 2 is resource-automation.template. Building on Figure 1, the additional resources launched with Part 2 are noted in the following image: an S3 bucket, an Auto Scaling group, and two Lambda functions.

Figure 2: Resource Automation using Serverless Scheduler

Introduction to decoupled serverless scheduling

HPC schedulers traditionally run in a classic master and worker node configuration. A scheduler on the master node orchestrates jobs on worker nodes. This design has been successful for decades; however, many powerful schedulers are evolving to meet the demands of HPC workloads. This scheduler design evolved from a necessity to run orchestration logic on one machine, but there are now options to decouple this logic.

What are the possible benefits that decoupling this logic could bring? First, we avoid a number of shortfalls in the environment such as the need for all worker nodes to communicate with a single master node. This single source of communication limits scalability and creates a single point of failure. When we split the scheduler into decoupled components both these issues disappear.

Second, in an effort to work around these pain points, traditional schedulers had to create extremely complex logic to manage all workers concurrently in a single application. This stifled the ability to customize and improve the code – restricting changes to be made by the software provider’s engineering teams.

Serverless services, such as AWS Step Functions and AWS Lambda, fix these major issues. They allow you to decouple the scheduling logic to have a one-to-one mapping with each worker, with the workers instead sharing an Amazon Simple Queue Service (SQS) job queue. We define our scheduling workflow in AWS Step Functions. Then the workflow scales out to potentially thousands of “state machines.” These state machines act as wrappers around each worker node and manage each worker node individually. Our code is less complex because we only consider one worker and its job.

We illustrate the differences between a traditional shared scheduler and decoupled serverless scheduler in Figures 3 and 4.

 

Figure 3: Traditional Scheduler Model

 

Figure 4: Decoupled Serverless Scheduler on each instance

 

Each decoupled serverless scheduler will:

  • Retrieve and pass jobs to its worker
  • Monitor its worker's health and take action if needed
  • Confirm job success by checking output logs and retry jobs if needed
  • Terminate the worker when the job queue is empty, just before also terminating itself

With this new scheduler model, there are many benefits. Decoupling schedulers into smaller schedulers increases fault tolerance because any issue only affects one worker. Additionally, each scheduler consists of independent AWS Lambda functions, which maintains the state on separate hardware and builds retry logic into the service.  Scalability also increases, because jobs are not dependent on a master node, which enables the geographic distribution of jobs. This geographic distribution allows you to optimize use of low-cost Spot Instances. Also, when decoupling the scheduler, workflow complexity decreases and you can customize scheduler logic. You can leverage lower latency job monitoring and customize automated responses to job events as they happen.

 

Benefits

  • Fully managed – With the Part 2 Resource Automation deployed, resources for a job are managed for you. When a job is submitted, resources launch and run the job. When the job is done, worker nodes automatically shut down. This prevents you from incurring continuous costs.

  • Performance – Your application runs on EC2, which means you can choose any of the high performance instance types. Input files are automatically copied from Amazon S3 into the local Amazon EC2 instance store for high performance storage during execution. Result files are automatically moved to S3 after each job finishes.

  • Scalability – A worker node combined with a scheduler state machine becomes a stateless entity. You can spin up as many of these entities as you want, and point them to an SQS queue. You can even distribute worker and state machine pairs across multiple AWS Regions. These two components, paired with fully managed services, optimize your architecture for scalability to meet your desired number of workers.

  • Fault Tolerance – The solution is completely decoupled, which means each worker has its own state machine that handles scheduling for that worker. Likewise, each state machine is decoupled into the Lambda functions that make up your state machine. Additionally, the scheduler workflow includes a Lambda function that confirms each successful job or resubmits jobs.

  • Cost Efficiency – This fault tolerant environment is perfect for EC2 Spot Instances. This means you can save up to 90% on your workloads compared to On-Demand Instance pricing. The scheduler workflow ensures little to no idle time for workers by closely monitoring and sending new jobs as jobs finish. Because the scheduler is serverless, you only incur costs for the resources required to launch and run jobs. Once a job is complete, all of its resources are terminated automatically.

  • Agility – You can use AWS fully managed developer tools to quickly release changes and customize workflows. The reduced complexity of a decoupled scheduling workflow means that you don’t have to spend time managing a scheduling environment, and can instead focus on your applications.

Part 1 – serverless scheduler as a standalone solution

 

If you use the serverless scheduler as a standalone solution, you can build clusters and leverage shared storage such as FSx for Lustre, EFS, or S3. Additionally, you can use AWS CloudFormation to deploy more complex compute architectures that suit your application. So, the EC2 instances that run the serverless scheduler can be launched in any number of ways. The scheduler only requires the instance ID and the SQS job queue name.

 

Submitting Jobs Directly to serverless scheduler

The serverless scheduler app is a fully built AWS Step Functions workflow that pulls jobs from an SQS queue and runs them on an EC2 instance. The jobs submitted to SQS consist of an AWS Systems Manager Run Command, and work with any SSM document and command that you choose for your jobs. Examples of SSM Run Command documents are shell scripts and PowerShell scripts. Feel free to read more about Running Commands Using Systems Manager Run Command.

The following code shows the format of a job submitted to SQS in JSON.

  {
    "job_id": "jobId_0",
    "retry": "3",
    "job_success_string": " ",
    "ssm_document": "AWS-RunPowerShellScript",
    "commands":
        [
            "cd C:\\ProgramData\\Amazon\\SSM; mkdir Result",
            "Copy-S3object -Bucket my-bucket -KeyPrefix jobs/date/jobId_0 -LocalFolder .\\",
            "C:\\ProgramData\\Amazon\\SSM\\jobId_0.bat",
            "Write-S3object -Bucket my-bucket -KeyPrefix jobs/date/jobId_0 -Folder .\\Result\\"
        ]
  }
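
As a rough illustration, a job in this format could also be placed on the queue programmatically; here is a minimal boto3 sketch (the queue name and bucket are placeholders, not resources the templates create for you):

import json
import boto3

sqs = boto3.resource("sqs")
# Placeholder: use the job queue created by your scheduler stack
queue = sqs.get_queue_by_name(QueueName="my-sqs-job-queue-name")

job = {
    "job_id": "jobId_0",
    "retry": "3",
    "job_success_string": " ",
    "ssm_document": "AWS-RunPowerShellScript",
    "commands": [
        "cd C:\\ProgramData\\Amazon\\SSM; mkdir Result",
        "Copy-S3object -Bucket my-bucket -KeyPrefix jobs/date/jobId_0 -LocalFolder .\\",
        "C:\\ProgramData\\Amazon\\SSM\\jobId_0.bat",
        "Write-S3object -Bucket my-bucket -KeyPrefix jobs/date/jobId_0 -Folder .\\Result\\",
    ],
}

queue.send_message(MessageBody=json.dumps(job))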

 

Any EC2 instance associated with a serverless scheduler receives jobs from its designated SQS queue until the queue is empty. Then, the EC2 resource automatically terminates. If a job fails, it is retried until it reaches the number of retries specified in the job definition. You can include a specific string value so that the scheduler searches the job execution output for it and confirms the successful completion of the job.

 

Tagging EC2 workers to get a serverless scheduler state machine

In Part 1 of the deployment, you must manage your EC2 Instance launch and termination. When launching an EC2 Instance, tag it with a specific tag key that triggers a state machine to manage that instance. The tag value is the name of the SQS queue that you want your state machine to poll jobs from.

In the following example, “my-scheduler-cloudformation-stack-name” is the tag key that the serverless scheduler app looks for on any new EC2 instance that starts. Next, “my-sqs-job-queue-name” is the default job queue created with the scheduler, but you can change this to any queue name you want the instance to retrieve jobs from when it is launched.

{"my-scheduler-cloudformation-stack-name":"my-sqs-job-queue-name"}

 

Monitor jobs in DynamoDB

You can monitor job status in the following DynamoDB table. In the table you can find the job_id, the commands sent to Amazon EC2, the job status, job output logs from Amazon EC2, and the number of retries, among other things.

Alternatively, you can query DynamoDB for a given job_id via the AWS Command Line Interface:

aws dynamodb get-item --table-name job-monitoring \
                      --key '{"job_id": {"S": "/my-jobs/my-job-id.bat"}}'

 

Using the “job_success_string” parameter

For the prior DynamoDB table, we submitted two identical jobs using an example script that you can also use. The command sent to the instance is “echo Hello World”, so the output from each job should be “Hello World.” We also specified three allowed job retries. In the following image, there are two jobs in the SQS queue before they ran. Look closely at the different “job_success_string” values for each, and the identical command sent to both:

Example DynamoDB output showing job information.

From the image, we see that Job2 was successful, and that Job1 retried three times before being permanently labelled as failed. We forced this outcome to demonstrate how the job success string works: we submitted Job1 with “job_success_string” set to “Hello EVERYONE”, which will not appear in the job output “Hello World.” For Job2 we set “job_success_string” to “Hello”, because we knew this string would be in the output log.

Job outputs commonly have text that only appears if job succeeded. You can also add this text yourself in your executable file. With “job_success_string,” you can confirm a job’s successful output, and use it to identify a certain value that you are looking for across jobs.

 

Part 2 – Resource Automation with the serverless scheduler

The additional services we deploy in Part 2 integrate with existing architectures to launch resources for your serverless scheduler. These services allow you to submit jobs simply by uploading input files and executable files to an S3 bucket.

Likewise, these additional resources can use any executable file format you want, including proprietary application-level scripts. The solution automates everything else. This includes creating and submitting jobs to the SQS job queue, spinning up compute resources when new jobs come in, and taking them back down when there are no jobs to run. When jobs are done, result files are copied to S3 for the user to retrieve. Similar to Part 1, you can still view the DynamoDB table for job status.

This architecture makes it easy to scale out to different teams and departments, and you can submit potentially hundreds of thousands of jobs while you remain in control of resources and cost.

 

Deeper Look at the S3 Architecture

The following diagram shows how you can submit jobs, monitor progress, and retrieve results. To submit jobs, upload all the needed input files and an executable script to S3. The suffix of the executable file (uploaded last) triggers an S3 event to start the process, and this suffix is configurable.

The S3 key of the executable file acts as the job ID, and is kept as a reference to that job in DynamoDB. The Lambda function (#2 in the diagram below) uses the S3 key of the executable to create three SSM Run Commands.

  1. Synchronize all files in the same S3 folder to a working directory on the EC2 Instance.
  2. Run the executable file on EC2 Instances within a specified working directory.
  3. Synchronize the EC2 instance's working directory back to the S3 bucket, so that newly generated result files are included.

This Lambda function (#2) then places the job on the SQS queue using the scheduler's JSON-formatted job definition seen above.

IMPORTANT: Each set of job files should be given a unique job folder in S3, or more files than needed might be moved to the EC2 instance.

 

Figure 5: Resource Automation using Serverless Scheduler – A deeper look

 

The EC2 and Step Functions workflow uses the Lambda function (#3 in the prior diagram) and the Auto Scaling group to scale out, based on the number of jobs in the queue, up to a maximum number of workers (plus state machines), as defined in the Auto Scaling group. When the job queue is empty, the number of running instances scales down to 0 as they finish their remaining jobs.

 

Process for Submitting Jobs and Retrieving Results

  1. As shown in step 1, upload input file(s) and an executable file into a unique job folder in S3 (such as /year/month/day/jobid/~job-files). Upload the executable file last, because it automatically starts the job. You can also use a script to upload multiple files at a time, but each job needs a unique directory. There are many ways to make S3 buckets available to users, including AWS Storage Gateway, AWS Transfer for SFTP, AWS DataSync, the AWS Console, or any one of the AWS SDKs leveraging S3 API calls.
  2. You can monitor job status by accessing the DynamoDB table directly via the AWS Management Console, or use the AWS CLI to call DynamoDB via an API call.
  3. As shown in step 5, you can retrieve result files for jobs from the same S3 directory where you left the input files. The DynamoDB table confirms when jobs are done. The SQS output queue can be used by applications that must automatically poll and retrieve results.

You no longer need to create or access compute nodes yourself; compute resources automatically scale up from zero when jobs come in, and then back down to zero when jobs are finished.

 

Deployment

Read the GitHub Repo for deployment instructions. Below are CloudFormation templates to help:

Launch stack links are available for the following AWS Regions:

eu-north-1, ap-south-1, eu-west-3, eu-west-2, eu-west-1, ap-northeast-3, ap-northeast-2, ap-northeast-1, sa-east-1, ca-central-1, ap-southeast-1, ap-southeast-2, eu-central-1, us-east-1, us-east-2, us-west-1, and us-west-2.

Additional Points on Usage Patterns

 

  • While the two solutions in this blog are aimed at HPC applications, they can be used to run any batch jobs. Many customers that run large data processing batch jobs in their data lakes could use the serverless scheduler.

  • You can build pipelines of different applications when the output of one job triggers another to do something else – an example being pre-processing, meshing, simulation, post-processing. You simply deploy the Resource Automation template several times, and tailor it so that the output bucket for one step is the input bucket for the next step.

  • You might look to use the “job_success_string” parameter for iteration/verification in cases where a shotgun approach is needed to run thousands of jobs, and only one has a chance of producing the right result. In this case, the “job_success_string” would identify the successful job from potentially hundreds of thousands pushed to the SQS job queue.

Scale-out across teams and departments

Because all services used are serverless, you can deploy as many run environments as needed without increasing overall costs. Serverless workloads only accumulate cost when the services are used. So, you could deploy ten job environments and run one job in each, and your costs would be the same as if you had one job environment running ten jobs.

 

All you need is an S3 bucket to upload jobs to and an associated AMI that has the right applications and license configuration. Because a job configuration is passed to the scheduler at each job start, you can add new teams by creating an S3 bucket and pointing S3 events to a default Lambda function that pulls configurations for each job start.

 

Set up a CI/CD pipeline to start continuous improvement of the scheduler

If you are advanced, we encourage you to clone the git repo and customize this solution. The serverless scheduler is less complex than other schedulers, because you only think about one worker and the process of one job’s run.

Ways you could tailor this solution:

  • Add intelligent job scheduling using Amazon SageMaker – It is hard to find data as ready for ML as log data, because every job you run has different run times and resource consumption. So, you could tailor this solution to use ML to predict the best instance to use when workloads are submitted.
  • Add Custom Licensing Checkout Logic – Simply add one Lambda function to your Step Functions workflow to make an API call to a license server before continuing with one or more jobs. You can start a new worker when you have a license checked out; if a license is not available, the instance can terminate so that you don’t incur costs while waiting for licenses.
  • Add Custom Metrics to DynamoDB – You can easily add metrics to DynamoDB because the solution already has baseline logging and monitoring capabilities.
  • Run on other AWS Services – There is a Lambda function in the Step Functions workflow called “Start_Job”. You can tailor this Lambda function to run your jobs on Amazon SageMaker, Amazon EMR, Amazon EKS, or Amazon ECS instead of EC2.

 

Conclusion

 

Although HPC workloads and EDA flows may still be dependent on current scheduling technologies, we illustrated the possibilities of decoupling your workloads from your existing shared scheduling environments. This post went deep into decoupled serverless scheduling, and we understand that it is difficult to unwind decades of dependencies. However, leveraging numerous AWS Services encourages you to think completely differently about running workloads.

But more importantly, it encourages you to Think Big. With this solution you can get up and running quickly, fail fast, and iterate. You can do this while scaling to your required number of resources, when you want them, and only pay for what you use.

Serverless computing catalyzes change across all industries, but that change is not obvious in the HPC and EDA industries. This solution is an opportunity for customers to take advantage of the nearly limitless capacity that AWS offers.

Please reach out with questions about HPC and EDA on AWS. You now have the architecture and the instructions to build your Serverless Decoupled Scheduling environment.  Go build!


About the Authors and Contributors

Authors

Ludvig Nordstrom is a Senior Solutions Architect at AWS.

Mark Duffield is a Tech Lead in Semiconductors at AWS.

Contributors

Steve Engledow is a Senior Solutions Builder at AWS.

Arun Thomas is a Senior Solutions Builder at AWS.

Code a Frogger-style road-crossing game | Wireframe #27

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-a-frogger-style-road-crossing-game-wireframe-27/

Guide a frog across busy roads and rivers. Mark Vanstone shows you how to code a simple remake of Konami’s arcade game, Frogger.

Konami’s original Frogger: so iconic, it even featured in a 1998 episode of Seinfeld.

Frogger

Why did the frog cross the road? Because Frogger would be a pretty boring game if it didn’t. Released in 1981 by Konami, the game appeared in assorted bars, sports halls, and arcades across the world, and became an instant hit. The concept was simple: players used the joystick to move a succession of frogs from the bottom of the screen to the top, avoiding a variety of hazards – cars, lorries, and later, the occasional crocodile. Each frog had to be safely manoeuvred to one of five alcoves within a time limit, while extra points were awarded for eating flies along the way.

Before Frogger, Konami mainly focused on churning out clones of other hit arcade games like Space Invaders and Breakout; Frogger was one of its earliest original ideas, and the simplicity of its concept saw it ported to just about every home system available at the time. (Ironically, Konami’s game would fall victim to repeated cloning by other developers.) Decades later, developers still take inspiration from it; Hipster Whale’s Crossy Road turned Frogger into an endless running game; earlier this year, Konami returned to the creative well with Frogger in Toy Town, released on Apple Arcade.

Code your own Konami Frogger

We can recreate much of Frogger’s gameplay in just a few lines of Pygame Zero code. The key elements are the frog’s movement, which uses the arrow keys, vehicles that move across the screen, and floating objects – logs and turtles – moving in opposite directions. Our background graphic will provide the road, river, and grass for our frog to move over. The frog’s movement will be triggered from an on_key_down() function, and as the frog moves, we switch to a second frame with legs outstretched, reverting back to a sitting position after a short delay. We can use the inbuilt Actor properties to change the image and set the angle of rotation.
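
Here’s a rough sketch of that movement logic in Pygame Zero (the image names, grid size, and timing are illustrative rather than Mark’s actual values):

GRID = 40
frog = Actor('frog_sit', center=(400, 560))

def on_key_down(key):
    if key == keys.UP:
        frog.angle = 0
        frog.y -= GRID
    elif key == keys.DOWN:
        frog.angle = 180
        frog.y += GRID
    elif key == keys.LEFT:
        frog.angle = 90
        frog.x -= GRID
    elif key == keys.RIGHT:
        frog.angle = -90
        frog.x += GRID
    # switch to the legs-outstretched frame, then sit back down after a short delay
    frog.image = 'frog_jump'
    clock.schedule_unique(sit, 0.15)

def sit():
    frog.image = 'frog_sit'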

In our Frogger homage, we move the frog with the arrow keys to avoid the traffic, and jump onto the floating logs and turtles.

For all the other moving elements, we can also use Pygame Zero Actors; we just need to make an array for our cars with different graphics for the various rows, and an array for our floating objects in the same way.
In our update() function, we need to move each Actor according to which row it’s in, and when an Actor disappears off the screen, set the x coordinate so that it reappears on the opposite side.

Handling the logic of the frog moving across the road is quite easy; we just check for collision with each of the cars, and if the frog hits a car, then we have a squashed frog. The river crossing is a little more complicated. Each time the frog moves on the river, we need to make sure that it’s on a floating Actor. We therefore check to make sure that the frog is in collision with one of the floating elements, otherwise it’s game over.
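
A sketch of that update logic, covering both the wrap-around movement and the collision checks, might look like the following (cars and floats are assumed to be lists of Actors each given a speed attribute, and on_river() and game_over() are placeholder helpers – illustrative names rather than the article’s code):

def update():
    for car in cars:
        car.x += car.speed
        # wrap around when a vehicle leaves the screen
        if car.speed > 0 and car.left > WIDTH:
            car.right = 0
        elif car.speed < 0 and car.right < 0:
            car.left = WIDTH
        if frog.colliderect(car):
            game_over('squashed')           # hit by a vehicle

    riding = None
    for item in floats:                     # logs and turtles
        item.x += item.speed
        if item.speed > 0 and item.left > WIDTH:
            item.right = 0
        elif item.speed < 0 and item.right < 0:
            item.left = WIDTH
        if frog.colliderect(item):
            riding = item

    if on_river(frog.y):
        if riding is None:
            game_over('drowned')            # in the water with nothing to stand on
        else:
            frog.x += riding.speed          # carried along by the log or turtle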

There are lots of other elements you could add to the example shown here: the original arcade game provided several frogs to guide to their alcoves on the other side of the river, while crocodiles also popped up from time to time to add a bit more danger. Pygame Zero has all the tools you need to make a fully functional version of Konami’s hit.

Here’s Mark’s code, which gets a Frogger homage running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 27

You can read more features like this one in Wireframe issue 27, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 27 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code a Frogger-style road-crossing game | Wireframe #27 appeared first on Raspberry Pi.

Code a Phoenix-style mothership battle | Wireframe #26

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-a-phoenix-style-mothership-battle-wireframe-26/

It was one of gaming’s first boss battles. Mark Vanstone shows you how to recreate the mothership from the 1980 arcade game, Phoenix.

Phoenix’s fifth stage offered a unique challenge in 1980: one of gaming’s first-ever boss battles.

First released in 1980, Phoenix was something of an arcade pioneer. The game was the kind of post-Space Invaders fixed-screen shooter that was ubiquitous at the time: players moved their ship from side to side, shooting at a variety of alien birds of different sizes and attack patterns. The enemies moved swiftly, and the player’s only defence was a temporary shield which could be activated when the birds swooped and strafed the lone defender. But besides all that, Phoenix had a few new ideas of its own: not only did it offer five distinct stages, but it also featured one of the earliest examples of a boss battle – its heavily armoured alien mothership, which required accurate shots to its shields before its weak spot could be exposed.

To recreate Phoenix’s boss, all we need is Pygame Zero. We can get a portrait style window with the WIDTH and HEIGHT variables and throw in some parallax stars (an improvement on the original’s static backdrop) with some blitting in the draw() function. The parallax effect is created by having a static background of stars with a second (repeated) layer of stars moving down the screen.
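
A minimal sketch of that parallax backdrop might look like this (assuming two star images, stars_back and stars_front, each the size of the window; the names and sizes are illustrative):

WIDTH = 400
HEIGHT = 600

scroll = 0

def draw():
    screen.blit('stars_back', (0, 0))                 # static layer of stars
    screen.blit('stars_front', (0, scroll - HEIGHT))  # moving layer, drawn twice...
    screen.blit('stars_front', (0, scroll))           # ...so it tiles as it scrolls down

def update():
    global scroll
    scroll = (scroll + 1) % HEIGHT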

The mothership itself is made up of several Actor objects which move together down the screen towards the player’s spacecraft, which can be moved right and left using the mouse. There’s the main body of the mothership, in the centre is the alien that we want to shoot, and then we have two sets of moving shields.

Like the original Phoenix, our mothership boss battle has multiple shields that need to be taken out to expose the alien at the core.

In this example, rather than have all the graphics dimensions in multiples of eight (as we always did in the old days), we will make all our shield blocks 20 by 20 pixels, because computers simply don’t need to work in multiples of eight any more. The first set of shields is the purple rotating bar around the middle of the ship. This is made up of 14 Actor blocks which shift one place to the right each time they move. Every other block has a couple of portal windows which makes the rotation obvious, and when a block moves off the right-hand side, it is placed on the far left of the bar.
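
A sketch of that wrap-around shift might look like this. The block size and count follow the description above; the positions and image names are assumptions, and the blocks are ordinary Pygame Zero Actors.

BAR_LEFT = 160
BAR_Y = 300
BLOCK = 20
NUM_BLOCKS = 14

# Every other block uses a 'portal' image so the rotation is visible.
bar = []
for i in range(NUM_BLOCKS):
    block = Actor('shield_portal' if i % 2 else 'shield_plain')
    block.topleft = (BAR_LEFT + i * BLOCK, BAR_Y)
    bar.append(block)

def rotate_bar():
    for block in bar:
        block.x += BLOCK
        # A block that falls off the right-hand end wraps back to the far left.
        if block.left >= BAR_LEFT + NUM_BLOCKS * BLOCK:
            block.x -= NUM_BLOCKS * BLOCK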

The second set of shields is arranged in three yellow rows (you may want to add more): the first with 14 blocks, the second with ten blocks, and the last with four. These shield blocks are fixed in place, but they share a behaviour with the purple bar shields: when they're hit by a bullet, they change to a damaged version. There are four levels of damage before a block is destroyed and bullets can pass through. When enough shields have been destroyed for a bullet to reach the alien, the mothership is destroyed (in this version, the alien flashes).
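
The damage levels can be handled with a simple counter on each block. This is only a sketch; the image names and the four-hit limit follow the description above, everything else is an assumption.

MAX_DAMAGE = 4

def hit_shield(block, shields):
    # Each hit swaps in a more damaged image; after four hits the block goes.
    block.damage = getattr(block, 'damage', 0) + 1
    if block.damage >= MAX_DAMAGE:
        shields.remove(block)            # bullets can now pass through this gap
    else:
        block.image = 'shield_damage_' + str(block.damage)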

Bullets can be fired by clicking the mouse button. Again, the original game had alien birds flying around the mothership and dive-bombing the player, making it harder to get a good shot in, but this is something you could try adding to the code yourself.

To really bring home that eighties Phoenix arcade experience, you could also add in some atmospheric shooting effects and, to round the whole thing off, have an 8-bit rendition of Beethoven’s Für Elise playing in the background.

Here’s Mark’s code, which gets a simple mothership battle running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 26

You can read more features like this one in Wireframe issue 26, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 26 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code a Phoenix-style mothership battle | Wireframe #26 appeared first on Raspberry Pi.

Setting up a CI/CD pipeline by integrating Jenkins with AWS CodeBuild and AWS CodeDeploy

Post Syndicated from Noha Ghazal original https://aws.amazon.com/blogs/devops/setting-up-a-ci-cd-pipeline-by-integrating-jenkins-with-aws-codebuild-and-aws-codedeploy/

In this post, I explain how to use the Jenkins open-source automation server to deploy AWS CodeBuild artifacts with AWS CodeDeploy, creating a functioning CI/CD pipeline. When properly implemented, the CI/CD pipeline is triggered by code changes pushed to your GitHub repo, which are automatically fed into CodeBuild, and the output is then deployed with CodeDeploy.

Solution overview

The functioning pipeline creates a fully managed build service that compiles your source code. It then produces code artifacts that can be used by CodeDeploy to deploy to your production environment automatically.

The deployment workflow starts by placing the application code on the GitHub repository. To automate this scenario, I added source code management to the Jenkins project under the Source Code section. I chose the GitHub option, which clones a copy of the GitHub repo content into the Jenkins local workspace directory.

In the second step of my automation procedure, I enabled a trigger for the Jenkins server using a “Poll SCM” option. This option makes Jenkins check the configured repository for any new commits/code changes at a specified frequency. In this testing scenario, I configured the trigger to run every two minutes. The automated Jenkins deployment process works as follows:

  1. Jenkins checks for any new changes on GitHub every two minutes.
  2. Change determination:
    1. If Jenkins finds no changes, Jenkins exits the procedure.
    2. If it does find changes, Jenkins clones all the files from the GitHub repository to the Jenkins server workspace directory.
  3. The AWS CodeBuild plugin zips the files in the workspace and sends them to a predefined Amazon S3 bucket location, then initiates the CodeBuild project, which obtains the code from the S3 bucket. The project then creates the output artifact zip file and stores it on the S3 bucket.
  4. The File Operation plugin deletes all the files cloned from GitHub. This keeps the Jenkins workspace directory clean before the build output is downloaded.
  5. The HTTP Request plugin downloads the CodeBuild output artifacts from the S3 bucket.
    I edited the S3 bucket policy to allow access from the Jenkins server IP address. See the following example policy:

    {
      "Version": "2012-10-17",
      "Id": "S3PolicyId1",
      "Statement": [
        {
          "Sid": "IPAllow",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": "arn:aws:s3:::examplebucket/*",
          "Condition": {
            "IpAddress": {"aws:SourceIp": "x.x.x.x/x"}
          }
        }
      ]
    }

    Replace x.x.x.x/x with the public IP address (or CIDR range) of the Jenkins server.

    This policy enables the HTTP request plugin to access the S3 bucket. This plugin doesn’t use the IAM instance profile or the AWS access keys (access key ID and secret access key).

  6. The output artifact is a compressed ZIP file. The CodeDeploy plugin needs the individual files to be available in the workspace so that it can zip them itself and send them to the S3 bucket for the CodeDeploy deployment. For that, I used the File Operation plugin to perform the following:
    1. Unzip the CodeBuild zipped artifact output in the Jenkins root workspace directory. At this point, the workspace directory should include the original zip file downloaded from the S3 bucket from Step 5 and the files extracted from this archive.
    2. Delete the original .zip file, and leave only the source bundle contents for the deployment.
  7. The CodeDeploy plugin selects and zips all workspace directory files. This plugin uses the CodeDeploy application name, deployment group name, and deployment configuration that you configured to initiate a new CodeDeploy deployment. The CodeDeploy plugin then uploads the newly zipped file to the S3 bucket location provided, which CodeDeploy uses as the source for its new deployment operation.
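
For reference, the deployment that the plugin initiates in this final step corresponds to a single CodeDeploy API call. The boto3 sketch below is only an illustration of that call; the application, deployment group, bucket, and key names are placeholders, not values from this walkthrough.

# Rough boto3 equivalent of the deployment the CodeDeploy plugin creates
# (all names below are placeholders).
import boto3

codedeploy = boto3.client('codedeploy', region_name='eu-central-1')

response = codedeploy.create_deployment(
    applicationName='MyCodeDeployApplication',
    deploymentGroupName='MyDeploymentGroup',
    deploymentConfigName='CodeDeployDefault.OneAtATime',
    revision={
        'revisionType': 'S3',
        's3Location': {
            'bucket': 'mybucketname',
            'key': 'jenkins-artifact.zip',
            'bundleType': 'zip',
        },
    },
)
print(response['deploymentId'])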

Walkthrough

In this post, I walk you through the following steps:

  • Creating resources to build the infrastructure, including the Jenkins server, CodeBuild project, and CodeDeploy application.
  • Accessing and unlocking the Jenkins server.
  • Creating a project and configuring the CodeDeploy Jenkins plugin.
  • Testing the whole CI/CD pipeline.

Create the resources

In this section, I show you how to launch an AWS CloudFormation template that creates the following resources:

  • Amazon S3 bucket—Stores the GitHub repository files and the CodeBuild artifact application file that CodeDeploy uses.
  • IAM S3 bucket policy—Allows the Jenkins server access to the S3 bucket.
  • JenkinsRole—An IAM role and instance profile for the Amazon EC2 instance used as the Jenkins server. This role allows Jenkins on the EC2 instance to write files to the S3 bucket and to create CodeDeploy deployments.
  • CodeDeploy application and CodeDeploy deployment group.
  • CodeDeploy service role—An IAM role to enable CodeDeploy to read the tags applied to the instances or the EC2 Auto Scaling group names associated with the instances.
  • CodeDeployRole—An IAM role and instance profile for the EC2 instances of CodeDeploy. This role has permissions to write files to the S3 bucket created by this template and to create deployments in CodeDeploy.
  • CodeBuildRole—An IAM role to be used by CodeBuild to access the S3 bucket and create the build projects.
  • Jenkins server—An EC2 instance running Jenkins.
  • CodeBuild project—This is configured with the S3 bucket and S3 artifact.
  • Auto Scaling group—Contains EC2 instances running Apache and the CodeDeploy agent fronted by an Elastic Load Balancer.
  • Auto Scaling launch configurations—For use by the Auto Scaling group.
  • Security groups—For the Jenkins server, the load balancer, and the CodeDeploy EC2 instances.

 

  1. To create the CloudFormation stack (for example, in the AWS Frankfurt Region), click the launch link:
  2. Choose Next and provide the following values on the Specify Details page:
    • For Stack name, name your stack as you prefer.
    • For CodedeployInstanceType, keep the default of t2.medium.
      To check the supported instance types by AWS Region, see Supported Regions.
    • For InstanceCount, keep the default of 3, to launch three EC2 instances for CodeDeploy.
    • For JenkinsInstanceType, keep the default of t2.medium.
    • For KeyName, choose an existing EC2 key pair in your AWS account. Use this to connect by using SSH to the Jenkins server and the CodeDeploy EC2 instances. Make sure that you have access to the private key of this key pair.
    • For PublicSubnet1, choose a public subnet from which the load balancer, Jenkins server, and CodeDeploy web servers launch.
    • For PublicSubnet2, choose a public subnet from which the load balancers and CodeDeploy web servers launch.
    • For VpcId, choose the VPC for the public subnets you used in PublicSubnet1 and PublicSubnet2.
    • For YourIPRange, enter the CIDR block of the network from which you connect to the Jenkins server using HTTP and SSH. If your local machine has a static public IP address, go to https://www.whatismyip.com/ to find your IP address, and then enter your IP address followed by /32. If you don’t have a static IP address (or aren’t sure if you have one), enter 0.0.0.0/0. Then, any address can reach your Jenkins server.
  3. Choose Next.
  4. On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box.
  5. Choose Create and wait for the CloudFormation stack status to change to CREATE_COMPLETE. This takes approximately 6–10 minutes.
  6. Check the resulting values on the Outputs tab. You need them later.
  7. Browse to the ELBDNSName value from the Outputs tab, verifying that you can see the Sample page. You should see a congratulatory message.
  8. Your Jenkins server should be ready to deploy.

Access and unlock your Jenkins server

In this section, I discuss how to access, unlock, and customize your Jenkins server.

  1. Copy the JenkinsServerDNSName value from the Outputs tab of the CloudFormation stack, and paste it into your browser.
  2. To unlock the Jenkins server, SSH to the server using the IP address and key pair, following the instructions from Unlocking Jenkins.
  3. Use the root user to cat the log file (/var/log/jenkins/jenkins.log) and copy the automatically generated alphanumeric password (between the two sets of asterisks). Then, use the password to unlock your Jenkins server.
  4. On the Customize Jenkins page, choose Install suggested plugins.

  5. Wait until Jenkins installs all the suggested plugins. When the process completes, you should see the check marks alongside all of the installed plugins.
  6. On the Create First Admin User page, enter a user name, password, full name, and email address of the Jenkins user.
  7. Choose Save and continue, Save and finish, and Start using Jenkins.
    After you install all the needed Jenkins plugins along with their required dependencies, the Jenkins server restarts. This step should take about two minutes. After Jenkins restarts, refresh the page. Your Jenkins server should be ready to use.

Create a project and configure the CodeDeploy Jenkins plugin

Now, to create our project in Jenkins, we need to configure the required Jenkins plugins.

  1. Sign in to Jenkins with the user name and password that you created earlier, then choose Manage Jenkins, followed by Manage Plugins.
  2. From the Available tab, search for and select the following plugins, then choose Install without restart:
    AWS CodeDeploy
    AWS CodeBuild
    Http Request
    File Operations
  3. Select the Restart Jenkins when installation is complete and no jobs are running option.
    Jenkins takes a couple of minutes to download the plugins along with their dependencies, then restarts.
  4. Log in, then choose New Item and select Freestyle project.
  5. Enter a name for the project (for example, CodeDeployApp), and choose OK.
  6. On the project configuration page, under Source Code Management, choose Git. For Repository URL, enter the URL of your GitHub repository.
  7. For Build Triggers, select the Poll SCM check box. In the Schedule field, for testing, enter H/2 * * * *. This entry tells Jenkins to poll GitHub for updates every two minutes.
  8. Under Build Environment, select the Delete workspace before build starts check box. Each Jenkins project has a dedicated workspace directory. This option allows you to wipe out your workspace directory with each new Jenkins build, to keep it clean.
  9. Under Build Actions, choose Add build step and select AWS CodeBuild. Under AWS Configuration, choose Manually specify access and secret keys and provide the keys.
  10. From the CloudFormation stack Outputs tab, copy the AWS CodeBuild project name (myProjectName) and paste it in the Project Name field. Also, set the Region that you are using and choose Use Jenkins source.
    It is a best practice to store AWS credentials for CodeBuild in the native Jenkins credential store. For more information, see the Jenkins AWS CodeBuild Plugin wiki.
  11. To make sure that all files cloned from the GitHub repository are deleted, choose Add build step and select the File Operation plugin, then click Add and select File Delete. Under the File Delete operation, in the Include File Pattern field, type an asterisk.
  12. Under Build, configure the following:
    1. Choose Add a Build step.
    2. Choose HTTP Request.
    3. Copy the S3 bucket name from the CloudFormation stack Outputs tab and paste it after (http://s3-eu-central-1.amazonaws.com/) along with the name of the zip file codebuild-artifact.zip as the value for HTTP Plugin URL.
      Example: (http://s3-eu-central-1.amazonaws.com/mybucketname/codebuild-artifact.zip)
    4. For Ignore SSL errors?, choose Yes.
  13. Under HTTP Request, choose Advanced and leave the default values for Authorization, Headers, and Body. Under Response, for Output response to file, enter the codebuild-artifact.zip file name.
  14. Add the two build steps for the File Operations plugin, in the following order:
    1. Unzip action: This build step unzips the codebuild-artifact.zip file and places the contents in the root workspace directory.
    2. File Delete action: This build step deletes the codebuild-artifact.zip file, leaving only the source bundle contents for deployment.
  15. Under Post-build Actions, choose Add post-build action and select the Deploy an application to AWS CodeDeploy check box.
  16. Enter the following values from the Outputs tab of your CloudFormation stack and leave the other settings at their default (blank):
    • For AWS CodeDeploy Application Name, enter the value of CodeDeployApplicationName.
    • For AWS CodeDeploy Deployment Group, enter the value of CodeDeployDeploymentGroup.
    • For AWS CodeDeploy Deployment Config, enter CodeDeployDefault.OneAtATime.
    • For AWS Region, choose the Region where you created the CodeDeploy environment.
    • For S3 Bucket, enter the value of S3BucketName.
      The CodeDeploy plugin uses the Include Files option to filter the files based on specific file names existing in your current Jenkins deployment workspace directory. The plugin zips specified files into one file. It then sends them to the location specified in the S3 Bucket parameter for CodeDeploy to download and use in the new deployment.
      In the optional Include Files field, I used ** so that all files in the workspace directory get zipped.
  17. Choose Deploy Revision. This option registers the newly created revision to your CodeDeploy application and gets it ready for deployment.
  18. Select the Wait for deployment to finish? check box. This option allows you to view the CodeDeploy deployments logs and events on your Jenkins server console output.
    Now that you have created a project, you are ready to test deployment.

Testing the whole CI/CD pipeline

To test the whole solution, put an application on your GitHub repository. You can download the sample from here.

The sample application tree contains the application source files, including text and binary files, executables, and packages:

In this example, the application files are the templates directory, test_app.py file, and web.py file.

The appspec.yml file is the main application specification file telling CodeDeploy how to deploy your application. CodeDeploy uses the AppSpec file to manage each deployment as a series of lifecycle event “hooks”, as defined in the file. For information about how to create a well-formed AppSpec file, see AWS CodeDeploy AppSpec File Reference.

The buildspec.yml file is a collection of build commands and related settings, in YAML format, that CodeBuild uses to run a build. You can include a build spec as part of the source code, or you can define a build spec when you create a build project. For more information, see How AWS CodeBuild Works.

The scripts folder contains the scripts that you would like to run during the CodeDeploy LifecycleHooks execution with respect to your application requirements. For more information, see Plan a Revision for AWS CodeDeploy.

To test this solution, perform the following steps:

  1. Unzip the application files and push them to your GitHub repository by running the following git commands from the path where you placed your sample application:
    $ git add -A
    
    $ git commit -m 'Your first application'
    
    $ git push
  2. On the Jenkins server dashboard, wait for two minutes until the previously set project trigger starts working. After the trigger starts working, you should see a new build taking place.
  3. On the Jenkins server Console Output page, check the build events and review the steps performed by each Jenkins plugin. You can also review the CodeDeploy deployment in detail.

On completion, Jenkins should report that you have successfully deployed a web application. You can also use your ELBDNSName value to confirm that the deployed application is running successfully.

Conclusion

In this post, I outlined how you can use a Jenkins open-source automation server to deploy CodeBuild artifacts with CodeDeploy. I showed you how to construct a functioning CI/CD pipeline with these tools. I walked you through how to build the deployment infrastructure and automatically deploy application version changes from GitHub to your production environment.

Hopefully, you have found this post informative and the proposed solution useful. As always, AWS welcomes all feedback or comment.

About the Author


 

Noha Ghazal is a Cloud Support Engineer at Amazon Web Services. She is a subject matter expert for AWS CodeDeploy. In her role, she enjoys supporting customers with their CodeDeploy and other DevOps configurations. Outside of work, she enjoys drawing portraits, fishing, and playing video games.

 

 

Make a Columns-style tile-matching game | Wireframe #25

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/make-a-columns-style-tile-matching-game-wireframe-25/

Raspberry Pi’s own Rik Cross shows you how to code your own Columns-style tile-matching puzzle game in Python and Pygame Zero.

Created by Hewlett-Packard engineer Jay Geertsen, Columns was Sega’s sparkly rival to Nintendo’s all-conquering Tetris.

Columns and tile-matching

Tile-matching games began with Tetris in 1984 and the less famous Chain Shot! the following year. The genre gradually evolved through games like Dr. Mario, Columns, Puyo Puyo, and Candy Crush Saga. Although their mechanics differ, the goals are the same: to organise a board of different-coloured tiles by moving them around until they match.

Here, I’ll show how you can create a simple tile-matching game using Python and Pygame. In it, any tile can be swapped with the tile to its right, with the aim being to make matches of three or more tiles of the same colour. Making a match causes the tiles to disappear from the board, with tiles dropping down to fill in the gaps.

At the start of a new game, a board of randomly generated tiles is created. This is made as an (initially empty) two-dimensional array, whose size is determined by the values of rows and columns. A specific tile on the board is referenced by its row and column number.

We want to start with a truly random board, but we also want to avoid having any matching tiles. Random tiles are therefore added to each board position, but replaced if a tile is the same as the one above or to its left (if such a tile exists).
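
A minimal sketch of that board-building step is shown below. The board size mirrors the description; the colour names and the rest of the details are assumptions, not Rik's actual code.

import random

ROWS, COLUMNS = 12, 8
COLOURS = ['red', 'green', 'blue', 'yellow']

def new_board():
    board = [[None] * COLUMNS for _ in range(ROWS)]
    for r in range(ROWS):
        for c in range(COLUMNS):
            tile = random.choice(COLOURS)
            # Re-roll if the tile would match the one above or to its left.
            while ((r > 0 and board[r - 1][c] == tile) or
                   (c > 0 and board[r][c - 1] == tile)):
                tile = random.choice(COLOURS)
            board[r][c] = tile
    return board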

Our board consists of 12 rows and 8 columns of tiles. Pressing SPACE will swap the two selected tiles (outlined in white), and in this case, create a vertical match of red tiles.

In our game, two tiles are ‘selected’ at any one time, with the player pressing the arrow keys to change those tiles. A selected variable keeps track of the row and column of the left-most selected tile, with the other tile being one column to the right of the left-most tile. Pressing SPACE swaps the two selected tiles, checks for matches, clears any matched tiles, and fills any gaps with new tiles.

A basic ‘match-three’ algorithm would simply check whether any tiles on the board have a matching colour tile on either side, horizontally or vertically. I’ve opted for something a little more convoluted, though, as it allows us to check for matches of any length, as well as track multiple, separate matches. A currentmatch list keeps track of the (x,y) positions of a set of matching tiles. Whenever this list is empty, the next tile to check is added to the list, and this process is repeated until the next tile is a different colour.

If the currentmatch list contains three or more tiles at this point, then the list is added to the overall matches list (a list of lists of matches!) and the currentmatch list is reset. To clear matched tiles, the matched tile positions are set to None, which indicates the absence of a tile at that position. To fill the board, tiles in each column are moved down by one row whenever an empty board position is found, with a new tile being added to the top row of the board.
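
To make the run-tracking and refilling concrete, here's one compact way it could be written. It scans rows and columns for runs of three or more, clears them, and refills each column instantly rather than dropping tiles one row at a time as described above; the names and colours are assumptions rather than Rik's actual code.

import random

COLOURS = ['red', 'green', 'blue', 'yellow']

def find_matches(board):
    matches = []
    rows, cols = len(board), len(board[0])
    # Build every row and every column as a list of (row, col) positions,
    # then walk each line keeping a 'current match' of same-coloured tiles.
    for lines in ([[(r, c) for c in range(cols)] for r in range(rows)],
                  [[(r, c) for r in range(rows)] for c in range(cols)]):
        for line in lines:
            current = [line[0]]
            for pos in line[1:]:
                prev = current[-1]
                if board[pos[0]][pos[1]] == board[prev[0]][prev[1]]:
                    current.append(pos)
                else:
                    if len(current) >= 3:
                        matches.append(current)
                    current = [pos]
            if len(current) >= 3:
                matches.append(current)
    return matches

def clear_and_fill(board, matches):
    for match in matches:
        for r, c in match:
            board[r][c] = None                      # clear matched tiles
    for c in range(len(board[0])):
        # Surviving tiles fall to the bottom; new random tiles top up the column.
        remaining = [board[r][c] for r in range(len(board)) if board[r][c] is not None]
        new_tiles = [random.choice(COLOURS) for _ in range(len(board) - len(remaining))]
        column = new_tiles + remaining
        for r in range(len(board)):
            board[r][c] = column[r]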

The code provided here is just a starting point, and there are lots of ways to develop the game, including adding a scoring system and animation to liven up your tiles.

Here’s Rik’s code, which gets a simple tile-match game running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 25

You can read more features like this one in Wireframe issue 25, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 25 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Make a Columns-style tile-matching game | Wireframe #25 appeared first on Raspberry Pi.

Code your own Donkey Kong barrels | Wireframe issue 24

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-your-own-donkey-kong-barrels-wireframe-issue-24/

Replicate the physics of barrel rolling – straight out of the classic Donkey Kong. Mark Vanstone shows you how.

Released in 1981, Donkey Kong was one of the most important games in Nintendo’s history.

Nintendo’s Donkey Kong

Donkey Kong first appeared in arcades in 1981, and starred not only the titular angry ape, but also a bouncing, climbing character called Jumpman – who later went on to star in Nintendo’s little-known series of Super Mario games. Donkey Kong featured four screens per level, and the goal in each was to avoid obstacles and guide Mario (sorry, Jumpman) to the top of the screen to rescue the hapless Pauline. Partly because the game was so ferociously difficult from the beginning, Donkey Kong’s first screen is arguably the most recognisable today: Kong lobs an endless stream of barrels, which roll down a network of crooked girders and threaten to knock Jumpman flat.

Barrels in Pygame Zero

Donkey Kong may have been a relentlessly tough game, but we can recreate one of its most recognisable elements with relative ease. We can get a bit of code running with Pygame Zero – and a couple of functions borrowed from Pygame – to make barrels react to the platforms they’re on, roll down in the direction of a slope, and fall off the end onto the next platform. It’s a very simple physics simulation using an invisible bitmap to test where the platforms are and which way they’re sloping. We also have some ladders which the barrels randomly either roll past or sometimes use to descend to the next platform below.

Our Donkey Kong tribute up and running in Pygame Zero. The barrels roll down the platforms and sometimes the ladders.

Once we’ve created a barrel as an Actor, the code does three tests for its platform position on each update: one to the bottom-left of the barrel, one bottom-centre, and one bottom-right. It samples three pixels and calculates how much red is in those pixels. That tells us how much platform is under the barrel in each position. If the platform is tilted right, the number will be higher on the left, and the barrel must move to the right. If tilted left, the number will be higher on the right, and the barrel must move left. If there is no red under the centre point, the barrel is in the air and must fall downward.
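
Here's a sketch of that three-point sampling. The guide filename, sample offsets, and movement speeds are assumptions; the guide surface is loaded with Pygame's image module as described, and the barrel is a Pygame Zero Actor.

import pygame

guide = pygame.image.load('barrel_guide.png')   # assumed filename: red marks platforms

def red_at(x, y):
    # Amount of red (0-255) in the guide pixel at this point.
    return guide.get_at((int(x), int(y))).r

def update_barrel(barrel):
    left = red_at(barrel.x - 10, barrel.y + 16)
    centre = red_at(barrel.x, barrel.y + 16)
    right = red_at(barrel.x + 10, barrel.y + 16)
    if centre == 0:
        barrel.y += 2        # nothing under the centre point: fall
    elif left > right:
        barrel.x += 1        # more platform on the left: slope runs down to the right
    elif right > left:
        barrel.x -= 1        # more platform on the right: slope runs down to the left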

There are just three frames of animation for the barrel rolling (you could add more for a smoother look): for rolling right, we increase the frame number stored with the barrel Actor; for rolling to the left, we decrease the frame number; and if the barrel’s going down a ladder, we use the front-facing images for the animation. The movement down a ladder is triggered by another test for the blue component of a pixel below the barrel. The code then chooses randomly whether to send the barrel down the ladder.
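
The ladder test works the same way but reads the blue component of the pixel below the barrel. A hedged sketch, in which the threshold, the offset, and the 50/50 chance are all assumptions:

import random

def maybe_take_ladder(barrel, guide):
    below = guide.get_at((int(barrel.x), int(barrel.y) + 20))
    # A strongly blue pixel below means a ladder; descend it half the time.
    if below.b > 200 and below.r < 100 and random.random() < 0.5:
        barrel.on_ladder = True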

The whole routine will keep producing more barrels and moving them down the platforms until they reach the bottom. Again, this is a very simple physics system, but it demonstrates how those rolling barrels can be recreated in just a few lines of code. All we need now is a jumping player character (which could use the same invisible map to navigate up the screen) and a big ape to sit at the top throwing barrels, then you’ll have the makings of your own fully featured Donkey Kong tribute.

Here’s Mark’s code, which sets some Donkey Kong Barrels rolling about in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 24

You can read more features like this one in Wireframe issue 24, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 24 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code your own Donkey Kong barrels | Wireframe issue 24 appeared first on Raspberry Pi.

Make a keyboard-bashing sprint game | Wireframe issue 23

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/make-a-keyboard-bashing-sprint-game-wireframe-issue-23/

Learn how to code a sprinting minigame straight out of Daley Thompson’s Decathlon with Raspberry Pi’s own Rik Cross.

Spurred on by the success of Konami’s Hyper Sports, Daley Thompson’s Decathlon featured a wealth of controller-wrecking minigames.

Daley Thompson’s Decathlon

Released in 1984, Daley Thompson’s Decathlon was a memorable entry in what’s sometimes called the ‘joystick killer’ genre: players competed in sporting events that largely consisted of frantically waggling the controller or battering the keyboard. I’ll show you how to create a sprinting game mechanic in Python and Pygame.

Python sprinting game

There are variables in the Sprinter() class to keep track of the runner’s speed and distance, as well as global constant ACCELERATION and DECELERATION values to determine the player’s changing rate of speed. These numbers are small, as they represent the number of metres per frame that the player accelerates and decelerates.

The player increases the sprinter’s speed by alternately pressing the left and right arrow keys. This input is handled by the sprinter’s isNextKeyPressed() method, which returns True if the correct key (and only the correct key) is being pressed. A lastKeyPressed variable is used to ensure that keys are pressed alternately. The player also decelerates if no key is being pressed, and this rate of deceleration should be sufficiently smaller than the acceleration to allow the player to pick up enough speed.
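
Here's a minimal sketch of that alternating-key logic, using the class and method names from the article but with assumed constants. It relies on Pygame Zero's built-in keyboard object, so it runs under pgzrun rather than as a plain script.

ACCELERATION = 0.01      # added to the speed for each correct key press (assumed)
DECELERATION = 0.005     # removed from the speed each frame with no key pressed (assumed)

class Sprinter:
    def __init__(self):
        self.speed = 0
        self.distance = 0
        self.lastKeyPressed = None

    def isNextKeyPressed(self):
        # Only accept left after right (and vice versa), and never both at once.
        if keyboard.left and not keyboard.right and self.lastKeyPressed != 'left':
            self.lastKeyPressed = 'left'
            return True
        if keyboard.right and not keyboard.left and self.lastKeyPressed != 'right':
            self.lastKeyPressed = 'right'
            return True
        return False

    def update(self):
        if self.isNextKeyPressed():
            self.speed += ACCELERATION
        elif not (keyboard.left or keyboard.right):
            self.speed = max(0, self.speed - DECELERATION)
        self.distance += self.speed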

Press the left and right arrow keys alternately to increase the sprinter’s speed. Objects move across the screen from right to left to give the illusion of sprinter movement.

For the animation, I used a free sprite called ‘The Boy’ from gameart2d.com, and made use of a single idle image and 15 run cycle images. The sprinter starts in the idle state, but switches to the run cycle whenever its speed is greater than 0. This is achieved by using index() to find the name of the current sprinter image in the runFrames list, and setting the current image to the next image in the list (and wrapping back to the first image once the end of the list is reached). We also need the sprinter to move through images in the run cycle at a speed proportional to the sprinter’s speed. This is achieved by keeping track of the number of frames the current image has been displayed for (in a variable called timeOnCurrentFrame).

To give the illusion of movement, I’ve added objects that move past the player: there’s a finish line and three markers to regularly show the distance travelled. These objects are calculated using the sprinter’s x position on the screen along with the distance travelled. However, this means that each object is at most only 100 pixels away from the player and therefore seems to move slowly. This can be fixed by using a SCALE factor, which is the relationship between metres travelled by the sprinter and pixels on the screen. This means that objects are initially drawn way off to the right of the screen but then travel to the left and move past the sprinter more quickly.
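
The scale factor boils down to one small calculation; here's a sketch with assumed values for the scale and the sprinter's on-screen position.

SCALE = 25          # screen pixels per metre travelled (assumed value)
SPRINTER_X = 100    # the sprinter's fixed x position on screen (assumed)

def object_screen_x(object_distance, sprinter_distance):
    # Objects are drawn relative to how far ahead of the sprinter they are,
    # so the 100 m finish line starts well off-screen and sweeps past quickly.
    return SPRINTER_X + (object_distance - sprinter_distance) * SCALE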

Finally, startTime and finishTime variables are used to calculate the race time. Both values are initially set to the current time at the start of the race, with finishTime being updated as long as the distance travelled is less than 100. Using the time module, the race time can simply be calculated by finishTime - startTime.

Here’s Rik’s code, which gets a sprinting game running in Python (no pun intended). To get it working on your system, you’ll first need to install Pygame Zero. Download the code here.

Get your copy of Wireframe issue 23

You can read more features like this one in Wireframe issue 23, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can download issue 23 for free in PDF format.

Autonauts is coming to colonise your computers with cuteness. We find out more in Wireframe issue 23.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Make a keyboard-bashing sprint game | Wireframe issue 23 appeared first on Raspberry Pi.

Create a Scramble-style scrolling landscape | Wireframe issue 22

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/create-a-scramble-style-scrolling-landscape-wireframe-issue-22/

Weave through a randomly generated landscape in Mark Vanstone’s homage to the classic arcade game Scramble.

Scramble was developed by Konami and released in arcades in 1981. Players avoid terrain and blast enemy craft.

Konami’s Scramble

In the early eighties, arcades and sports halls rang with the sound of a multitude of video games. Because home computers hadn’t yet made it into most households, the only option for the avid video gamer was to go down to their local entertainment establishment and feed the machines with ten pence pieces (which were bigger then). One of these pocket money–hungry machines was Konami’s Scramble — released in 1981, it was one of the earliest side-scrolling shooters with multiple levels.

The Scramble player’s jet aircraft flies across a randomly generated landscape (which sometimes narrows to a cave system), avoiding obstacles and enemy planes, bombing targets on the ground, and trying not to crash. As the game continues, the difficulty increases. The player aircraft can only fly forward, so once a target has been passed, there’s no turning back for a second go.

Code your own scrolling landscape

In this example code, I’ll show you a way to generate a Scramble-style scrolling landscape using Pygame Zero and a couple of additional Pygame functions. On early computers, moving a lot of data around the screen was very slow — until dedicated video hardware like the blitter chip arrived. Scrolling, however, could be achieved either by a quick shuffle of bytes to the left or right in the video memory, or in some cases, by changing the start address of the video memory, which was even quicker.

Avoid the roof and the floor with the arrow keys. Jet graphic courtesy of TheSource4Life at opengameart.org.

For our scrolling, we can use a Pygame surface the same size as the screen. To get the scrolling effect, we just call the scroll() function on the surface to shift everything left by one pixel and then draw a new pixel-wide slice of the terrain. The terrain could just be a single colour, but I’ve included a bit of maths-based RGB tinkering to make it more colourful. We can draw our terrain surface over a background image, as the SRCALPHA flag is set when we create the surface. This is also useful for detecting if the jet has hit the terrain. We can test the pixel from the surface in front of the jet: if it’s not transparent, kaboom!
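
A sketch of that scroll-and-sample idea using plain Pygame calls follows. The surface size matches the article's window; the jet offset and the function names are assumptions.

import pygame

WIDTH, HEIGHT = 800, 600
land = pygame.Surface((WIDTH, HEIGHT), pygame.SRCALPHA)   # per-pixel alpha

def scroll_land():
    land.scroll(-1, 0)     # shift every terrain pixel one place to the left
    # ...then draw a fresh one-pixel-wide slice of terrain at x = WIDTH - 1

def jet_has_crashed(jet):
    # Sample the pixel just in front of the jet: anything non-transparent is terrain.
    return land.get_at((int(jet.x) + 40, int(jet.y))).a != 0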

The jet itself is a Pygame Zero Actor and can be moved up and down with the arrow keys. The left and right arrows increase and decrease the speed. We generate the landscape in the updateLand() and drawLand() functions, where updateLand() first decides whether the landscape is inclining or declining (and the same with the roof), making sure that the roof and floor don’t get too close, and then it scrolls everything left.

Each scroll action moves everything on the terrain surface to the left by one pixel.

The drawLand() function then draws pixels at the right-hand edge of the surface from y coordinates 0 to 600, drawing a thin sliver of roof, open space, and floor. The speed of the jet determines how many times the landscape is updated in each draw cycle, so at faster speeds, many lines of pixels are added to the right-hand side before the display updates.
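
The new right-hand sliver can be drawn pixel by pixel. Here's a rough sketch in which the colour maths and the roof_y/floor_y parameters are assumptions standing in for the article's incline and decline logic.

def draw_sliver(land, roof_y, floor_y):
    x = land.get_width() - 1
    for y in range(land.get_height()):
        if y < roof_y or y > floor_y:
            # Terrain pixel, with a little maths-based colour variation.
            land.set_at((x, y), (80 + y % 64, 40 + y % 32, 20, 255))
        else:
            land.set_at((x, y), (0, 0, 0, 0))     # transparent open space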

The use of randint() can be changed to create a more or less jagged landscape, and the gap between roof and floor could also be adjusted for more difficulty. The original game also had enemy aircraft, which you could make with Actors, and fuel tanks on the ground, which could be created on the right-hand side as the terrain comes into view and then moved as the surface scrolls. Scramble sparked a wave of horizontal shooters, from both Konami and rival companies; this short piece of code could give you the basis for making a decent Scramble clone of your own:

Here’s Mark’s code, which gets a Scramble-style scrolling landscape running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 22

You can read more features like this one in Wireframe issue 22, available now at Tesco, WHSmith, and all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 22 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Create a Scramble-style scrolling landscape | Wireframe issue 22 appeared first on Raspberry Pi.

Learn about AWS Services & Solutions – September AWS Online Tech Talks

Post Syndicated from Jenny Hang original https://aws.amazon.com/blogs/aws/learn-about-aws-services-solutions-september-aws-online-tech-talks/

Learn about AWS Services & Solutions – September AWS Online Tech Talks

AWS Tech Talks

Join us this September to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

 

Compute:

September 23, 2019 | 11:00 AM – 12:00 PM PT – Build Your Hybrid Cloud Architecture with AWS – Learn about the extensive range of services AWS offers to help you build a hybrid cloud architecture best suited for your use case.

September 26, 2019 | 1:00 PM – 2:00 PM PT – Self-Hosted WordPress: It’s Easier Than You Think – Learn how you can easily build a fault-tolerant WordPress site using Amazon Lightsail.

October 3, 2019 | 11:00 AM – 12:00 PM PT – Lower Costs by Right Sizing Your Instance with Amazon EC2 T3 General Purpose Burstable Instances – Get an overview of T3 instances, understand what workloads are ideal for them, and understand how the T3 credit system works so that you can lower your EC2 instance costs today.

 

Containers:

September 26, 2019 | 11:00 AM – 12:00 PM PT – Develop a Web App Using Amazon ECS and AWS Cloud Development Kit (CDK) – Learn how to build your first app using CDK and AWS container services.

 

Data Lakes & Analytics:

September 26, 2019 | 9:00 AM – 10:00 AM PT – Best Practices for Provisioning Amazon MSK Clusters and Using Popular Apache Kafka-Compatible Tooling – Learn best practices on running Apache Kafka production workloads at a lower cost on Amazon MSK.

 

Databases:

September 25, 2019 | 1:00 PM – 2:00 PM PT – What’s New in Amazon DocumentDB (with MongoDB compatibility) – Learn what’s new in Amazon DocumentDB, a fully managed MongoDB compatible database service designed from the ground up to be fast, scalable, and highly available.

October 3, 2019 | 9:00 AM – 10:00 AM PT – Best Practices for Enterprise-Class Security, High-Availability, and Scalability with Amazon ElastiCache – Learn about new enterprise-friendly Amazon ElastiCache enhancements like customer managed key and online scaling up or down to make your critical workloads more secure, scalable and available.

 

DevOps:

October 1, 2019 | 9:00 AM – 10:00 AM PT – CI/CD for Containers: A Way Forward for Your DevOps Pipeline – Learn how to build CI/CD pipelines using AWS services to get the most out of the agility afforded by containers.

 

Enterprise & Hybrid:

September 24, 2019 | 1:00 PM – 2:30 PM PT Virtual Workshop: How to Monitor and Manage Your AWS Costs – Learn how to visualize and manage your AWS cost and usage in this virtual hands-on workshop.

October 2, 2019 | 1:00 PM – 2:00 PM PT – Accelerate Cloud Adoption and Reduce Operational Risk with AWS Managed Services – Learn how AMS accelerates your migration to AWS, reduces your operating costs, improves security and compliance, and enables you to focus on your differentiating business priorities.

 

IoT:

September 25, 2019 | 9:00 AM – 10:00 AM PT – Complex Monitoring for Industrial with AWS IoT Data Services – Learn how to solve your complex event monitoring challenges with AWS IoT Data Services.

 

Machine Learning:

September 23, 2019 | 9:00 AM – 10:00 AM PT – Training Machine Learning Models Faster – Learn how to train machine learning models quickly and with a single click using Amazon SageMaker.

September 30, 2019 | 11:00 AM – 12:00 PM PT – Using Containers for Deep Learning Workflows – Learn how containers can help address challenges in deploying deep learning environments.

October 3, 2019 | 1:00 PM – 2:30 PM PT – Virtual Workshop: Getting Hands-On with Machine Learning and Ready to Race in the AWS DeepRacer League – Join DeClercq Wentzel, Senior Product Manager for AWS DeepRacer, for a presentation on the basics of machine learning and how to build a reinforcement learning model that you can use to join the AWS DeepRacer League.

 

AWS Marketplace:

September 30, 2019 | 9:00 AM – 10:00 AM PT – Advancing Software Procurement in a Containerized World – Learn how to deploy applications faster with third-party container products.

 

Migration:

September 24, 2019 | 11:00 AM – 12:00 PM PT – Application Migrations Using AWS Server Migration Service (SMS) – Learn how to use AWS Server Migration Service (SMS) for automating application migration and scheduling continuous replication, from your on-premises data centers or Microsoft Azure to AWS.

 

Networking & Content Delivery:

September 25, 2019 | 11:00 AM – 12:00 PM PT – Building Highly Available and Performant Applications using AWS Global Accelerator – Learn how to build highly available and performant architectures for your applications with AWS Global Accelerator, now with source IP preservation.

September 30, 2019 | 1:00 PM – 2:00 PM PT – AWS Office Hours: Amazon CloudFront – Just getting started with Amazon CloudFront and Lambda@Edge? Get answers directly from our experts during AWS Office Hours.

 

Robotics:

October 1, 2019 | 11:00 AM – 12:00 PM PT – Robots and STEM: AWS RoboMaker and AWS Educate Unite! – Come join members of the AWS RoboMaker and AWS Educate teams as we provide an overview of our education initiatives and walk you through the newly launched RoboMaker Badge.

 

Security, Identity & Compliance:

October 1, 2019 | 1:00 PM – 2:00 PM PT – Deep Dive on Running Active Directory on AWS – Learn how to deploy Active Directory on AWS and start migrating your Windows workloads.

 

Serverless:

October 2, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive on Amazon EventBridge – Learn how to optimize event-driven applications, and use rules and policies to route, transform, and control access to these events that react to data from SaaS apps.

 

Storage:

September 24, 2019 | 9:00 AM – 10:00 AM PT – Optimize Your Amazon S3 Data Lake with S3 Storage Classes and Management Tools – Learn how to use the Amazon S3 Storage Classes and management tools to better manage your data lake at scale and to optimize storage costs and resources.

October 2, 2019 | 11:00 AM – 12:00 PM PT – The Great Migration to Cloud Storage: Choosing the Right Storage Solution for Your Workload – Learn more about AWS storage services and identify which service is the right fit for your business.

 

 

Recreate Super Sprint’s top-down racing | Wireframe issue 21

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/recreate-super-sprints-top-down-racing-wireframe-issue-21/

Making player and computer-controlled cars race round a track isn’t as hard as it sounds. Mark Vanstone explains all.

The original Super Sprint arcade machine had three steering wheels and three accelerator pedals.

From Gran Trak 10 to Super Sprint

Decades before the advent of more realistic racing games such as Sega Rally or Gran Turismo, Atari produced a string of popular arcade racers, beginning with Gran Trak 10 in 1974 and gradually updated via the Sprint series, which appeared regularly through the seventies and eighties. By 1986, Atari’s Super Sprint allowed three players to compete at once, avoiding obstacles and collecting bonuses as they careened around the tracks.

The original arcade machine was controlled with steering wheels and accelerator pedals, and computer-controlled cars added to the racing challenge. Tracks were of varying complexity, with some featuring flyover sections and shortcuts, while oil slicks and tornadoes posed obstacles to avoid. If a competitor crashed really badly, a new car would be airlifted in by helicopter.

Code your own Super Sprint

So how can we make our own Super Sprint-style racing game with Pygame Zero? To keep this example code short and simple, I’ve created a simple track with a few bends. In the original game, the movement of the computer-controlled cars would have followed a set of coordinates round the track, but as computers have much more memory now, I have used a bitmap guide for the cars to follow. This method produces a much less predictable movement for the cars as they turn right and left based on the shade of the track on the guide.

Four Formula One cars race around the track. Collisions between other cars and the sides of the track are detected.

With Pygame Zero, we can write quite a short piece of code to deal with both the player car and the automated ones, but to read pixels from a position on a bitmap, we need to borrow a couple of objects directly from Pygame: we import the Pygame image and Color objects and then load our guide bitmaps. One is for the player to restrict movement to the track, and the other is for guiding the computer-controlled cars around the track.

Three bitmaps are used for the track. One’s visible, and the other two are guides for the cars.

The cars are Pygame Zero Actors, and are drawn after the main track image in the draw() function. Then all the good stuff happens in the update() function. The player’s car is controlled with the up and down arrows for speed, and the left and right arrows to change the direction of movement. We then check to see if any cars have collided with each other. If a crash has happened, we change the direction of the car and make it reverse a bit. We then test the colour of the pixel where the car is trying to move to. If the colour is black or red (the boundaries), the car turns away from the boundary.

The car steering is based on the shade of a pixel’s colour read from the guide bitmap. If it’s light, the car will turn right, if it’s dark, the car will turn left, and if it’s mid-grey, the car continues straight ahead. We could make the cars stick more closely to the centre by making them react quickly, or make them more random by adjusting the steering angle more slowly. A happy medium would be to get the cars mostly sticking to the track but being random enough to make them tricky to overtake.
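
Here's a minimal sketch of that shade test. The guide filename, thresholds, and turn rate are assumptions; the car is a Pygame Zero Actor, whose angle increases anticlockwise, so turning right means reducing it.

import pygame

ai_guide = pygame.image.load('track_guide.png')   # assumed filename

def steer(car):
    shade = ai_guide.get_at((int(car.x), int(car.y))).r   # 0 dark ... 255 light
    if shade > 160:
        car.angle -= 2        # light: turn right
    elif shade < 96:
        car.angle += 2        # dark: turn left
    # mid-grey: carry straight on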

Our code will need a lot of extra elements to mimic Atari’s original game, but this short snippet shows how easily you can get a top-down racing game working in Pygame Zero:

Here’s Mark’s code, which gets a Super Sprint-style racer running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 21

You can read more features like this one in Wireframe issue 21, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 21 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate Super Sprint’s top-down racing | Wireframe issue 21 appeared first on Raspberry Pi.

Code your own 2D shooting gallery in Python | Wireframe issue 20

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-your-own-2d-shooting-gallery-in-python-wireframe-issue-20/

Raspberry Pi’s own Rik Cross shows you how to hit enemies with your mouse pointer as they move around the screen.

Duck Hunt made effective use of the NES Zapper, and made a star of its sniggering dog, who’d pop up to heckle you between stages.

Clicky Clicky Bang Bang

Shooting galleries have always been a part of gaming, from the Seeburg Ray-O-Lite in the 1930s to the light gun video games of the past 40 years. Nintendo’s Duck Hunt — played with the NES Zapper — was a popular console shooting game in the mid-eighties, while titles such as Time Crisis and The House of the Dead kept the genre alive in the 1990s and 2000s.

Here, I’ll show you how to use a mouse to fire bullets at moving targets. Code written to instead make use of a light gun and a CRT TV (as with Duck Hunt) would look very different. In these games, pressing the light gun’s trigger would cause the entire screen to go black and an enemy sprite to become bright white. A light sensor at the end of the gun would then check whether the gun is pointed at the white sprite, and if so, would register a hit. If more than one enemy was on the screen when the trigger was pressed, each enemy would flash white for one frame in turn, so that the gun would know which enemy had been hit.

Our simple shooting gallery in Python. You could try adding randomly spawning ducks, a scoreboard, and more.

Pygame Zero

I’ve used two Pygame Zero event hooks for dealing with mouse input. Firstly, the on_mouse_move() function updates the position of the crosshair sprite whenever the mouse is moved. The on_mouse_down() function reacts to mouse button presses, with the left button being pressed to fire a bullet (if numberofbullets > 0) and the right button to reload (setting numberofbullets to MAXBULLETS).
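
A bare-bones sketch of those two hooks is shown below; the crosshair image name, the bullet limit, and the enemies list are assumptions, and it runs under Pygame Zero, which supplies Actor and mouse.

MAXBULLETS = 6
numberofbullets = MAXBULLETS
crosshair = Actor('crosshair')     # assumed image name
enemies = []                       # assumed list of enemy Actors

def on_mouse_move(pos):
    crosshair.pos = pos            # keep the crosshair under the pointer

def on_mouse_down(pos, button):
    global numberofbullets
    if button == mouse.LEFT and numberofbullets > 0:
        numberofbullets -= 1
        for enemy in enemies:
            if crosshair.colliderect(enemy):
                enemy.hit = True   # mark the enemy; update()/draw() react to this
    elif button == mouse.RIGHT:
        numberofbullets = MAXBULLETS   # reload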

Each time a bullet is fired, a check is made to see whether any enemy sprites are colliding with the crosshair — a collision means that an enemy has been hit. Luckily, Pygame Zero has a colliderect() function to tell us whether the rectangular boundary around two sprites intersects.

If this helper function wasn’t available, we’d instead need to use sprites’ x and y coordinates, along with width and height data (w and h below) to check whether the two sprites intersect both horizontally and vertically. This is achieved by coding the following algorithm:

  • Is the left-hand edge of sprite 1 further left than the right-hand edge of sprite 2 (x1 < x2+w2)?
  • Is the right-hand edge of sprite 1 further right than the left-hand edge of sprite 2 (x1+w1 > x2)?
  • Is the top edge of sprite 1 higher up than the bottom edge of sprite 2 (y1 < y2+h2)?
  • Is the bottom edge of sprite 1 lower down than the top edge of sprite 2 (y1+h1 > y2)?

If the answer to the four questions above is True, then the two sprites intersect (see Figure 1). To give visual feedback, hit enemies briefly remain on the screen (in this case, 50 frames). This is achieved by setting a hit variable to True, and then decrementing a timer once this variable has been set. The enemy’s deleted when the timer reaches 0.

Figure 1: A visual representation of a collision algorithm, which checks whether two sprites intersect.
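
Put into code, those four questions collapse into a single expression. A sketch, where each x, y pair is a sprite's top-left corner:

def rects_intersect(x1, y1, w1, h1, x2, y2, w2, h2):
    return (x1 < x2 + w2 and      # 1's left edge is left of 2's right edge
            x1 + w1 > x2 and      # 1's right edge is right of 2's left edge
            y1 < y2 + h2 and      # 1's top edge is above 2's bottom edge
            y1 + h1 > y2)         # 1's bottom edge is below 2's top edge

# e.g. rects_intersect(0, 0, 10, 10, 5, 5, 10, 10) is True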

As well as showing an enemy for a short time after being hit, successful shots are also shown. A problem that needs to be overcome is how to modify an enemy sprite to show bullet holes. A hits list for each enemy stores bullet sprites, which are then drawn over enemy sprites.

Storing hits against an enemy allows us to easily stop drawing these hits once the enemy is removed. In the example code, an enemy stops moving once it has been hit.

If you don’t want this behaviour, then you’ll also need to update the position of the bullets in an enemy’s hits list to match the enemy movement pattern.

When decrementing the number of bullets, the max() function is used to ensure that the bullet count never falls below 0. The max() function returns the highest of the numbers passed to it, and as the maximum of 0 and any negative number is 0, the number of bullets always stays within range.

There are a couple of ways in which the example code could be improved. Currently, a hit is registered when the crosshair intersects with an enemy — even if they are barely touching. This means that often part of the bullet is drawn outside of the enemy sprite boundary. This could be solved by creating a clipping mask around an enemy before drawing a bullet. More visual feedback could also be given by drawing missed shots, stored in a separate list.

Here’s Rik’s code, which lets you hit enemies with your mouse pointer. To get it running on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 20

You can read more features like this one in Wireframe issue 20, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 20 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code your own 2D shooting gallery in Python | Wireframe issue 20 appeared first on Raspberry Pi.

Create your own arcade-style continue screen | Wireframe #19

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/create-your-own-arcade-style-continue-screen-wireframe-19/

Raspberry Pi’s Rik Cross shows you how to create game states, and rules for moving between them.

Ninja Gaiden’s dramatic continue screen. Who would be cruel enough to walk away?

The continue screen, while much less common now, was a staple feature of arcade games, providing an opportunity (for a small fee) to reanimate the game’s hero and to pick up where they left off.

Continue Screens

Games such as Tecmo’s Ninja Gaiden coin-op (known in some regions as Shadow Warriors) added jeopardy to their continue screen, in an effort to convince us to part with our money.

Often, a continue screen is one of many screens that a player may find themselves on; other possibilities being a title screen or an instruction screen. I’ll show you how you can add multiple screens to a game in a structured way, avoiding a tangle of if…else statements and variables.

A simple way of addressing this problem is to create separate update and draw functions for each of these screens, and then switch between these functions as required. Functions are ‘first-class citizens’ of the Python language, which means that they can be stored and manipulated just like any other object, such as numbers, text, and class instances. They can be stored in variables and other data types such as lists and dictionaries, and passed as parameters to (or returned from) other functions.


SNK’s Fantasy, released in 1981, was the first arcade game to feature a continue screen.

We can take advantage of the first-class nature of Python functions by storing the functions for the current screen in variables, and then calling them in the main update() and draw() functions. In the following example, notice the difference between storing a function in a variable (by using the function name without parentheses) and calling the function (by including parentheses).

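Here’s a minimal sketch of that idea in Pygame Zero; the function names follow the article, but the screen-specific bodies are placeholders rather than the magazine’s actual listing:

def continuescreenupdate():
    global currentupdatefunction, currentdrawfunction
    if keyboard.space:
        # switch which functions the main loop calls
        currentupdatefunction = gamescreenupdate
        currentdrawfunction = gamescreendraw

def continuescreendraw():
    screen.clear()
    screen.draw.text("CONTINUE? Press SPACE", center=(400, 300))

def gamescreenupdate():
    pass    # the game's own logic would go here

def gamescreendraw():
    screen.clear()
    screen.draw.text("GAME SCREEN", center=(400, 300))

# store a reference to a function (no parentheses)...
currentupdatefunction = continuescreenupdate
currentdrawfunction = continuescreendraw

def update():
    currentupdatefunction()     # ...and call whatever is currently stored (with parentheses)

def draw():
    currentdrawfunction()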

The example code above calls currentupdatefunction() and currentdrawfunction(), which each store a reference to separate update() and draw() functions for the continue screen. These continue screen functions could then also include logic for changing which function is called, by updating the function reference stored in currentupdatefunction and currentdrawfunction.

This way of structuring code can be taken a step further by making use of state machines. In a state machine, a system can be in one of a (finite) number of predefined states, and rules determine the conditions under which a system can transition from one state into another.

Rules define conditions that need to be satisfied in order to move between states.

A state machine (in this case a very simplified version) can be implemented by first creating a core State() class. Each game state has its own update() and draw() methods, and a rules dictionary containing state:rule pairs – references to other state objects linked to functions for testing game conditions. As an example, the continuescreen state has two rules:

  • Transition to the gamescreen state if the SPACE key is pressed;
  • Transition to the titlescreen state if the frame timer reaches 10.

This is pulled together with a StateMachine() class, which keeps track of the current state. The state machine calls the update() and draw() methods for the current state, and checks the rules for transitioning between states. Each rule in the current state’s rules dictionary is executed, with the state machine updating the reference to its current state if the rule function returns True. I’ve also added a frame counter that is incremented by the state machine’s update() function each time it is run. While not a necessary part of the state machine, it does allow the continue screen to count down from 10, and could have a number of other uses, such as animating sprites.

Something else to point out is the use of lambda functions when adding rules to states. Lambda functions are small, single-expression anonymous functions that return the result of evaluating their expression when called. Lambda functions have been used in this example simply to make the code a little more concise, as there’s no benefit to naming the functions passed to addrule().
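To make that structure concrete, here’s a much-simplified sketch; the class and method names follow the article, but the details (and the example rules at the end) are assumptions:

class State:
    def __init__(self, name, update, draw):
        self.name = name
        self.update = update          # state-specific update() function
        self.draw = draw              # state-specific draw() function
        self.rules = {}               # state : rule pairs

    def addrule(self, state, rule):
        self.rules[state] = rule

class StateMachine:
    def __init__(self, state):
        self.currentstate = state
        self.framecounter = 0

    def update(self):
        self.framecounter += 1
        self.currentstate.update()
        for state, rule in self.currentstate.rules.items():
            if rule():                # transition when a rule returns True
                self.currentstate = state
                self.framecounter = 0
                break

    def draw(self):
        self.currentstate.draw()

# example rules, added with lambdas as described above, e.g.:
# continuescreen.addrule(gamescreen, lambda: keyboard.space)
# continuescreen.addrule(titlescreen, lambda: statemachine.framecounter >= 10 * 60)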

State machines have lots of other potential uses, including the modelling of player states. It’s also possible to extend the state machine in this example by adding onenter() and onexit() functions that can be called when transitioning between states.

Here’s Rik’s code, which gets a simple continue screen up and running in Python. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code, visit our Github repository here.

Get your copy of Wireframe issue 19

You can read more features like this one in Wireframe issue 19, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 19 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Create your own arcade-style continue screen | Wireframe #19 appeared first on Raspberry Pi.

Recreate 3D Monster Maze’s 8-bit labyrinth | Wireframe issue 18

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/recreate-3d-monster-mazes-8-bit-labyrinth-wireframe-issue-18/

You too can recreate the techniques behind a pioneering 3D maze game in Python. Mark Vanstone explains how.

3D Monster Maze, released in 1982 by J.K. Greye Software and written by Malcolm Evans.

3D Monster Maze

While 3D games have become more and more realistic, some may forget that 3D games on home computers started in the mists of time on machines like the Sinclair ZX81. One such pioneering game took pride of place in my collection of tapes, took many minutes to load, and required the 16K RAM pack expansion. That game was 3D Monster Maze — perhaps the most popular game released for the ZX81.

The game was released in 1982 by J.K. Greye Software, and written by Malcolm Evans. Although the graphics were incredibly low resolution by today’s standards, it became an instant hit. The idea of the game was to navigate around a randomly generated maze in search of the exit.

The problem was that a Tyrannosaurus rex also inhabited the maze, and would chase you down and have you for dinner if you didn’t escape quickly enough. The maze itself was made of straight corridors on a 16×18 grid, which the player would move around one block at a time. The shapes of the blocks were displayed using the low-resolution pixels included in the ZX81’s character set, with 2×2 pixels per character on the screen.

The original ZX81 game drew its maze from chunky 2×2 pixel blocks.

Draw imaginary lines

There’s an interesting trick to recreating the original game’s 3D corridor display which, although quite limited, works well for a simplistic rendering of a maze. To do this, we need to draw imaginary lines diagonally from corner to corner in a square viewport: these are our vanishing point perspective guides. Then each corridor block in our view is half the width and half the height of the block nearer to us.

If we draw this out with lines showing the block positions, we get a view that looks like we’re looking down a long corridor with branches leading off left and right. In our Pygame Zero version of the maze, we’re going to use this wireframe as the basis for drawing our block elements. We’ll create graphics for blocks that are near the player, one block away, two, three, and four blocks away. We’ll need to view the blocks from the left-hand side, the right-hand side, and the centre.
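Here’s a minimal, runnable Pygame Zero sketch of those perspective guides; the square 600×600 viewport and the halving loop are assumptions used purely for illustration:

WIDTH = HEIGHT = 600

def draw():
    screen.clear()
    centre = (WIDTH // 2, HEIGHT // 2)
    # diagonal guides from each corner to the central vanishing point
    for corner in [(0, 0), (WIDTH, 0), (0, HEIGHT), (WIDTH, HEIGHT)]:
        screen.draw.line(corner, centre, (0, 255, 0))
    # each corridor block is half the width and height of the one nearer to us
    size = WIDTH
    while size > WIDTH // 16:
        offset = (WIDTH - size) // 2
        screen.draw.rect(Rect(offset, offset, size, size), (0, 255, 0))
        size //= 2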

The maze display is made by drawing diagonal lines to a central vanishing point.

Once we’ve created our block graphics, we’ll need to make some data to represent the layout of the maze. In this example, the maze is built from a 10×10 list of zeros and ones. We’ll set a starting position for the player and the direction they’re facing (0–3), then we’re all set to render a view of the maze from our player’s perspective.

The display is created from furthest away to nearest, so we look four blocks away from the player (in the direction they’re looking) and draw a block if there’s one indicated by the maze data to the left; we do the same on the right, and finally in the middle. Then we move towards the player by a block and repeat the process (with larger graphics) until we get to the block the player is on.
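As a very rough, asset-free sketch of that far-to-near loop, plain rectangles stand in for the pre-drawn block graphics; the demo maze, the start position, and the fixed ‘facing up’ direction are all assumptions:

WIDTH = HEIGHT = 600
maze = [
    [1,1,1,1,1,1,1,1,1,1],
    [1,0,0,0,1,0,0,0,0,1],
    [1,0,1,0,1,0,1,1,0,1],
    [1,0,1,0,0,0,0,1,0,1],
    [1,0,1,1,1,1,0,1,0,1],
    [1,0,0,0,0,1,0,0,0,1],
    [1,1,1,0,0,1,0,1,1,1],
    [1,0,0,0,1,1,0,0,0,1],
    [1,0,1,0,0,0,0,1,0,1],
    [1,1,1,1,1,1,1,1,1,1],
]
px, py = 3, 8      # player position, facing 'up' the maze (towards row 0)

def draw():
    screen.clear()
    for distance in range(4, 0, -1):          # draw the furthest slice first
        row = py - distance
        if row < 0:
            continue
        size = WIDTH // (2 ** distance)       # slices halve in size with distance
        offset = (WIDTH - size) // 2
        if maze[row][px - 1]:                 # block branching off to the left
            screen.draw.filled_rect(Rect(offset - size, offset, size, size), (90, 90, 90))
        if maze[row][px + 1]:                 # block branching off to the right
            screen.draw.filled_rect(Rect(offset + size, offset, size, size), (90, 90, 90))
        if maze[row][px]:                     # block straight ahead
            screen.draw.filled_rect(Rect(offset, offset, size, size), (200, 200, 200))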

Each visible block is drawn from the back forward to make the player’s view of the corridors.

That’s all there is to it. To move backwards and forwards, just change the position in the grid the player’s standing on and redraw the display. To turn, change the direction the player’s looking and redraw. This technique’s obviously a little limited, and will only work with corridors viewed at 90-degree angles, but it launched a whole genre of games on home computers. It really was a big deal for many twelve-year-olds — as I was at the time — and laid the path for the vibrant, fast-moving 3D games we enjoy today.

Here’s Mark’s code, which recreates 3D Monster Maze’s network of corridors in Python. To get it running on your system, you’ll need to install Pygame Zero. And to download the full code, visit our Github repository here.

Get your copy of Wireframe issue 18

You can read more features like this one in Wireframe issue 18, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 18 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate 3D Monster Maze’s 8-bit labyrinth | Wireframe issue 18 appeared first on Raspberry Pi.

Code your own path-following Lemmings in Python | Wireframe issue 17

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-your-own-path-following-lemmings-in-python-wireframe-issue-17/

Learn how to create your own obedient lemmings that follow any path put in front of them. Raspberry Pi’s own Rik Cross explains how.

The original Lemmings, first released for the Amiga, quickly spread like a virus to just about every computer and console of the day.

Lemmings

Lemmings is a puzzle-platformer created at DMA Design and first released for the Amiga in 1991. The aim is to guide a number of small lemming sprites to safety, navigating traps and difficult terrain along the way. Left to their own devices, the lemmings simply follow the path in front of them, but additional ‘special powers’ allow them to (among other things) dig, climb, build, and block in order to create a path to freedom (or to the next level, anyway).

Code your own lemmings

I’ll show you a simple way (using Python and Pygame) in which lemmings can be made to follow the terrain in front of them. The first step is to store the level’s terrain information, which I’ve achieved by using a two-dimensional list to store the colour of each pixel in the background ‘level’ image. In my example, I’ve used the ‘Lemcraft’ tileset by Matt Hackett (of Lost Decade Games) – taken from opengameart.org – and used the Tiled software to stitch the tiles together into a level.

The algorithm we then use can be summarised as follows: check the pixels immediately below a lemming. If the colour of those pixels is the same as the background colour, there’s nothing solid beneath the lemming, so it’s falling. In this case, move the lemming down by one pixel on the y-axis. If the lemming isn’t falling, then it’s walking. In this case, we need to see whether there is a non-ground, background-coloured pixel in front of the lemming for it to move onto.

Sprites cling to the ground below them, navigating uneven terrain, and reversing direction when they hit an impassable obstacle.

If a pixel is found in front of the lemming (determined by its direction) that is low enough to get to (i.e. within its climbheight), then the lemming moves forward on the x-axis by one pixel, and upwards on the y-axis to the new ground level. However, if no suitable ground is found to move onto, then the lemming reverses its direction.

The algorithm is stored as a lemming’s update() method, which is executed for each lemming, each frame of the game. The sample level.png file can be edited, or swapped for another image altogether. If using a different image, just remember to update the level’s BACKGROUND_COLOUR in your code, stored as a (red, green, blue, alpha) tuple. You may also need to increase your lemming climbheight if you want them to be able to navigate a climb of more than four pixels.
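A minimal sketch of that update() method might look like this; the class shape, attribute names, and colour value are assumptions rather than the article’s actual code:

BACKGROUND_COLOUR = (0, 0, 0, 255)    # assumed 'empty space' colour in the level image

class Lemming:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.direction = 1            # 1 = walking right, -1 = walking left
        self.climbheight = 4

    def update(self, level):
        # 'level' is assumed to be a 2D list where level[y][x] holds the colour
        # of pixel (x, y) in the background image
        if level[self.y + 1][self.x] == BACKGROUND_COLOUR:
            self.y += 1               # nothing solid below: the lemming is falling
            return
        # walking: look for a background-coloured pixel ahead, within climbheight
        nextx = self.x + self.direction
        for step in range(self.climbheight + 1):
            if level[self.y - step][nextx] == BACKGROUND_COLOUR:
                self.x = nextx
                self.y -= step        # step up (or stay level) onto the new ground
                return
        self.direction *= -1          # no way forward: turn around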

There are other things you can do to make a full Lemmings clone. You could try replacing the yellow-rectangle lemmings in my example with pixel-art sprites with their own walk cycle animation (see my article in issue #14), or you could give your lemmings some of the special powers they’ll need to get to safety, achieved by creating flags that determine how lemmings interact with the terrain around them.

Here’s Rik’s code, which gets those path-following lemmings moving about in Python. To get it running on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 17

You can read more features like this one in Wireframe issue 17, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 17 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code your own path-following Lemmings in Python | Wireframe issue 17 appeared first on Raspberry Pi.

Recreate the sprite-following Options from Gradius using Python | Wireframe issue 16

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/recreate-the-sprite-following-options-from-gradius-using-python-wireframe-issue-16/

Learn how to create game objects that follow the path of the main player sprite. Raspberry Pi’s own Rik Cross explains all.

Options first appeared in 1985’s Gradius, but became a mainstay of numerous sequels and spin-offs, including the Salamander and Parodius series of games.

Gradius

First released by Konami in 1985, Gradius pushed the boundaries of the shoot-’em-up genre with its varied level design, dramatic boss fights, and innovative power-up system.

One of the most memorable of its power-ups was the Option — a small, drone-like blob that followed the player’s ship and effectively doubled its firepower.

By collecting more power-ups, it was possible to gather a cluster of death-dealing Options, which obediently moved wherever the player moved.

Recreate sprite-following in Python

There are a few different ways of recreating Gradius’ sprite-following, but in this article, I’ll show you a simple implementation that uses the player’s ‘position history’ to place other following items on the screen. As always, I’ll be using Python and Pygame to recreate this effect, and I’ll be making use of a spaceship image created by ‘pitrizzo’ from opengameart.org.

The first thing to do is to create a spaceship and a list of ‘power-up’ objects. Storing the power-ups in a list allows us to perform a simple calculation on a power-up to determine its position, as you’ll see later. As we’ll be iterating through the power-ups stored in a list, there’s no need to create a separate variable for each. Instead, we can use list comprehension to create the power-ups:

powerups = [Actor('powerup') for p in range(3)]

The player’s position history will be a list of previous positions, stored as a list of (x,y) tuples. Each time the player’s position changes, the new position is added to the front of the list (as the new first element). We only need to know the spaceship’s recent position history, so the list is also truncated to only contain the 100 most recent positions. Although not necessary, the following code can be added to allow you to see a selection (in this case every fifth) of these previous positions:

for p in previouspositions[::5]:
    screen.draw.filled_circle(p, 2, (255, 0, 0))

Plotting the spaceship’s position history.

Each frame of the game, this position list is used to place each of the power-ups. In our Gradius-like example, we need each of these objects to follow the player’s spaceship in a line, as if moving together in a single-file queue. To achieve this effect, a power-up’s position is determined by its position in the power-ups list, with the first power-up in the list taking up a position nearest to the player. In Python, using enumerate when iterating through a list allows us to get the power-up’s position in the list, which can then be used to determine which position in the player’s position history to use.

newposition = previouspositions[(i+1)*20]

So, the first power-up in the list (element 0 in the list) is placed at the coordinates of the twentieth ((0+1)*20) position in the spaceship’s history, the second power-up at the fortieth position, and so on. Using this simple calculation, elements are equally spaced along the spaceship’s previous path. The only thing to be careful of here is that you have enough items in the position history for the number of items you want to follow the player!
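Pulled together, the bookkeeping and placement might look something like this minimal sketch; the update() body is an assumption built from the snippets above:

def update():
    # ...code that moves the spaceship goes here...
    previouspositions.insert(0, (spaceship.x, spaceship.y))   # newest position first
    del previouspositions[100:]                               # keep the 100 most recent
    for i, powerup in enumerate(powerups):
        powerup.pos = previouspositions[(i + 1) * 20]         # 20 positions apart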

Power-ups following a player sprite, using the player’s position history.

This leaves one more question to answer: where do we place these power-ups initially, when the spaceship has no position history? There are a few different ways of solving this problem, but the simplest is just to generate a fictitious position history at the beginning of the game. As I want the power-ups to be lined up behind the spaceship initially, I again used a list comprehension to generate a list of 100 positions with ever-decreasing x-coordinates.

previouspositions = [(spaceship.x - i*spaceship.speed,spaceship.y) for i in range(100)]

With an initial spaceship position of (400,400) and a spaceship.speed of 4, this means the list will initially contain the following coordinates:

previouspositions = [(400,400),(396,400),(392,400),(388,400),...]

Storing our player’s previous position history has allowed us to create path-following power-ups with very little code. The idea of storing an object’s history can have very powerful applications. For example, a paint program could store previous commands that have been executed, and include an ‘undo’ button that can work backwards through the commands.

Here’s Rik’s code, which recreates those sprite-following Options in Python. To get it running on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 16

You can read more features like this one in Wireframe issue 16, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 16 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate the sprite-following Options from Gradius using Python | Wireframe issue 16 appeared first on Raspberry Pi.

Coding an isometric game map | Wireframe issue 15

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/coding-an-isometric-game-map-wireframe-issue-15/

Isometric graphics give 2D games the illusion of depth. Mark Vanstone explains how to make an isometric game map of your own.

Published by Quicksilva in 1983, Ant Attack was one of the earliest games to use isometric graphics. And you threw grenades at giant ants. It was brilliant.

Isometric projection

Most early arcade games were 2D, but in 1982, a new dimension emerged: isometric projection. The first isometric game to hit arcades was Sega’s pseudo-3D shooter, Zaxxon. The eye-catching format soon caught on, and other isometric titles followed: Q*bert came out the same year, and in 1983 the first isometric game for home computers was published: Ant Attack, written by Sandy White.

Ant Attack

Ant Attack was first released on the ZX Spectrum, and the aim of the game was for the player to find and rescue a hostage in a city infested with giant ants. The isometric map has since been used by countless titles, including Ultimate Play The Game’s classics Knight Lore and Alien 8, and my own educational history series ArcVenture.

Let’s look at how an isometric display is created, and code a simple example of how this can be done in Pygame Zero — so let’s start with the basics. The isometric view displays objects as if you’re looking down at 45 degrees onto them, so the top of a cube looks like a diamond shape. The scene is made by drawing cubes on a diagonal grid so that the cubes overlap and create solid-looking structures. Additional layers can be used above them to create the illusion of height.

Blocks are drawn from the back forward, one line at a time and then one layer on top of another until the whole map is drawn.

The cubes are actually two-dimensional bitmaps, which we start printing at the top of the display and move along a diagonal line, drawing cubes as we go. The map is defined by a three-dimensional list (or array). The list is the width of the map by the height of the map, and has as many layers as we want to represent in the upward direction. In our example, we’ll represent the floor as the value 0 and a block as value 1. We’ll make a border around the map and create some arches and pyramids, but you could use any method you like — such as a map editor — to create the map data.

To make things a bit easier on the processor, we only need to draw cubes that are visible in the window, so we can do a check of the coordinates before we draw each cube. Once we’ve looped over the x, y, and z axes of the data list, we should have a 3D map displayed. The whole map doesn’t fit in the window, and in a full game, the map is likely to be many times the size of the screen. To see more of the map, we can add some keyboard controls.
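Here’s a rough sketch of that nested drawing loop in Pygame Zero; the ‘block’ image name, the tile sizes, and the demo map data are assumptions rather than the article’s assets:

WIDTH, HEIGHT = 800, 600
TILE_W, TILE_H, BLOCK_H = 64, 32, 32          # assumed block image dimensions
# a demo map: one solid floor layer with two empty layers above it
mapdata = [[[1 if z == 0 else 0 for x in range(10)] for y in range(10)] for z in range(3)]
originx, originy = WIDTH // 2, 100            # where drawing starts; change these to scroll

def draw():
    screen.clear()
    for z in range(len(mapdata)):                        # bottom layer first
        for y in range(len(mapdata[z])):                 # back rows first
            for x in range(len(mapdata[z][y])):
                if mapdata[z][y][x] == 1:
                    sx = originx + (x - y) * TILE_W // 2
                    sy = originy + (x + y) * TILE_H // 2 - z * BLOCK_H
                    # only draw cubes that are visible in the window
                    if -TILE_W <= sx <= WIDTH and -TILE_H - BLOCK_H <= sy <= HEIGHT:
                        screen.blit('block', (sx, sy))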

Here’s Mark’s isometric map, coded in Python. To get it running on your system, you’ll first need to install Pygame Zero. And to download the full code, visit our Github repository here.

If we detect keyboard presses in the update() function, all we need to do to move the map is change the coordinates we start drawing the map from. If we start drawing further to the left, the right-hand side of the map emerges, and if we draw the map higher, the lower part of the map can be seen.
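A minimal sketch of those keyboard controls, assuming the originx and originy variables from the sketch above:

def update():
    global originx, originy
    if keyboard.left:
        originx += 4      # start drawing further right, revealing the left of the map
    if keyboard.right:
        originx -= 4      # start drawing further left, revealing the right of the map
    if keyboard.up:
        originy += 4
    if keyboard.down:
        originy -= 4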

We now have a basic map made of cubes that we can move around the window. If we want to make this into a game, we can expand the way the data represents the display. We could add differently shaped blocks represented by different numbers in the data, and we could include a player block which gets drawn in the draw() function and can be moved around the map. We could also have some enemies moving around — and before we know it, we’ll have a game a bit like Ant Attack.

Tiled

When writing games with large isometric maps, an editor will come in handy. You can write your own, but there are several out there that you can use. One very good one is called Tiled and can be downloaded free from mapeditor.org. Tiled allows you to define your own tilesets and export the data in various formats, including JSON, which can be easily read into Python.

Get your copy of Wireframe issue 15

You can read more features like this one in Wireframe issue 15, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 15 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Coding an isometric game map | Wireframe issue 15 appeared first on Raspberry Pi.

Make a Donkey Kong–style walk cycle | Wireframe issue 14

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/make-a-donkey-kong-style-walk-cycle-wireframe-issue-14/

Effective animation gave Donkey Kong barrels of personality. Raspberry Pi’s own Rik Cross explains how to create a similar walk cycle.

Donkey Kong wasn’t the first game to feature an animated character who could walk and jump, but on its release in 1981, it certainly had more personality than the games that came before it. You only have to compare Donkey Kong to another Nintendo arcade game that came out just two years earlier — the half-forgotten top-down shooter Sheriff — to see how quickly both technology and pixel art moved on in that brief period. Although simple by modern standards, Donkey Kong’s hero Jumpman (later known as Mario) packed movement and personality into just a few frames of animation.

In this article, I’ll show you how to use Python and Pygame to create a character with a simple walk cycle animation like Jumpman’s in Donkey Kong. The code can, however, be adapted for any game object that requires animation, and even for multiple game object animations, as I’ll explain later.

Jumpman’s (aka Mario’s) walk cycle comprised just three frames of animation.

Firstly, we’ll need some images to animate. As this article is focused on the animation code and not the theory behind creating walk cycle images, I grabbed some suitable images created by Kenney Vleugels and available at opengameart.org.

Let’s start by animating the player with a simple walk cycle. The two images to be used in the animation are stored in an images list, and an animationindex variable keeps track of the index of the current image in the list to display. So, for a very simple animation with just two different frames, the images list will contain two different images:

images = ['walkleft1', 'walkleft2']

To achieve a looping animation, the animationindex is repeatedly incremented, and is reset to 0 once the end of the images list is reached. Displaying the current image can then be achieved by using the animationindex to reference and draw the appropriate image in the animation cycle:

self.image = self.images[self.animationindex]

A list of images along with an index is used to loop through an animation cycle.

The problem with the code described so far is that the animationindex is incremented once per frame, and so the walk cycle will happen way too quickly, and won’t look natural. To solve this problem, we need to tell the player to update its animation every few frames, rather than every frame. To achieve this, we need another couple of variables; I’ll use animationdelay to store the number of frames to skip between displayed images, and animationtimer to store the number of frames since the last image change.
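These extra variables might be initialised along these lines (a sketch; the delay value is an assumption):

self.animationindex = 0
self.animationtimer = 0
self.animationdelay = 6     # wait six frames between image changes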

Therefore, the code needed to animate the player becomes:

self.animationtimer += 1
if self.animationtimer >= self.animationdelay:
    self.animationtimer = 0
    self.animationindex += 1
    if self.animationindex > len(self.images) - 1:
        self.animationindex = 0
    self.image = self.images[self.animationindex]

So we have a player that appears to be walking, but now the problem is that the player walks constantly, and always in the same direction! The rest of this article will show you how to solve these two related problems.

There are a few different ways to approach this problem, but the method I’ll use is to make use of game object states, and then have different animations for each state. This method is a little more complicated, but it’s very adaptable.

The first thing to do is to decide on what the player’s ‘states’ might be — stand, walkleft, and walkright will do as a start. Just as we did with our previous single animation, we can now define a list of images for each of the possible player’s states. Again, there are lots of ways of structuring this data, but I’ve opted for a Python dictionary linking states and image lists:

self.images = {
    'stand'     : ['stand1'],
    'walkleft'  : ['walkleft1', 'walkleft2'],
    'walkright' : ['walkright1', 'walkright2']
}

The player’s state can then be stored, and the correct image obtained by using the value of state along with the animationindex:

self.image = self.images[self.state][self.animationindex]

The correct player state can then be set by checking the keyboard input, setting the player to walkleft if the left arrow key is pressed, or walkright if the right arrow key is pressed. If neither key is pressed, the player can be set to the stand state, whose image list contains a single image of the player facing the camera.
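A minimal sketch of that state selection might look like this; resetting animationindex on a state change is an assumption, added so the index can’t overrun the shorter stand image list:

newstate = 'stand'
if keyboard.left:
    newstate = 'walkleft'
elif keyboard.right:
    newstate = 'walkright'
if newstate != self.state:
    self.state = newstate
    self.animationindex = 0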

Animation cycles can be linked to player ‘states’.

For simplicity, a maximum of two images are used for each animation cycle; adding more images would create a smoother or more realistic animation.

Using the code above, it would also be possible to easily add additional states for, say, jumping or fighting enemies. You could even take things further by defining an Animation() object for each player state. This way, you could specify the speed and other properties (such as whether or not to loop) for each animation separately, giving you greater flexibility.

Here’s Rik’s animated walk cycle, coded in Python. To get it running on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 14

You can read more features like this one in Wireframe issue 14, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 14 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Make a Donkey Kong–style walk cycle | Wireframe issue 14 appeared first on Raspberry Pi.