Tag Archives: python

Open-Sourcing Metaflow, a Human-Centric Framework for Data Science

Post Syndicated from Netflix Technology Blog original https://medium.com/netflix-techblog/open-sourcing-metaflow-a-human-centric-framework-for-data-science-fa72e04a5d9?source=rss----2615bd06b42e---4

by David Berg, Ravi Kiran Chirravuri, Romain Cledat, Savin Goyal, Ferras Hamad, Ville Tuulos

tl;dr Metaflow is now open-source! Get started at metaflow.org.

Netflix applies data science to hundreds of use cases across the company, including optimizing content delivery and video encoding. Data scientists at Netflix relish our culture that empowers them to work autonomously and use their judgment to solve problems independently. We want our data scientists to be curious and take smart risks that have the potential for high business impact.

About two years ago, we at our newly formed Machine Learning Infrastructure team started asking our data scientists a question: “What is the hardest thing for you as a data scientist at Netflix?” We were expecting to hear answers related to large-scale data and models, and maybe issues related to modern GPUs. Instead, we heard stories about projects where getting the first version to production took surprisingly long — mainly because of mundane reasons related to software engineering. We heard many stories about difficulties related to data access and basic data processing. We sat in meetings where data scientists discussed with their stakeholders how best to manage different versions of their models without impacting production. We saw how excited data scientists were about modern off-the-shelf machine learning libraries, but we also witnessed various issues caused by these libraries when they were casually included as dependencies in production workflows.

We realized that nearly everything that data scientists wanted to do was already doable technically, but nothing was easy enough. Our job as a Machine Learning Infrastructure team would therefore not be mainly about enabling new technical feats. Instead, we should make common operations so easy that data scientists would not even realize that they were difficult before. We would focus our energy solely on improving data scientist productivity by being fanatically human-centric.

How could we improve the quality of life for data scientists? The following picture started emerging:

Our data scientists love the freedom of being able to choose the best modeling approach for their project. They know that feature engineering is critical for many models, so they want to stay in control of model inputs and feature engineering logic. In many cases, data scientists are quite eager to own their own models in production, since it allows them to troubleshoot and iterate the models faster.

On the other hand, very few data scientists feel strongly about the nature of the data warehouse, the compute platform that trains and scores their models, or the workflow scheduler. Preferably, from their point of view, these foundational components should “just work”. If and when they fail, the error messages should be clear and understandable in the context of their work.

A key observation was that most of our data scientists had nothing against writing Python code. In fact, plain-and-simple Python is quickly becoming the lingua franca of data science, so using Python is preferable to domain specific languages. Data scientists want to retain their freedom to use arbitrary, idiomatic Python code to express their business logic — like they would do in a Jupyter notebook. However, they don’t want to spend too much time thinking about object hierarchies, packaging issues, or dealing with obscure APIs unrelated to their work. The infrastructure should allow them to exercise their freedom as data scientists but it should provide enough guardrails and scaffolding, so they don’t have to worry about software architecture too much.

Introducing Metaflow

These observations motivated Metaflow, our human-centric framework for data science. Over the past two years, Metaflow has been used internally at Netflix to build and manage hundreds of data-science projects from natural language processing to operations research.

By design, Metaflow is a deceptively simple Python library:

Data scientists can structure their workflow as a Directed Acyclic Graph of steps, as depicted above. The steps can be arbitrary Python code. In this hypothetical example, the flow trains two versions of a model in parallel and chooses the one with the highest score.
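
The flow in the figure isn’t reproduced here, but a hypothetical Metaflow flow along the same lines might look like this (the class, step, and artifact names, and the stand-in training logic, are ours for illustration, not the example from the figure):

from metaflow import FlowSpec, step


class TrainFlow(FlowSpec):
    """Hypothetical flow: train two model variants in parallel, keep the best."""

    @step
    def start(self):
        # Anything assigned to self becomes an artifact that Metaflow persists.
        self.train_data = [1, 2, 3, 4]          # stand-in for real training data
        self.next(self.train_a, self.train_b)   # branch: run both steps in parallel

    @step
    def train_a(self):
        self.variant, self.score = 'a', 0.80    # stand-in for training and scoring
        self.next(self.choose_model)

    @step
    def train_b(self):
        self.variant, self.score = 'b', 0.92
        self.next(self.choose_model)

    @step
    def choose_model(self, inputs):
        # Join step: keep whichever branch produced the highest score.
        best = max(inputs, key=lambda inp: inp.score)
        self.best_variant = best.variant
        self.next(self.end)

    @step
    def end(self):
        print('best variant:', self.best_variant)


if __name__ == '__main__':
    TrainFlow()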

On the surface, this doesn’t seem like much. There are many existing frameworks, such as Apache Airflow or Luigi, which allow execution of DAGs consisting of arbitrary Python code. The devil is in the many carefully designed details of Metaflow: for instance, note how in the above example data and models are stored as normal Python instance variables. They work even if the code is executed on a distributed compute platform, which Metaflow supports by default, thanks to Metaflow’s built-in content-addressed artifact store. In many other frameworks, loading and storing of artifacts is left as an exercise for the user, which forces them to decide what should and should not be persisted. Metaflow removes this cognitive overhead.

Metaflow is packed with human-centric details like this, all of which aim at boosting data scientist productivity. For a comprehensive overview of all features of Metaflow, take a look at our documentation at docs.metaflow.org.

Metaflow on Amazon Web Services

Netflix’s data warehouse contains hundreds of petabytes of data. While a typical machine learning workflow running on Metaflow touches only a small shard of this warehouse, it can still process terabytes of data.

Metaflow is a cloud-native framework. It leverages elasticity of the cloud by design — both for compute and storage. Netflix has been one of the largest users of Amazon Web Services (AWS) for many years and we have accumulated plenty of operational experience and expertise in dealing with the cloud, AWS in particular. For the open-source release, we partnered with AWS to provide a seamless integration between Metaflow and various AWS services.

Metaflow comes with built-in capability to snapshot all code and data in Amazon S3 automatically, which is a key value proposition of our internal Metaflow setup. This provides us with a comprehensive solution for versioning and experiment tracking without any user intervention, which is core to any production-grade machine learning infrastructure.

In addition, Metaflow comes bundled with a high-performance S3 client, which can load data at up to 10 Gbps. This client has been massively popular amongst our users, who can now load data into their workflows an order of magnitude faster than before, enabling faster iteration cycles.

For general purpose data processing, Metaflow integrates with AWS Batch, which is a managed, container-based compute platform provided by AWS. The user can benefit from infinitely scalable compute clusters by adding a single line in their code: @batch. For training machine learning models, besides writing their own functions, the user has the choice to use AWS Sagemaker, which provides high-performance implementations of various models, many of which support distributed training.

Metaflow supports all common off-the-shelf machine learning frameworks through our @conda decorator, which allows the user to specify external dependencies for their steps safely. The @conda decorator freezes the execution environment, providing good guarantees of reproducibility, both when executed locally as well as in the cloud.
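
As a rough sketch of how these decorators attach to a step (the resource sizes and the pinned library version below are placeholders, not recommendations):

from metaflow import FlowSpec, step, batch, conda


class ScaledTrainingFlow(FlowSpec):

    @step
    def start(self):
        self.next(self.train)

    @conda(libraries={'scikit-learn': '0.21.3'})   # freeze external dependencies
    @batch(cpu=4, memory=16000)                    # run this step on AWS Batch
    @step
    def train(self):
        import sklearn                             # resolved inside the frozen environment
        self.trained = True
        self.next(self.end)

    @step
    def end(self):
        pass


if __name__ == '__main__':
    ScaledTrainingFlow()

Removing the decorators runs the same step locally; the step’s code itself does not change.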

For more details, read this page about Metaflow’s integration with AWS.

From Prototype To Production

Out of the box, Metaflow provides a first-class local development experience. It allows data scientists to develop and test code quickly on their laptops, just like any other Python script. If the workflow supports parallelism, Metaflow takes advantage of all the CPU cores available on the development machine.

We encourage our users to deploy their workflows to production as soon as possible. In our case, “production” means a highly available, centralized DAG scheduler, Meson, where users can export their Metaflow runs for execution with a single command. This allows them to quickly start testing their workflow against regularly updating data, which is a highly effective way to surface bugs and issues in the model. Since Meson is not available as open source, we are working on providing a similar integration with AWS Step Functions, which is a highly available workflow scheduler.

In a complex business environment like Netflix’s, there are many ways to consume the results of a data science workflow. Often, the final results are written to a table, to be consumed by a dashboard. Sometimes, the resulting model is deployed as a microservice to support real-time inferencing. It is also common to chain workflows so that the results of a workflow are consumed by another. Metaflow supports all these modalities, although some of these features are not yet available in the open-source version.

When it comes to inspecting the results, Metaflow comes with a notebook-friendly client API. Most of our data scientists are heavy users of Jupyter notebooks, so we decided to focus our UI efforts on a seamless integration with notebooks, instead of providing a one-size-fits-all Metaflow UI. Our data scientists can build custom model UIs in notebooks that fetch artifacts from Metaflow and present just the right information about each model. With open-source Metaflow, a similar experience is available in AWS Sagemaker notebooks.
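
In a notebook, fetching artifacts from a past run looks roughly like this (reusing the hypothetical TrainFlow and artifact name from the earlier sketch):

from metaflow import Flow

# Grab the most recent successful run of the (hypothetical) flow and read its artifacts.
run = Flow('TrainFlow').latest_successful_run
print(run.id)
print(run.data.best_variant)   # any instance variable saved in a step is available here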

Get Started With Metaflow

Metaflow has been eagerly adopted inside of Netflix, and today, we are making Metaflow available as an open-source project.

We hope that our vision of data scientist autonomy and productivity resonates outside Netflix as well. We welcome you to try Metaflow, start using it in your organization, and participate in its development.

You can find the project home page at metaflow.org and the code at github.com/Netflix/metaflow. Metaflow is comprehensively documented at docs.metaflow.org. The quickest way to get started is to follow our tutorial. If you want to learn more before getting your hands dirty, you can watch presentations about Metaflow at a high level or dig deeper into the internals of Metaflow.

If you have any questions, thoughts, or comments about Metaflow, you can find us at Metaflow chat room or you can reach us by email at [email protected]. We are eager to hear from you!


Open-Sourcing Metaflow, a Human-Centric Framework for Data Science was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Code a Frogger-style road-crossing game | Wireframe #27

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-a-frogger-style-road-crossing-game-wireframe-27/

Guide a frog across busy roads and rivers. Mark Vanstone shows you how to code a simple remake of Konami’s arcade game, Frogger.

Konami’s original Frogger: so iconic, it even featured in a 1998 episode of Seinfeld.

Frogger

Why did the frog cross the road? Because Frogger would be a pretty boring game if it didn’t. Released in 1981 by Konami, the game appeared in assorted bars, sports halls, and arcades across the world, and became an instant hit. The concept was simple: players used the joystick to move a succession of frogs from the bottom of the screen to the top, avoiding a variety of hazards – cars, lorries, and later, the occasional crocodile. Each frog had to be safely manoeuvred to one of five alcoves within a time limit, while extra points were awarded for eating flies along the way.

Before Frogger, Konami mainly focused on churning out clones of other hit arcade games like Space Invaders and Breakout; Frogger was one of its earliest original ideas, and the simplicity of its concept saw it ported to just about every home system available at the time. (Ironically, Konami’s game would fall victim to repeated cloning by other developers.) Decades later, developers still take inspiration from it; Hipster Whale’s Crossy Road turned Frogger into an endless running game; earlier this year, Konami returned to the creative well with Frogger in Toy Town, released on Apple Arcade.

Code your own Konami Frogger

We can recreate much of Frogger’s gameplay in just a few lines of Pygame Zero code. The key elements are the frog’s movement, which uses the arrow keys; vehicles that move across the screen; and floating objects – logs and turtles – moving in opposite directions. Our background graphic will provide the road, river, and grass for our frog to move over. The frog’s movement will be triggered from an on_key_down() function, and as the frog moves, we switch to a second frame with legs outstretched, reverting to a sitting position after a short delay. We can use the inbuilt Actor properties to change the image and set the angle of rotation.
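
A minimal sketch of that movement logic might look like the following (image names, the 40-pixel grid, and the timing are placeholders rather than Mark’s actual code):

import pgzrun

WIDTH, HEIGHT = 480, 600
frog = Actor('frog', center=(WIDTH // 2, HEIGHT - 20))

def sit():
    frog.image = 'frog'                     # back to the sitting frame

def on_key_down(key):
    moves = {keys.UP: (0, -40, 0), keys.DOWN: (0, 40, 180),
             keys.LEFT: (-40, 0, 90), keys.RIGHT: (40, 0, -90)}
    if key in moves:
        dx, dy, angle = moves[key]
        frog.image = 'frog_jump'            # legs-outstretched frame
        frog.angle = angle                  # rotate to face the direction of travel
        frog.x += dx
        frog.y += dy
        clock.schedule_unique(sit, 0.15)    # revert after a short delay

def draw():
    screen.clear()
    frog.draw()

pgzrun.go()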

In our Frogger homage, we move the frog with the arrow keys to avoid the traffic, and jump onto the floating logs and turtles.

For all the other moving elements, we can also use Pygame Zero Actors; we just need to make an array for our cars with different graphics for the various rows, and an array for our floating objects in the same way.
In our update() function, we need to move each Actor according to which row it’s in, and when an Actor disappears off the screen, set the x coordinate so that it reappears on the opposite side.
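
Continuing the sketch above, the update-and-wrap behaviour could be expressed like this (row positions, speeds, and image names are made up for illustration):

cars = [Actor('car{}'.format(row), (x, 440 - row * 40))
        for row in range(4) for x in range(0, WIDTH, 160)]

def update():
    for car in cars:
        row = (440 - int(car.y)) // 40            # which road row is this car in?
        direction = 1 if row % 2 == 0 else -1     # alternate rows drive opposite ways
        car.x += (row + 1) * direction
        if direction == 1 and car.left > WIDTH:   # driven off the right-hand edge...
            car.right = 0                         # ...so reappear on the left
        elif direction == -1 and car.right < 0:
            car.left = WIDTH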

Handling the logic of the frog moving across the road is quite easy; we just check for collision with each of the cars, and if the frog hits a car, then we have a squashed frog. The river crossing is a little more complicated. Each time the frog moves on the river, we need to make sure that it’s on a floating Actor. We therefore check to make sure that the frog is in collision with one of the floating elements, otherwise it’s game over.
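
That collision logic, continuing the same sketch, amounts to a few checks (the river boundary and the logs list are again placeholders):

RIVER_BOTTOM = 280                                  # rows above this y are river

def check_frog():
    global game_over
    if any(frog.colliderect(car) for car in cars):
        game_over = True                            # hit by a car: squashed frog
    elif frog.y < RIVER_BOTTOM:
        carrier = next((log for log in logs if frog.colliderect(log)), None)
        if carrier is None:
            game_over = True                        # in the water with nothing to stand on
        else:
            frog.x += carrier.dx                    # drift along with the log or turtle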

There are lots of other elements you could add to the example shown here: the original arcade game provided several frogs to guide to their alcoves on the other side of the river, while crocodiles also popped up from time to time to add a bit more danger. Pygame Zero has all the tools you need to make a fully functional version of Konami’s hit.

Here’s Mark’s code, which gets a Frogger homage running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 27

You can read more features like this one in Wireframe issue 27, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 27 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code a Frogger-style road-crossing game | Wireframe #27 appeared first on Raspberry Pi.

Introducing Flan Scan: Cloudflare’s Lightweight Network Vulnerability Scanner

Post Syndicated from Nadin El-Yabroudi original https://blog.cloudflare.com/introducing-flan-scan/


Today, we’re excited to open source Flan Scan, Cloudflare’s in-house lightweight network vulnerability scanner. Flan Scan is a thin wrapper around Nmap that converts this popular open source tool into a vulnerability scanner with the added benefit of easy deployment.

We created Flan Scan after two unsuccessful attempts at using “industry standard” scanners for our compliance scans. A little over a year ago, we were paying a big vendor for their scanner until we realized it was one of our highest security costs and many of its features were not relevant to our setup. It became clear we were not getting our money’s worth. Soon after, we switched to an open source scanner and took on the task of managing its complicated setup. That made it difficult to deploy to our entire fleet of more than 190 data centers.

We had a deadline at the end of Q3 to complete an internal scan for our compliance requirements but no tool that met our needs. Given our history with existing scanners, we decided to set off on our own and build a scanner that worked for our setup. To design Flan Scan, we worked closely with our auditors to understand the requirements of such a tool. We needed a scanner that could accurately detect the services on our network and then lookup those services in a database of CVEs to find vulnerabilities relevant to our services. Additionally, unlike other scanners we had tried, our tool had to be easy to deploy across our entire network.

We chose Nmap as our base scanner because, unlike other network scanners which sacrifice accuracy for speed, it prioritizes detecting services thereby reducing false positives. We also liked Nmap because of the Nmap Scripting Engine (NSE), which allows scripts to be run against the scan results. We found that the “vulners” script, available on NSE, mapped the detected services to relevant CVEs from a database, which is exactly what we needed.

The next step was to make the scanner easy to deploy while ensuring it outputted actionable and valuable results. We added three features to Flan Scan which helped package up Nmap into a user-friendly scanner that can be deployed across a large network.

  • Easy Deployment and Configuration – To create a lightweight scanner with easy configuration, we chose to run Flan Scan inside a Docker container. As a result, Flan Scan can be built and pushed to a Docker registry and maintains the flexibility to be configured at runtime. Flan Scan also includes sample Kubernetes configuration and deployment files with a few placeholders so you can get up and scanning quickly.
  • Pushing results to the Cloud – Flan Scan adds support for pushing results to a Google Cloud Storage Bucket or an S3 bucket. All you need to do is set a few environment variables and Flan Scan will do the rest. This makes it possible to run many scans across a large network and collect the results in one central location for processing.
  • Actionable Reports – Flan Scan generates actionable reports from Nmap’s output so you can quickly identify vulnerable services on your network, the applicable CVEs, and the IP addresses and ports where these services were found. The reports are useful for engineers following up on the results of the scan as well as auditors looking for evidence of compliance scans.

Sample run of Flan Scan from start to finish. 

How has Flan Scan improved Cloudflare’s network security?

By the end of Q3, not only had we completed our compliance scans, we also used Flan Scan to tangibly improve the security of our network. At Cloudflare, we pin the software version of some services in production because it allows us to prioritize upgrades by weighing the operational cost of upgrading against the improvements of the latest version. Flan Scan’s results revealed that our FreeIPA nodes, used to manage Linux users and hosts, were running an outdated version of Apache with several medium severity vulnerabilities. As a result, we prioritized their update. Flan Scan also found a vulnerable instance of PostgreSQL leftover from a performance dashboard that no longer exists.

Flan Scan is part of a larger effort to expand our vulnerability management program. We recently deployed osquery to our entire network to perform host-based vulnerability tracking. By complementing osquery’s findings with Flan Scan’s network scans we are working towards comprehensive visibility of the services running at our edge and their vulnerabilities. With two vulnerability trackers in place, we decided to build a tool to manage the increasing number of vulnerability  sources. Our tool sends alerts on new vulnerabilities, filters out false positives, and tracks remediated vulnerabilities. Flan Scan’s valuable security insights were a major impetus for creating this vulnerability tracking tool.

How does Flan Scan work?


The first step of Flan Scan is running an Nmap scan with service detection. Flan Scan’s default Nmap scan runs the following scans:

  1. ICMP ping scan – Nmap determines which of the IP addresses given are online.
  2. SYN scan – Nmap scans the 1000 most common ports of the IP addresses which responded to the ICMP ping. Nmap marks ports as open, closed, or filtered.
  3. Service detection scan – To detect which services are running on open ports Nmap performs TCP handshake and banner grabbing scans.

Other types of scanning such as UDP scanning and IPv6 addresses are also possible with Nmap. Flan Scan allows users to run these and any other extended features of Nmap by passing in Nmap flags at runtime.

Sample Nmap output

Flan Scan adds the “vulners” script tag in its default Nmap command to include in the output a list of vulnerabilities applicable to the services detected. The vulners script works by making API calls to a service run by vulners.com which returns any known vulnerabilities for the given service.

Sample Nmap output with Vulners script

The next step of Flan Scan uses a Python script to convert the structured XML of Nmap’s output to an actionable report. The reports of the previous scanner we used listed each of the IP addresses scanned and presented the vulnerabilities applicable to that address. Since we had multiple IP addresses running the same service, the report would repeat the same list of vulnerabilities under each of these IP addresses. This meant scrolling back and forth on documents hundreds of pages long to obtain a list of all IP addresses with the same vulnerabilities. The results were impossible to digest.

Flan Scan’s results are structured around services. The report enumerates all vulnerable services, listing beneath each one the relevant vulnerabilities and all IP addresses running that service. This structure makes the report shorter and more actionable, since the services that need to be remediated can be clearly identified. Flan Scan reports are made using LaTeX because who doesn’t like nicely formatted reports that can be generated with a script? The raw LaTeX file that Flan Scan outputs can be converted to a beautiful PDF by using tools like pdflatex or TeXShop.
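
Flan Scan’s actual report generation lives in its repository; purely to illustrate the idea of grouping Nmap’s XML output by service rather than by IP address, a minimal parser might look like this (the element and attribute names follow Nmap’s XML schema, but the grouping logic is ours):

import sys
import xml.etree.ElementTree as ET
from collections import defaultdict

def group_by_service(xml_path):
    """Group scanned addresses and ports by the service Nmap detected on them."""
    grouped = defaultdict(list)   # (name, product, version) -> ["ip:port", ...]
    root = ET.parse(xml_path).getroot()
    for host in root.iter('host'):
        addr = host.find('address').get('addr')
        for port in host.iter('port'):
            svc = port.find('service')
            if svc is None:
                continue
            key = (svc.get('name'), svc.get('product', ''), svc.get('version', ''))
            grouped[key].append('{}:{}'.format(addr, port.get('portid')))
    return grouped

if __name__ == '__main__':
    for (name, product, version), locations in group_by_service(sys.argv[1]).items():
        print(name, product, version, '->', ', '.join(locations))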

Sample Flan Scan report

What’s next?

Cloudflare’s mission is to help build a better Internet for everyone, not just Internet giants who can afford to buy expensive tools. We’re open sourcing Flan Scan because we believe it shouldn’t cost tons of money to have strong network security.

You can get started running a vulnerability scan on your network in a few minutes by following the instructions on the README. We welcome contributions and suggestions from the community.

Python 3.8 runtime now available in AWS Lambda

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/python-3-8-runtime-now-available-in-aws-lambda/

You can now develop your AWS Lambda functions using the Python 3.8 runtime. Start using this runtime today by specifying a runtime parameter value of python3.8 when creating or updating Lambda functions.

New Python runtime features

Python 3.8 is a stable release and brings several new features, including assignment expressions, positional-only arguments, and vectorcall.

Assignment expressions

Assignment expressions provide a way to assign values to variables within expressions using the new notation NAME := expr. This is informally known as the “walrus operator,” as it looks like the eyes and tusks of a walrus on its side.

Before:

>>> walrus = True
>>> print(walrus)
True

After:

>>> print(walrus := True)
True

Previously, assignment was only available in statement form. With Python 3.8, it is available in list comprehensions and other expression contexts.
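
For instance, an assignment expression lets a list comprehension compute a value once and reuse it in both the filter and the result (a small illustration of ours, not taken from the release notes):

>>> data = [2, 5, 11, 3]
>>> [y for x in data if (y := x * x) > 10]
[25, 121]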

For examples, usage, and limitations, see PEP-572.

Positional-only arguments

Positional-only parameters give more control to library authors by introducing a new function parameter syntax `/` to indicate that some function parameters must be specified positionally and cannot be used as keyword arguments.

When describing APIs, they can be used to better express the intended usage and allow the API to evolve in a safe, backward-compatible way. They also make the Python language more consistent with existing documentation and the behavior of various “builtin” and standard library functions.
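
As a brief illustration (ours, not from the PEPs), the / below marks value, lo, and hi as positional-only:

>>> def clamp(value, lo, hi, /):
...     return max(lo, min(value, hi))
...
>>> clamp(15, 0, 10)
10
>>> clamp(value=15, lo=0, hi=10)   # raises TypeError: positional-only arguments passed as keyword arguments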

For a full specification, including syntax and examples, see PEP-570 and PEP-457.

f-strings support

An f-string is a formatted string literal that lets you embed variables, and even expressions, inside curly braces. The contents are evaluated at runtime and included in the string; an f-string is recognized by its leading f:

>>> city = "London"
>>> f"Living in {city} is amazing"
'Living in London is amazing'

Python 3.8 adds the ability to use assignment expressions inside f-strings, which are evaluated from left to right.

>>> import math
>>> radius = 3.8

>>> f"With a diameter of {( diameter := 2 * radius)}, the circumference is {math.pi * diameter:.2f}"
' With a diameter of 7.6 the circumference is 23.88'

Vectorcall

Vectorcall is a new protocol and calling convention based on the “fastcall” convention, which was previously used internally by CPython. The new features can be used by any user-defined extension class. Vectorcall generalizes the calling convention used internally for Python and builtin functions so that all calls can benefit from better performance. It is designed to remove the overhead of temporary object creation and multiple indirections.

For additional information on vectorcall, see PEP-590.

Amazon Linux 2

Python 3.8, like Node.js 10 and 12, and Java 11, is based on an Amazon Linux 2 execution environment. Amazon Linux 2 provides a secure, stable, and high-performance execution environment to develop and run cloud and enterprise applications.

Next steps

Get started building with Python 3.8 today by specifying a runtime parameter value of python3.8 when creating or updating your Lambda functions.

Enjoy, go build with Python 3.8!

New Automation Features In AWS Systems Manager

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/new-automation-features-in-aws-systems-manager/

Today we are announcing additional automation features inside of AWS Systems Manager. If you haven’t used Systems Manager yet, it’s a service that provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources.

With this new release, it just got even more powerful. We have added additional capabilities to AWS Systems Manager that enable you to build, run, and share automations with others on your team or inside your organisation — making managing your infrastructure more repeatable and less error-prone.

Inside the AWS Systems Manager console, on the navigation menu, there is an item called Automation. If I click this menu item, I will see the Execute automation button.

When I click on this, I am asked what document I want to run. AWS provides a library of documents that I could choose from; however, today I am going to build my own, so I will click on the Create document button.

This takes me to a new screen that allows me to create a document (sometimes referred to as an automation playbook) that, amongst other things, executes Python or PowerShell scripts.

The console gives me two options for editing a document: A YAML editor or the “Builder” tool that provides a guided, step-by-step user interface with the ability to include documentation for each workflow step.

So, let’s take a look by building and running a simple automation. When I create a document using the Builder tool, the first thing required is a document name.

Next, I need to provide a description. As you can see below, I’m able to use Markdown to format the description. The description is an excellent opportunity to describe what your document does; this is valuable, since most users will want to share these documents with others on their team and build a library of documents to solve everyday problems.

Optionally, I am asked to provide parameters for my document. These parameters can be used in all of the scripts that you will create later. In my example, I have created three parameters: imageId, tagValue, and instanceType. When I come to execute this document, I will have the opportunity to provide values for these parameters that will override any defaults that I set.

When someone executes my document, the scripts that are executed will interact with AWS services. A document runs with the user’s permissions for most of its actions, along with the option of providing an Assume Role. However, for documents with the Run a script action, the role is required when the script calls any AWS API.

You can set the Assume role globally in the builder tool; however, I like to add a parameter called assumeRole to my document, which gives anyone executing it the ability to provide a different one.

You then wire this parameter up to the global Assume role by using the {{assumeRole}} syntax in the Assume role property textbox. (I have called my parameter assumeRole, but you could call it whatever you like; just make sure that the name you give the parameter is what you put inside the double curly braces, e.g. {{yourParamName}}.)

Once my document is set up, I then need to create its first step. Your document can contain one or more steps, and you can create sophisticated workflows with branching, for example based on a parameter or the failure of a step. In this example, though, I am going to create three steps that execute one after another. Again, you need to give the step a name and a description, and this description can also include Markdown. You need to select an Action Type; for this example, I will choose Run a script.

With the ‘Run a script’ action type, I get to run a script in Python or PowerShell without needing any infrastructure of my own to run it. It’s important to realise that this script will not be running on one of your EC2 instances; the scripts run in a managed compute environment. On the preferences page, you can configure an Amazon CloudWatch log group of your choice to receive the output.

In this demo, I write some Python that creates an EC2 instance. You will notice that this script is using the AWS SDK for Python. I create an instance based upon an image_id, tag_value, and instance_type that are passed in as parameters to the script.
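
The script itself appears as a screenshot in the original post; a rough equivalent, written against the AWS SDK for Python and using the same handler signature as the polling step shown later, might look like this (the exact details are ours):

def launch_instance(events, context):
    import boto3

    ec2 = boto3.client('ec2')

    # These values arrive via the InputPayload wired up from the document's parameters.
    image_id = events['image_id']
    tag_value = events['tag_value']
    instance_type = events['instance_type']

    res = ec2.run_instances(
        ImageId=image_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            'ResourceType': 'instance',
            'Tags': [{'Key': 'LaunchedBySsmAutomation', 'Value': tag_value}],
        }],
    )

    instance_id = res['Instances'][0]['InstanceId']
    print('[INFO] Launched instance', instance_id)

    # Whatever is returned here becomes the step's output payload for later steps.
    return {'InstanceId': instance_id}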

To pass parameters into the script, in the Additional Inputs section, I select InputPayload as the input type. I then use a particular YAML format in the Input Value text box to wire up the global parameters to the parameters that I am going to use in the script. You will notice that, again, I have used the double curly brace syntax to reference the global parameters, e.g. {{imageId}}.

In the Outputs section, I also wire up an output parameter that can be used by subsequent steps.

Next, I will add a second step to my document. This time I will poll the instance to see if its status has switched to ok. The exciting thing about this code is that the InstanceId is passed into the script from a previous step. This is an example of how the execution steps can be chained together to use the outputs of earlier steps.

def poll_instance(events, context):
    import boto3
    import time

    ec2 = boto3.client('ec2')

    instance_id = events['InstanceId']

    print('[INFO] Waiting for instance to enter Status: Ok', instance_id)

    instance_status = "null"

    while True:
        res = ec2.describe_instance_status(InstanceIds=[instance_id])

        if len(res['InstanceStatuses']) == 0:
            print("Instance Status Info is not available yet")
            time.sleep(5)
            continue

        instance_status = res['InstanceStatuses'][0]['InstanceStatus']['Status']

        print('[INFO] Polling get status of the instance', instance_status)

        if instance_status == 'ok':
            break

        time.sleep(10)

    return {'Status': instance_status, 'InstanceId': instance_id}

To pass the parameters into the second step, notice that I use the double curly brace syntax to reference the output of a previous step. The value in the Input value textbox, {{launchEc2Instance.payload}}, is the name of the step, launchEc2Instance, followed by the name of its output parameter, payload.

Lastly, I will add a final step. This step will run a PowerShell script and use the AWS Tools for PowerShell. I’ve added this step purely to show that you can use PowerShell as an alternative to Python.

You will note on the first line that I have to install the AWSPowerShell.NetCore module and use the -Force switch before I can start interacting with AWS services.

All this step does is take the InstanceId output from the LaunchEc2Instance step and use it to return the InstanceType of the EC2 instance.

It’s important to note that I have to pass the parameters from LaunchEc2Instance step to this step by configuring the Additional inputs in the same way I did earlier.

Now that our document is created, we can execute it. I go to the Actions & Change section of the menu and select Automation. From this screen, I click on the Execute automation button and then get to choose the document I want to execute. Since this is a document I created, I can find it on the Owned by me tab.

If I click the LaunchInstance document that I created earlier, I get a document details screen that shows me the description I added. This nicely formatted description allows me to generate documentation for my document and enables others to understand what it is trying to achieve.

When I click Next, I am asked to provide any Input parameters for my document. I add the imageId and ARN for the role that I want to use when executing this automation. It’s important to remember that this role will need to have permissions to call any of the services that are requested by the scripts. In my example, that means it needs to be able to create EC2 instances.

Once the document executes, I am taken to a screen that shows the steps of the document and gives me details about how long each step took and respective success or failure of each step. I can also drill down into each step and examine the logs. As you can see, all three steps of my document completed successfully, and if I go to the Amazon Elastic Compute Cloud (EC2) console, I will now have an EC2 instance that I created with tag LaunchedBySsmAutomation.

These new features can be found today in all regions inside the AWS Systems Manager console so you can start using them straight away.

Happy Automating!

— Martin;

Code a Phoenix-style mothership battle | Wireframe #26

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-a-phoenix-style-mothership-battle-wireframe-26/

It was one of gaming’s first boss battles. Mark Vanstone shows you how to recreate the mothership from the 1980 arcade game, Phoenix.

Phoenix’s fifth stage offered a unique challenge in 1980: one of gaming’s first-ever boss battles.

First released in 1980, Phoenix was something of an arcade pioneer. The game was the kind of post-Space Invaders fixed-screen shooter that was ubiquitous at the time: players moved their ship from side to side, shooting at a variety of alien birds of different sizes and attack patterns. The enemies moved swiftly, and the player’s only defence was a temporary shield which could be activated when the birds swooped and strafed the lone defender. But besides all that, Phoenix had a few new ideas of its own: not only did it offer five distinct stages, but it also featured one of the earliest examples of a boss battle – its heavily armoured alien mothership, which required accurate shots to its shields before its weak spot could be exposed.

To recreate Phoenix’s boss, all we need is Pygame Zero. We can get a portrait style window with the WIDTH and HEIGHT variables and throw in some parallax stars (an improvement on the original’s static backdrop) with some blitting in the draw() function. The parallax effect is created by having a static background of stars with a second (repeated) layer of stars moving down the screen.

The mothership itself is made up of several Actor objects which move together down the screen towards the player’s spacecraft, which can be moved right and left using the mouse. There’s the main body of the mothership, in the centre is the alien that we want to shoot, and then we have two sets of moving shields.

Like the original Phoenix, our mothership boss battle has multiple shields that need to be taken out to expose the alien at the core.

In this example, rather than have all the graphics dimensions in multiples of eight (as we always did in the old days), we will make all our shield blocks 20 by 20 pixels, because computers simply don’t need to work in multiples of eight any more. The first set of shields is the purple rotating bar around the middle of the ship. This is made up of 14 Actor blocks which shift one place to the right each time they move. Every other block has a couple of portal windows which makes the rotation obvious, and when a block moves off the right-hand side, it is placed on the far left of the bar.
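
Something along these lines captures that rotation (block positions and image names are placeholders, not Mark’s actual code):

BLOCK = 20
BAR_LEFT, BAR_Y, BAR_BLOCKS = 100, 300, 14
bar = [Actor('shield_window' if i % 2 else 'shield_plain',
             topleft=(BAR_LEFT + i * BLOCK, BAR_Y)) for i in range(BAR_BLOCKS)]

def rotate_bar():
    for block in bar:
        block.x += BLOCK                                  # shift one place to the right
        if block.left >= BAR_LEFT + BAR_BLOCKS * BLOCK:   # fallen off the right-hand end...
            block.left = BAR_LEFT                         # ...so wrap round to the far left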

The second set of shields are in three yellow rows (you may want to add more), the first with 14 blocks, the second with ten blocks, and the last with four. These shield blocks are fixed in place but share a behaviour with the purple bar shields, in that when they are hit by a bullet, they change to a damaged version. There are four levels of damage before they are destroyed and the bullets can pass through. When enough shields have been destroyed for a bullet to reach the alien, the mothership is destroyed (in this version, the alien flashes).

Bullets can be fired by clicking the mouse button. Again, the original game had alien birds flying around the mothership and dive-bombing the player, making it harder to get a good shot in, but this is something you could try adding to the code yourself.

To really bring home that eighties Phoenix arcade experience, you could also add in some atmospheric shooting effects and, to round the whole thing off, have an 8-bit rendition of Beethoven’s Für Elise playing in the background.

Here’s Mark’s code, which gets a simple mothership battle running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 26

You can read more features like this one in Wireframe issue 26, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 26 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code a Phoenix-style mothership battle | Wireframe #26 appeared first on Raspberry Pi.

Make a Columns-style tile-matching game | Wireframe #25

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/make-a-columns-style-tile-matching-game-wireframe-25/

Raspberry Pi’s own Rik Cross shows you how to code your own Columns-style tile-matching puzzle game in Python and Pygame Zero.

Created by Hewlett-Packard engineer Jay Geertsen, Columns was Sega’s sparkly rival to Nintendo’s all-conquering Tetris.

Columns and tile-matching

Tile-matching games began with Tetris in 1984 and the less famous Chain Shot! the following year. The genre gradually evolved through games like Dr. Mario, Columns, Puyo Puyo, and Candy Crush Saga. Although their mechanics differ, the goals are the same: to organise a board of different-coloured tiles by moving them around until they match.

Here, I’ll show how you can create a simple tile-matching game using Python and Pygame. In it, any tile can be swapped with the tile to its right, with the aim being to make matches of three or more tiles of the same colour. Making a match causes the tiles to disappear from the board, with tiles dropping down to fill in the gaps.

At the start of a new game, a board of randomly generated tiles is created. This is made as an (initially empty) two-dimensional array, whose size is determined by the values of rows and columns. A specific tile on the board is referenced by its row and column number.

We want to start with a truly random board, but we also want to avoid having any matching tiles. Random tiles are therefore added to each board position, but replaced if a tile is the same as the one above or to its left (if such a tile exists).
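
A minimal sketch of that board-generation step might look like this (tiles here are simply colour indices; Rik’s real code uses its own tile representation):

import random

ROWS, COLUMNS, COLOURS = 12, 8, 5

def new_board():
    board = [[None] * COLUMNS for _ in range(ROWS)]
    for r in range(ROWS):
        for c in range(COLUMNS):
            tile = random.randrange(COLOURS)
            # Re-roll while the tile matches the one above or to its left,
            # so the starting board contains no ready-made matches.
            while ((r > 0 and tile == board[r - 1][c]) or
                   (c > 0 and tile == board[r][c - 1])):
                tile = random.randrange(COLOURS)
            board[r][c] = tile
    return board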

Our board consists of 12 rows and 8 columns of tiles. Pressing SPACE will swap the 2 selected tiles (outlined in white), and in this case, create a match of red tiles vertically.

In our game, two tiles are ‘selected’ at any one time, with the player pressing the arrow keys to change those tiles. A selected variable keeps track of the row and column of the left-most selected tile, with the other tile being one column to the right of the left-most tile. Pressing SPACE swaps the two selected tiles, checks for matches, clears any matched tiles, and fills any gaps with new tiles.

A basic ‘match-three’ algorithm would simply check whether any tiles on the board have a matching colour tile on either side, horizontally or vertically. I’ve opted for something a little more convoluted, though, as it allows us to check for matches of any length, as well as track multiple, separate matches. A currentmatch list keeps track of the (x,y) positions of a set of matching tiles. Whenever this list is empty, the next tile to check is added to the list, and this process is repeated until the next tile is a different colour.

If the currentmatch list contains three or more tiles at this point, then the list is added to the overall matches list (a list of lists of matches!) and the currentmatch list is reset. To clear matched tiles, the matched tile positions are set to None, which indicates the absence of a tile at that position. To fill the board, tiles in each column are moved down by one row whenever an empty board position is found, with a new tile being added to the top row of the board.
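
The row-by-row version of that scan looks roughly like this (a sketch of the idea rather than Rik’s exact code; the same logic is repeated column by column for vertical matches):

def find_row_matches(board):
    matches = []                          # a list of lists of matched (x, y) positions
    for r in range(ROWS):
        currentmatch = []
        for c in range(COLUMNS):
            # If this tile ends a run of identical tiles, record the run when it's long enough.
            if currentmatch and board[r][c] != board[r][currentmatch[-1][0]]:
                if len(currentmatch) >= 3:
                    matches.append(currentmatch)
                currentmatch = []
            currentmatch.append((c, r))
        if len(currentmatch) >= 3:        # a run can also end at the edge of the board
            matches.append(currentmatch)
    return matches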

The code provided here is just a starting point, and there are lots of ways to develop the game, including adding a scoring system and animation to liven up your tiles.

Here’s Rik’s code, which gets a simple tile-match game running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 25

You can read more features like this one in Wireframe issue 25, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 25 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Make a Columns-style tile-matching game | Wireframe #25 appeared first on Raspberry Pi.

Code your own Donkey Kong barrels | Wireframe issue 24

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-your-own-donkey-kong-barrels-wireframe-issue-24/

Replicate the physics of barrel rolling – straight out of the classic Donkey Kong. Mark Vanstone shows you how.

Released in 1981, Donkey Kong was one of the most important games in Nintendo’s history.

Nintendo’s Donkey Kong

Donkey Kong first appeared in arcades in 1981, and starred not only the titular angry ape, but also a bouncing, climbing character called Jumpman – who later went on to star in Nintendo’s little-known series of Super Mario games. Donkey Kong featured four screens per level, and the goal in each was to avoid obstacles and guide Mario (sorry, Jumpman) to the top of the screen to rescue the hapless Pauline. Partly because the game was so ferociously difficult from the beginning, Donkey Kong’s first screen is arguably the most recognisable today: Kong lobs an endless stream of barrels, which roll down a network of crooked girders and threaten to knock Jumpman flat.

Barrels in Pygame Zero

Donkey Kong may have been a relentlessly tough game, but we can recreate one of its most recognisable elements with relative ease. We can get a bit of code running with Pygame Zero – and a couple of functions borrowed from Pygame – to make barrels react to the platforms they’re on, roll down in the direction of a slope, and fall off the end onto the next platform. It’s a very simple physics simulation using an invisible bitmap to test where the platforms are and which way they’re sloping. We also have some ladders which the barrels randomly either roll past or sometimes use to descend to the next platform below.

Our Donkey Kong tribute up and running in Pygame Zero. The barrels roll down the platforms and sometimes the ladders.

Once we’ve created a barrel as an Actor, the code does three tests for its platform position on each update: one to the bottom-left of the barrel, one bottom-centre, and one bottom-right. It samples three pixels and calculates how much red is in those pixels. That tells us how much platform is under the barrel in each position. If the platform is tilted right, the number will be higher on the left, and the barrel must move to the right. If tilted left, the number will be higher on the right, and the barrel must move left. If there is no red under the centre point, the barrel is in the air and must fall downward.
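
A sketch of that sampling logic is below (‘platform_map.png’ is a hypothetical invisible bitmap the same size as the screen, with the platforms drawn in red; the offsets and speeds are made up):

import pygame

platform_map = pygame.image.load('images/platform_map.png')

def red_at(x, y):
    return platform_map.get_at((int(x), int(y))).r     # red component, 0-255

def update_barrel(barrel):
    below = barrel.y + 15
    left = red_at(barrel.x - 15, below)
    centre = red_at(barrel.x, below)
    right = red_at(barrel.x + 15, below)
    if centre == 0:
        barrel.y += 2          # nothing underneath: the barrel falls
    elif left > right:
        barrel.x += 1          # slope runs down to the right, so roll right
    elif right > left:
        barrel.x -= 1          # slope runs down to the left, so roll left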

There are just three frames of animation for the barrel rolling (you could add more for a smoother look): for rolling right, we increase the frame number stored with the barrel Actor; for rolling to the left, we decrease the frame number; and if the barrel’s going down a ladder, we use the front-facing images for the animation. The movement down a ladder is triggered by another test for the blue component of a pixel below the barrel. The code then chooses randomly whether to send the barrel down the ladder.

The whole routine will keep producing more barrels and moving them down the platforms until they reach the bottom. Again, this is a very simple physics system, but it demonstrates how those rolling barrels can be recreated in just a few lines of code. All we need now is a jumping player character (which could use the same invisible map to navigate up the screen) and a big ape to sit at the top throwing barrels, then you’ll have the makings of your own fully featured Donkey Kong tribute.

Here’s Mark’s code, which sets some Donkey Kong Barrels rolling about in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 24

You can read more features like this one in Wireframe issue 24, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 24 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code your own Donkey Kong barrels | Wireframe issue 24 appeared first on Raspberry Pi.

Your new free online training courses for the autumn term

Post Syndicated from Dan Fisher original https://www.raspberrypi.org/blog/free-online-training-courses-autumn-19/

Over the autumn term, we’ll be launching three brand-new, free online courses on the FutureLearn platform. Wherever you are in the world, you can learn with us for free!

Three people facing forward

The course presenters are Pi Towers residents Mark, Janina, and Eirini

Design and Prototype Embedded Computer Systems

The first new course is Design and Prototype Embedded Computer Systems, which will start on 28 October. In this course, you will discover the product design life cycle as you design your own embedded system!

A diagram illustrating the iterative design life cycle with four stages: Analyse, design, build, test

You’ll investigate how the purpose of the system affects the design of the system, from choosing its components to the final product, and you’ll find out more about the design of an algorithm. You will also explore how embedded systems are used in the world around us. Book your place today!

Programming 103: Saving and Structuring Data

What else would you expect us to call the sequel to Programming 101 and Programming 102? That’s right — we’ve made Programming 103: Saving and Structuring Data! The course will begin on 4 November, and you can reserve your place now.

Illustration of a robot reading a book called 'human 2 binary phrase book'

Programming 103 explores how to use data across multiple runs of your program. You’ll learn how to save text and binary files, and how structuring data is necessary for programs to “understand” the data that they load. You’ll look at common types of structured files such as CSV and JSON files, as well as how you can connect to a SQL database to use it in your Python programs.

Introduction to Encryption and Cryptography

The third course, Introduction to Encryption and Cryptography, is currently in development, and therefore coming soon. In this course, you’ll learn what encryption is and how it was used in the past, and you’ll use the Caesar and Vigenère ciphers.

The Caesar cipher is a type of substitution cipher

You’ll also look at modern encryption and investigate both symmetric and asymmetric encryption schemes. The course also shows you the future of encryption, and it includes several practical encryption activities, which can be used in the classroom too.

National Centre for Computing Education

If you’re a secondary school teacher in England, note that all of the above courses count towards your Computer Science Accelerator Programme certificate.

Group shot of the first NCCE GCSE accelerator graduates

The very first group of teachers who earned Computer Science Accelerator Programme certificates: they got to celebrate their graduation at Google HQ in London.

What’s been your favourite online course this year? Tell us about it in the comments.

The post Your new free online training courses for the autumn term appeared first on Raspberry Pi.

Estefannie’s Jurassic Park goggles

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/estefannies-jurassic-park-goggles/

When we invited Estefannie Explains It All to present at Coolest Projects International, she decided to make something cool with a Raspberry Pi to bring along. But being Estefannie, she didn’t just make something a little bit cool. She went ahead and made Raspberry Pi Zero-powered Jurassic Park goggles, or, as she calls them, the world’s first globally triggered, mass broadcasting, photon-emitting and -collecting head unit.

Make your own Jurassic Park goggles using a Raspberry Pi // MAKE SOMETHING

Is it heavy? Yes. But these goggles are not expensive. Follow along as I make the classic Jurassic Park Goggles from scratch!! The 3D Models: https://www.thingiverse.com/thing:3732889 My code: https://github.com/estefanniegg/estefannieExplainsItAll/blob/master/makes/JurassicGoggles/jurassic_park.py Thank you Coolest Projects for bringing me over to speak in Ireland!! https://coolestprojects.org/ Thank you Polymaker for sending me the Polysher and the PolySmooth filament!!!!

3D-printing, sanding, and sanding

Estefannie’s starting point was the set of excellent 3D models of the iconic goggles that Jurassicpaul has kindly made available on Thingiverse. There followed several 3D printing attempts and lots of sanding, sanding, sanding, spray painting, and sanding, then some more printing with special Polymaker filament that can be ethanol polished.

Adding the electronics and assembling the goggles

Estefannie soldered rings of addressable LEDs and created custom models for 3D-printable pieces to fit both them and the goggles. She added a Raspberry Pi Zero, some more LEDs and buttons, an adjustable headgear part from a welding mask, and – importantly – four circles of green acetate. After quite a lot of gluing, soldering, and wiring, she ended up with an entirely magnificent set of goggles.

Here, they’re modelled magnificently by Raspberry Pi videographer Brian. I think you’ll agree he cuts quite a dash.

Coding and LED user interface

Estefannie wrote a Python script to interact with Twitter, take photos, and provide information about the goggles’ current status via the LED rings. When Estefannie powers up the Raspberry Pi, it runs a script on startup and connects to her phone’s wireless hotspot. A red LED on the front of the goggles indicates that the script is up and running.

Once it’s running, pressing a button at the back of the head unit makes the Raspberry Pi search Twitter for mentions of @JurassicPi. The LEDs light up green while it searches, just like you remember from the film. If Estefannie’s script finds a mention, the LEDs flash white and the Raspberry Pi camera module takes a photo. Then they light up blue while the script tweets the photo.




All the code is available on Estefannie’s GitHub. I love this project – I love the super clear, simple user experience provided by the LED rings, and there’s something really appealing about the asynchronous Twitter interaction, where you mention @JurassicPi and then get an image later, the next time the goggles are turned on.

Extra bonus Coolest Projects

If you read the beginning of this post and thought, “wait, what’s Coolest Projects?” then be sure to watch to the end of Estefannie’s video to catch her excellent Coolest Projects mini vlog. And then sign up for updates about Coolest Projects events near you, so you can join in next year, or help a team of young people to join in.

The post Estefannie’s Jurassic Park goggles appeared first on Raspberry Pi.

Pulling Raspberry Pi translation data from GitHub

Post Syndicated from Nina Szymor original https://www.raspberrypi.org/blog/pulling-translation-data-from-github/

What happens when you give two linguists jobs at Raspberry Pi? They start thinking they can do digital making, even though they have zero coding skills! Because if you don’t feel inspired to step out of your comfort zone here — surrounded by all the creativity, making, and technology — then there is no hope you’ll be motivated to do it anywhere else.

two smiling women standing in front of a colourful wall

Maja and Nina, our translation team, and coding beginners

Maja and I support the community of Raspberry Pi translation volunteers, and we wanted to build something to celebrate them and the amazing work they do! Our educational content is already available in 26 languages, with more than 400 translations on our projects website. But our volunteer community is always translating more content, and so off we went, on an ambitious (by our standards!) mission to create a Raspberry Pi–powered translation notification system. This is a Raspberry Pi that pulls GitHub data to display a message on a Sense HAT and play a tune whenever we add fresh translated content to the Raspberry Pi projects website!

Breaking it down

There were three parts to the project: two of them were pretty easy (displaying a message on a Sense HAT and playing a tune), and one more challenging (pulling information about new translated content added to our repositories on GitHub). We worked on each part separately and then put all of the code together.

Two computers and two pastries

Mandatory for coding: baked goods and tea

Displaying a message on Sense HAT and playing a sound

We used the Raspberry Pi projects Getting started with the Sense HAT and GPIO music box to help us with this part of our build.

At first we wanted the Sense HAT to display fireworks, but we soon realised how bad we both are at designing animations, so we moved on to displaying a less creative but still satisfying smiley face, followed by a message saying “Hooray! Another translation!” and another smiley face.

LED screen displaying the message 'Another translation!'

We used the sense_hat and time modules, and wrote a function that can be easily used in the main body of the program. You can look at the comments in the code below to see what each line does:

Python code snippet for displaying a message on a Sense HAT
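
Since that snippet is published as an image, here's a minimal sketch of the same idea using the sense_hat and time modules mentioned above; the pixel art, message wording, and function name are illustrative rather than the authors' exact code:

from sense_hat import SenseHat
from time import sleep

sense = SenseHat()

def smiley():
    Y = (255, 255, 0)   # yellow pixel
    N = (0, 0, 0)       # unlit pixel
    face = [
        N, N, Y, Y, Y, Y, N, N,
        N, Y, N, N, N, N, Y, N,
        Y, N, Y, N, N, Y, N, Y,
        Y, N, N, N, N, N, N, Y,
        Y, N, Y, N, N, Y, N, Y,
        Y, N, N, Y, Y, N, N, Y,
        N, Y, N, N, N, N, Y, N,
        N, N, Y, Y, Y, Y, N, N,
    ]
    sense.set_pixels(face)

def sparkles():
    # smiley face, scrolling message, then another smiley face
    smiley()
    sleep(1)
    sense.show_message("Hooray! Another translation!")
    smiley()
    sleep(1)
    sense.clear()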

So that we could add the fun tune, we learned how to use the Pygame library to play sounds. Using Pygame, it's really simple to create a function that plays a sound: once you have the .wav file in your chosen location, you simply import and initialise the pygame module, create a Sound object, and provide it with the path to your .wav file. You can then play your sound:

Python code snippet for playing a sound

We’ve programmed our translation notification system to play the meow sound three times, using the sleep function to create a one-second break between each sound. Because why would you want one meow if you can have three?
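
A minimal sketch of that meow function, assuming Pygame's mixer and a meow.wav file saved alongside the script (the file name is a placeholder):

import pygame
from time import sleep

pygame.mixer.init()

def meow():
    # create a Sound object from the .wav file and play it three times,
    # with a one-second break between each meow
    sound = pygame.mixer.Sound("meow.wav")
    for _ in range(3):
        sound.play()
        sleep(1)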

Pulling repository information from GitHub

This was the more challenging part for Maja and me, so we asked for help from experienced programmers, including our colleague Ben Nuttall. We explained what we wanted to do: pull information from our GitHub repositories where all the projects available on the Raspberry Pi projects website are kept, and, every time a new language directory is found, execute the sparkles and meow functions to let us and EVERYONE in the office know that we have new translations! Ben did a bit of research and quickly found the PyGithub library, which enables you to manage your GitHub resources using Python scripts.

Python code snippet for pulling data from GitHub

Check out the comments to see what the code does

The script runs in an infinite loop, checking all repositories in the ‘raspberrypilearning’ organisation for new translations (directories with names in the form xx-XX, e.g. fr-CA) every 60 minutes. Any new translation is then printed and preserved in memory. We had some initial issues with the PyGithub library: calling .get_commits() on an empty repository throws an exception, but the library doesn't provide any functions to check whether a repo is empty or not. Fortunately, wrapping this logic in a try...except statement solved the problem.
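
A hedged sketch of that polling loop, assuming the PyGithub library; the access token is a placeholder, the directory-listing call stands in for the original logic, and the real sparkles and meow functions would replace the print call:

import re
import time
from github import Github
from github.GithubException import GithubException

g = Github("YOUR_ACCESS_TOKEN")                  # placeholder token
org = g.get_organization("raspberrypilearning")
LANG_DIR = re.compile(r"^[a-z]{2}-[A-Z]{2}$")    # matches names like fr-CA

known_translations = set()

while True:
    for repo in org.get_repos():
        try:
            contents = repo.get_contents("")     # list the repo's top level
        except GithubException:
            continue                             # empty repositories raise an exception
        for item in contents:
            if item.type == "dir" and LANG_DIR.match(item.name):
                key = (repo.name, item.name)
                if key not in known_translations:
                    known_translations.add(key)
                    print("New translation:", repo.name, item.name)
                    # sparkles() and meow() would be called here
    time.sleep(60 * 60)                          # check again in 60 minutes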

And there we have it: success!

Demo of our Translation Notification System build

Our ideas for further development

We’re pretty proud that the whole Raspberry Pi office now hears a meowing cat whenever new translated content is added to our projects website, but we’ve got plans for further development of our translation notification system. Our existing translated educational resources have already been viewed by over 1 million users around the world, and we want anyone interested in the translations our volunteers make possible to be able to track new translated projects as they go live!

One way to do that is to modify the code to tweet or send an email with the name of the newly added translation together with a link to the project and information on the language in which it was added. Alternatively, we could adapt the system to only execute the sparkles and meow functions when a translation in a particular language is added. Then our more than 1000 volunteers, or any learner using our translations, could set up their own Raspberry Pi and Sense HAT to receive notifications of content in the language that interests them, rather than in all languages.

We need your help

Both ideas pose a pretty big challenge for the inexperienced new coders of the Raspberry Pi translation team, so we’d really appreciate any tips you have for helping us get started or for improving our existing system! Please share your thoughts in the comments below.

The post Pulling Raspberry Pi translation data from GitHub appeared first on Raspberry Pi.

Make a keyboard-bashing sprint game | Wireframe issue 23

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/make-a-keyboard-bashing-sprint-game-wireframe-issue-23/

Learn how to code a sprinting minigame straight out of Daley Thompson’s Decathlon with Raspberry Pi’s own Rik Cross.

Spurred on by the success of Konami’s Hyper Sports, Daley Thompson’s Decathlon featured a wealth of controller-wrecking minigames.

Daley Thompson’s Decathlon

Released in 1984, Daley Thompson’s Decathlon was a memorable entry in what’s sometimes called the ‘joystick killer’ genre: players competed in sporting events that largely consisted of frantically waggling the controller or battering the keyboard. I’ll show you how to create a sprinting game mechanic in Python and Pygame.

Python sprinting game

There are variables in the Sprinter() class to keep track of the runner’s speed and distance, as well as global constant ACCELERATION and DECELERATION values to determine the player’s changing rate of speed. These numbers are small, as they represent the number of metres per frame that the player accelerates and decelerates.

The player increases the sprinter’s speed by alternately pressing the left and right arrow keys. This input is handled by the sprinter’s isNextKeyPressed() method, which returns True if the correct key (and only the correct key) is being pressed. A lastKeyPressed variable is used to ensure that keys are pressed alternately. The player also decelerates if no key is being pressed, and this rate of deceleration should be sufficiently smaller than the acceleration to allow the player to pick up enough speed.
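
A minimal sketch of that alternating-key mechanic, written for Pygame Zero; the class, method, and constant names follow the article, but the values and the bare-bones drawing are illustrative:

import pgzrun   # keyboard and screen are Pygame Zero built-ins, injected once pgzrun is imported

WIDTH, HEIGHT = 800, 200
ACCELERATION = 0.02    # speed gained (metres per frame) per correct key press
DECELERATION = 0.005   # speed lost per frame when no key is pressed

class Sprinter:
    def __init__(self):
        self.speed = 0
        self.distance = 0
        self.lastKeyPressed = None

    def isNextKeyPressed(self):
        # True only when the *other* arrow key (and only that key) is pressed
        if keyboard.left and not keyboard.right and self.lastKeyPressed != "left":
            self.lastKeyPressed = "left"
            return True
        if keyboard.right and not keyboard.left and self.lastKeyPressed != "right":
            self.lastKeyPressed = "right"
            return True
        return False

sprinter = Sprinter()

def update():
    if sprinter.isNextKeyPressed():
        sprinter.speed += ACCELERATION
    else:
        sprinter.speed = max(0, sprinter.speed - DECELERATION)
    sprinter.distance += sprinter.speed

def draw():
    screen.clear()
    screen.draw.text("{:.1f} m".format(sprinter.distance), (20, 20))

pgzrun.go()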

Press the left and right arrow keys alternately to increase the sprinter’s speed. Objects move across the screen from right to left to give the illusion of sprinter movement.

For the animation, I used a free sprite called ‘The Boy’ from gameart2d.com, and made use of a single idle image and 15 run cycle images. The sprinter starts in the idle state, but switches to the run cycle whenever its speed is greater than 0. This is achieved by using index() to find the name of the current sprinter image in the runFrames list, and setting the current image to the next image in the list (and wrapping back to the first image once the end of the list is reached). We also need the sprinter to move through images in the run cycle at a speed proportional to the sprinter’s speed. This is achieved by keeping track of the number of frames the current image has been displayed for (in a variable called timeOnCurrentFrame).
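
A hedged sketch of that frame-advance logic, assuming the sprinter is a Pygame Zero Actor with speed and timeOnCurrentFrame attributes added to it; the image names and timing values are illustrative:

runFrames = ["run" + str(i) for i in range(15)]   # 'run0' ... 'run14' image names

def animate(sprinter):
    if sprinter.speed == 0:
        sprinter.image = "idle"
        return
    # faster sprinting means fewer game frames per run-cycle image
    framesPerImage = max(1, int(0.05 / max(sprinter.speed, 0.001)))
    sprinter.timeOnCurrentFrame += 1
    if sprinter.timeOnCurrentFrame >= framesPerImage:
        sprinter.timeOnCurrentFrame = 0
        if sprinter.image in runFrames:
            i = runFrames.index(sprinter.image)
        else:
            i = -1                                # coming from the idle image
        sprinter.image = runFrames[(i + 1) % len(runFrames)]   # wrap back to the start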

To give the illusion of movement, I’ve added objects that move past the player: there’s a finish line and three markers to regularly show the distance travelled. These objects are calculated using the sprinter’s x position on the screen along with the distance travelled. However, this means that each object is at most only 100 pixels away from the player and therefore seems to move slowly. This can be fixed by using a SCALE factor, which is the relationship between metres travelled by the sprinter and pixels on the screen. This means that objects are initially drawn way off to the right of the screen but then travel to the left and move past the sprinter more quickly.

Finally, startTime and finishTime variables are used to calculate the race time. Both values are initially set to the current time at the start of the race, with finishTime being updated as long as the distance travelled is less than 100. Using the time module, the race time can simply be calculated by finishTime - startTime.
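
A small sketch of that timing logic, using the time module; startTime and finishTime follow the article's naming, and 100 is the race distance in metres:

import time

startTime = time.time()
finishTime = startTime

def update_timer(distance):
    # keep moving finishTime forward until 100 metres have been covered
    global finishTime
    if distance < 100:
        finishTime = time.time()

# at any point, the race time in seconds is simply finishTime - startTime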

Here’s Rik’s code, which gets a sprinting game running in Python (no pun intended). To get it working on your system, you’ll first need to install Pygame Zero. Download the code here.

Get your copy of Wireframe issue 23

You can read more features like this one in Wireframe issue 23, available now at Tesco, WHSmith, all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can download issue 23 for free in PDF format.

Autonauts is coming to colonise your computers with cuteness. We find out more in Wireframe issue 23.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Make a keyboard-bashing sprint game | Wireframe issue 23 appeared first on Raspberry Pi.

Create a Scramble-style scrolling landscape | Wireframe issue 22

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/create-a-scramble-style-scrolling-landscape-wireframe-issue-22/

Weave through a randomly generated landscape in Mark Vanstone’s homage to the classic arcade game Scramble.

Scramble was developed by Konami and released in arcades in 1981. Players avoid terrain and blast enemy craft.

Konami’s Scramble

In the early eighties, arcades and sports halls rang with the sound of a multitude of video games. Because home computers hadn’t yet made it into most households, the only option for the avid video gamer was to go down to their local entertainment establishment and feed the machines with ten pence pieces (which were bigger then). One of these pocket money–hungry machines was Konami’s Scramble — released in 1981, it was one of the earliest side-scrolling shooters with multiple levels.

The Scramble player’s jet aircraft flies across a randomly generated landscape (which sometimes narrows to a cave system), avoiding obstacles and enemy planes, bombing targets on the ground, and trying not to crash. As the game continues, the difficulty increases. The player aircraft can only fly forward, so once a target has been passed, there’s no turning back for a second go.

Code your own scrolling landscape

In this example code, I’ll show you a way to generate a Scramble-style scrolling landscape using Pygame Zero and a couple of additional Pygame functions. On early computers, moving a lot of data around the screen was very slow — until dedicated video hardware like the blitter chip arrived. Scrolling, however, could be achieved either by a quick shuffle of bytes to the left or right in the video memory, or in some cases, by changing the start address of the video memory, which was even quicker.

Avoid the roof and the floor with the arrow keys. Jet graphic courtesy of TheSource4Life at opengameart.org.

For our scrolling, we can use a Pygame surface the same size as the screen. To get the scrolling effect, we just call the scroll() function on the surface to shift everything left by one pixel and then draw a new pixel-wide slice of the terrain. The terrain could just be a single colour, but I’ve included a bit of maths-based RGB tinkering to make it more colourful. We can draw our terrain surface over a background image, as the SRCALPHA flag is set when we create the surface. This is also useful for detecting if the jet has hit the terrain. We can test the pixel from the surface in front of the jet: if it’s not transparent, kaboom!

The jet itself is a Pygame Zero Actor and can be moved up and down with the arrow keys. The left and right arrows increase and decrease the speed. We generate the landscape in the updateLand() and drawLand() functions, where updateLand() first decides whether the landscape is inclining or declining (and the same with the roof), making sure that the roof and floor don’t get too close, and then it scrolls everything left.

Each scroll action moves everything on the terrain surface to the left by one pixel.

The drawLand() function then draws pixels at the right-hand edge of the surface from y coordinates 0 to 600, drawing a thin sliver of roof, open space, and floor. The speed of the jet determines how many times the landscape is updated in each draw cycle, so at faster speeds, many lines of pixels are added to the right-hand side before the display updates.
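
A hedged sketch of that scrolling technique, using a Pygame surface inside a Pygame Zero program; the function names follow the article, but the colours, sizes, and terrain maths are illustrative:

import random
import pygame
import pgzrun   # screen is a Pygame Zero built-in, injected once pgzrun is imported

WIDTH, HEIGHT = 800, 600
land = pygame.Surface((WIDTH, HEIGHT), pygame.SRCALPHA)   # transparent terrain layer
floor_y, roof_y = 500, 100

def updateLand():
    global floor_y, roof_y
    land.scroll(-1, 0)                                     # shift everything one pixel left
    land.fill((0, 0, 0, 0), (WIDTH - 1, 0, 1, HEIGHT))     # clear the exposed right-hand column
    # nudge the floor and roof up or down, keeping a gap between them
    floor_y = max(roof_y + 150, min(HEIGHT - 10, floor_y + random.randint(-2, 2)))
    roof_y = max(10, min(floor_y - 150, roof_y + random.randint(-2, 2)))

def drawLand():
    # draw a one-pixel-wide sliver of roof and floor at the right-hand edge
    pygame.draw.line(land, (0, 180, 0), (WIDTH - 1, 0), (WIDTH - 1, roof_y))
    pygame.draw.line(land, (0, 180, 0), (WIDTH - 1, floor_y), (WIDTH - 1, HEIGHT))

def hit_terrain(x, y):
    # a non-transparent pixel in front of the jet means a collision
    return land.get_at((int(x), int(y))).a != 0

def update():
    updateLand()
    drawLand()

def draw():
    screen.clear()
    screen.blit(land, (0, 0))

pgzrun.go()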

The use of randint() can be changed to create a more or less jagged landscape, and the gap between roof and floor could also be adjusted for more difficulty. The original game also had enemy aircraft, which you could make with Actors, and fuel tanks on the ground, which could be created on the right-hand side as the terrain comes into view and then moved as the surface scrolls. Scramble sparked a wave of horizontal shooters, from both Konami and rival companies; this short piece of code could give you the basis for making a decent Scramble clone of your own:

Here’s Mark’s code, which gets a Scramble-style scrolling landscape running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 22

You can read more features like this one in Wireframe issue 22, available now at Tesco, WHSmith, and all good independent UK newsagents, and the Raspberry Pi Store, Cambridge.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 22 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Create a Scramble-style scrolling landscape | Wireframe issue 22 appeared first on Raspberry Pi.

Recreate Super Sprint’s top-down racing | Wireframe issue 21

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/recreate-super-sprints-top-down-racing-wireframe-issue-21/

Making player and computer-controlled cars race round a track isn’t as hard as it sounds. Mark Vanstone explains all.

The original Super Sprint arcade machine had three steering wheels and three accelerator pedals.

From Gran Trak 10 to Super Sprint

Decades before the advent of more realistic racing games such as Sega Rally or Gran Turismo, Atari produced a string of popular arcade racers, beginning with Gran Trak 10 in 1974 and gradually updated via the Sprint series, which appeared regularly through the seventies and eighties. By 1986, Atari’s Super Sprint allowed three players to compete at once, avoiding obstacles and collecting bonuses as they careened around the tracks.

The original arcade machine was controlled with steering wheels and accelerator pedals, and computer-controlled cars added to the racing challenge. Tracks were of varying complexity, with some featuring flyover sections and shortcuts, while oil slicks and tornadoes posed obstacles to avoid. If a competitor crashed really badly, a new car would be airlifted in by helicopter.

Code your own Super Sprint

So how can we make our own Super Sprint-style racing game with Pygame Zero? To keep this example code short, I’ve created a simple track with a few bends. In the original game, the movement of the computer-controlled cars would have followed a set of coordinates round the track, but as computers have much more memory now, I have used a bitmap guide for the cars to follow. This method produces a much less predictable movement for the cars as they turn right and left based on the shade of the track on the guide.

Four Formula One cars race around the track. Collisions between other cars and the sides of the track are detected.

With Pygame Zero, we can write quite a short piece of code to deal with both the player car and the automated ones, but to read pixels from a position on a bitmap, we need to borrow a couple of objects directly from Pygame: we import the Pygame image and Color objects and then load our guide bitmaps. One is for the player to restrict movement to the track, and the other is for guiding the computer-controlled cars around the track.

Three bitmaps are used for the track. One’s visible, and the other two are guides for the cars.

The cars are Pygame Zero Actors, and are drawn after the main track image in the draw() function. Then all the good stuff happens in the update() function. The player’s car is controlled with the up and down arrows for speed, and the left and right arrows to change the direction of movement. We then check to see if any cars have collided with each other. If a crash has happened, we change the direction of the car and make it reverse a bit. We then test the colour of the pixel where the car is trying to move to. If the colour is black or red (the boundaries), the car turns away from the boundary.

The car steering is based on the shade of a pixel’s colour read from the guide bitmap. If it’s light, the car will turn right, if it’s dark, the car will turn left, and if it’s mid-grey, the car continues straight ahead. We could make the cars stick more closely to the centre by making them react quickly, or make them more random by adjusting the steering angle more slowly. A happy medium would be to get the cars mostly sticking to the track but being random enough to make them tricky to overtake.
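
A hedged sketch of that pixel-sampling steering, mixing Pygame's image loading with a Pygame Zero program; the image names, shade thresholds, and turn angles are illustrative:

import math
import pygame
import pgzrun   # Actor and screen are Pygame Zero built-ins, injected once pgzrun is imported

WIDTH, HEIGHT = 800, 600
guide = pygame.image.load("guide.png")     # greyscale guide bitmap (placeholder file name)

car = Actor("car", (400, 300))             # 'car' is a placeholder image name
car.angle = 0
car.speed = 2

def guide_shade_ahead(actor, lookahead=20):
    # sample the guide pixel a short distance in front of the car
    x = int(actor.x + math.cos(math.radians(actor.angle)) * lookahead)
    y = int(actor.y - math.sin(math.radians(actor.angle)) * lookahead)
    x = max(0, min(WIDTH - 1, x))
    y = max(0, min(HEIGHT - 1, y))
    return guide.get_at((x, y)).r          # 0 (dark) to 255 (light)

def update():
    shade = guide_shade_ahead(car)
    if shade > 160:
        car.angle -= 2                     # light shade: steer right
    elif shade < 96:
        car.angle += 2                     # dark shade: steer left
    # mid-grey: carry straight on
    car.x += math.cos(math.radians(car.angle)) * car.speed
    car.y -= math.sin(math.radians(car.angle)) * car.speed

def draw():
    screen.clear()
    car.draw()

pgzrun.go()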

Our code will need a lot of extra elements to mimic Atari’s original game, but this short snippet shows how easily you can get a top-down racing game working in Pygame Zero:

Here’s Mark’s code, which gets a Super Sprint-style racer running in Python. To get it working on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 21

You can read more features like this one in Wireframe issue 21, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 21 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate Super Sprint’s top-down racing | Wireframe issue 21 appeared first on Raspberry Pi.

Code your own 2D shooting gallery in Python | Wireframe issue 20

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-your-own-2d-shooting-gallery-in-python-wireframe-issue-20/

Raspberry Pi’s own Rik Cross shows you how to hit enemies with your mouse pointer as they move around the screen.

Duck Hunt made effective use of the NES Zapper, and made a star of its sniggering dog, who’d pop up to heckle you between stages.

Clicky Clicky Bang Bang

Shooting galleries have always been a part of gaming, from the Seeburg Ray-O-Lite in the 1930s to the light gun video games of the past 40 years. Nintendo’s Duck Hunt — played with the NES Zapper — was a popular console shooting game in the early eighties, while titles such as Time Crisis and The House of the Dead kept the genre alive in the 1990s and 2000s.

Here, I’ll show you how to use a mouse to fire bullets at moving targets. Code written to instead make use of a light gun and a CRT TV (as with Duck Hunt) would look very different. In these games, pressing the light gun’s trigger would cause the entire screen to go black and an enemy sprite to become bright white. A light sensor at the end of the gun would then check whether the gun is pointed at the white sprite, and if so, would register a hit. If more than one enemy was on the screen when the trigger was pressed, each enemy would flash white for one frame in turn, so that the gun would know which enemy had been hit.

Our simple shooting gallery in Python. You could try adding randomly spawning ducks, a scoreboard, and more.

Pygame Zero

I’ve used two Pygame Zero event hooks for dealing with mouse input. Firstly, the on_mouse_move() function updates the position of the crosshair sprite whenever the mouse is moved. The on_mouse_down() function reacts to mouse button presses, with the left button being pressed to fire a bullet (if numberofbullets > 0) and the right button to reload (setting numberofbullets to MAXBULLETS).

Each time a bullet is fired, a check is made to see whether any enemy sprites are colliding with the crosshair — a collision means that an enemy has been hit. Luckily, Pygame Zero has a colliderect() function to tell us whether the rectangular boundary around two sprites intersects.
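
A hedged sketch of those two mouse hooks and the colliderect() check, written for Pygame Zero; names like crosshair, numberofbullets, and MAXBULLETS follow the article, but the image names and enemy handling are illustrative:

import pgzrun   # Actor, mouse, and screen are Pygame Zero built-ins, injected once pgzrun is imported

WIDTH, HEIGHT = 800, 600
MAXBULLETS = 6
numberofbullets = MAXBULLETS

crosshair = Actor("crosshair", (WIDTH // 2, HEIGHT // 2))   # placeholder image names
enemies = [Actor("enemy", (200, 300)), Actor("enemy", (600, 300))]

def on_mouse_move(pos):
    crosshair.pos = pos                    # the crosshair follows the mouse

def on_mouse_down(pos, button):
    global numberofbullets
    if button == mouse.LEFT and numberofbullets > 0:
        numberofbullets -= 1
        for enemy in enemies:
            if crosshair.colliderect(enemy):
                enemy.hit = True           # mark the hit; the full game shows it briefly
    elif button == mouse.RIGHT:
        numberofbullets = MAXBULLETS       # reload

def draw():
    screen.clear()
    for enemy in enemies:
        enemy.draw()
    crosshair.draw()

pgzrun.go()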

If this helper function wasn’t available, we’d instead need to use sprites’ x and y coordinates, along with width and height data (w and h below) to check whether the two sprites intersect both horizontally and vertically. This is achieved by coding the following algorithm:

  • Is the left-hand edge of sprite 1 further left than the right-hand edge of sprite 2 (x1 < x2+w2)?
  • Is the right-hand edge of sprite 1 further right than the left-hand edge of sprite 2 (x1+w1 > x2)?
  • Is the top edge of sprite 1 higher up than the bottom edge of sprite 2 (y1 < y2+h2)?
  • Is the bottom edge of sprite 1 lower down than the top edge of sprite 2 (y1+h1 > y2)?

If the answer to the four questions above is True, then the two sprites intersect (see Figure 1). To give visual feedback, hit enemies briefly remain on the screen (in this case, 50 frames). This is achieved by setting a hit variable to True, and then decrementing a timer once this variable has been set. The enemy’s deleted when the timer reaches 0.
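
Written out as code, those four edge checks might look like this (a minimal sketch; the parameter names mirror the x, y, w, and h used above):

def sprites_intersect(x1, y1, w1, h1, x2, y2, w2, h2):
    return (x1 < x2 + w2 and      # sprite 1's left edge is left of sprite 2's right edge
            x1 + w1 > x2 and      # sprite 1's right edge is right of sprite 2's left edge
            y1 < y2 + h2 and      # sprite 1's top edge is above sprite 2's bottom edge
            y1 + h1 > y2)         # sprite 1's bottom edge is below sprite 2's top edge

# two 50x50 sprites overlapping by 10 pixels horizontally
print(sprites_intersect(0, 0, 50, 50, 40, 0, 50, 50))   # True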

Figure 1: A visual representation of a collision algorithm, which checks whether two sprites intersect.

As well as showing an enemy for a short time after being hit, successful shots are also shown. A problem that needs to be overcome is how to modify an enemy sprite to show bullet holes. A hits list for each enemy stores bullet sprites, which are then drawn over enemy sprites.

Storing hits against an enemy allows us to easily stop drawing these hits once the enemy is removed. In the example code, an enemy stops moving once it has been hit.

If you don’t want this behaviour, then you’ll also need to update the position of the bullets in an enemy’s hits list to match the enemy movement pattern.

When decrementing the number of bullets, the max() function is used to ensure that the bullet count never falls below 0. The max() function returns the highest of the numbers passed to it, and as the maximum of 0 and any negative number is 0, the number of bullets always stays within range.
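
For example (numberofbullets follows the article's naming):

numberofbullets = 1
numberofbullets = max(0, numberofbullets - 1)   # 0
numberofbullets = max(0, numberofbullets - 1)   # still 0, never negative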

There are a couple of ways in which the example code could be improved. Currently, a hit is registered when the crosshair intersects with an enemy — even if they are barely touching. This means that often part of the bullet is drawn outside of the enemy sprite boundary. This could be solved by creating a clipping mask around an enemy before drawing a bullet. More visual feedback could also be given by drawing missed shots, stored in a separate list.

Here’s Rik’s code, which lets you hit enemies with your mouse pointer. To get it running on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 20

You can read more features like this one in Wireframe issue 20, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 20 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code your own 2D shooting gallery in Python | Wireframe issue 20 appeared first on Raspberry Pi.

Create your own arcade-style continue screen | Wireframe #19

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/create-your-own-arcade-style-continue-screen-wireframe-19/

Raspberry Pi’s Rik Cross shows you how to create game states, and rules for moving between them.

Ninja Gaiden’s dramatic continue screen. Who would be cruel enough to walk away?

The continue screen, while much less common now, was a staple feature of arcade games, providing an opportunity (for a small fee) to reanimate the game’s hero and to pick up where they left off.

Continue Screens

Games such as Tecmo’s Ninja Gaiden coin-op (known in some regions as Shadow Warriors) added jeopardy to their continue screen, in an effort to convince us to part with our money.

Often, a continue screen is one of many screens that a player may find themselves on; other possibilities being a title screen or an instruction screen. I’ll show you how you can add multiple screens to a game in a structured way, avoiding a tangle of if…else statements and variables.

A simple way of addressing this problem is to create separate update and draw functions for each of these screens, and then switch between these functions as required. Functions are ‘first-class citizens’ of the Python language, which means that they can be stored and manipulated just like any other object, such as numbers, text, and class instances. They can be stored in variables and other data types such as lists and dictionaries, and passed as parameters to (or returned from) other functions.

the continue screen of SNK’s Fantasy

SNK’s Fantasy, released in 1981, was the first arcade game to feature a continue screen.

We can take advantage of the first-class nature of Python functions by storing the functions for the current screen in variables, and then calling them in the main update() and draw() functions. In the following example, notice the difference between storing a function in a variable (by using the function name without parentheses) and calling the function (by including parentheses).

[Ed. comment: We have to use an image here because WordPress doesn’t seem to allow code indentation. We know that’s annoying because you can’t copy and paste the code, so if you know a better solution, please leave us a comment.]

The example code above calls currentupdatefunction() and currentdrawfunction(), which each store a reference to separate update() and draw() functions for the continue screen. These continue screen functions could then also include logic for changing which function is called, by updating the function reference stored in currentupdatefunction and currentdrawfunction.
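
A minimal sketch of that function-reference approach, written for Pygame Zero; the screen names and transition logic here are illustrative:

import pgzrun   # keyboard and screen are Pygame Zero built-ins, injected once pgzrun is imported

WIDTH, HEIGHT = 400, 300

def update_continue_screen():
    global currentupdatefunction, currentdrawfunction
    if keyboard.space:
        # switch both references over to the game screen's functions
        currentupdatefunction = update_game_screen
        currentdrawfunction = draw_game_screen

def draw_continue_screen():
    screen.clear()
    screen.draw.text("CONTINUE? Press SPACE", center=(WIDTH // 2, HEIGHT // 2))

def update_game_screen():
    pass

def draw_game_screen():
    screen.clear()
    screen.draw.text("GAME ON", center=(WIDTH // 2, HEIGHT // 2))

# store references (no parentheses) ...
currentupdatefunction = update_continue_screen
currentdrawfunction = draw_continue_screen

# ... and call them (with parentheses) each frame
def update():
    currentupdatefunction()

def draw():
    currentdrawfunction()

pgzrun.go()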

This way of structuring code can be taken a step further by making use of state machines. In a state machine, a system can be in one of a (finite) number of predefined states, and rules determine the conditions under which a system can transition from one state into another.

Rules define conditions that need to be satisfied in order to move between states.

A state machine (in this case a very simplified version) can be implemented by first creating a core State() class. Each game state has its own update() and draw() methods, and a rules dictionary containing state:rule pairs – references to other state objects linked to functions for testing game conditions. As an example, the continuescreen state has two rules:

  • Transition to the gamescreen state if the SPACE key is pressed;
  • Transition to the titlescreen state if the frame timer reaches 10.

This is pulled together with a StateMachine() class, which keeps track of the current state. The state machine calls the update() and draw() methods for the current state, and checks the rules for transitioning between states. Each rule in the current state’s rules list is executed, with the state machine updating the reference to its current state if the rule function returns True. I’ve also added a frame counter that is incremented by the state machine’s update() function each time it is run. While not a necessary part of the state machine, it does allow the continue screen to count down from 10, and could have a number of other uses, such as for animating sprites.
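
A simplified sketch along those lines; the class and method names follow the article, but the details are illustrative, and the keyboard.space rule assumes Pygame Zero's keyboard built-in:

class State:
    def __init__(self, name, update=None, draw=None):
        self.name = name
        self.update = update or (lambda: None)
        self.draw = draw or (lambda: None)
        self.rules = {}                        # {target state: rule function}

    def addrule(self, target, rule):
        self.rules[target] = rule

class StateMachine:
    def __init__(self, initial):
        self.currentstate = initial
        self.frame = 0                         # frame counter, handy for countdowns

    def update(self):
        self.frame += 1
        self.currentstate.update()
        for target, rule in self.currentstate.rules.items():
            if rule():                         # a rule returning True triggers a transition
                self.currentstate = target
                self.frame = 0
                break

    def draw(self):
        self.currentstate.draw()

# example wiring for the continue screen's two rules
titlescreen = State("title")
gamescreen = State("game")
continuescreen = State("continue")
machine = StateMachine(continuescreen)
continuescreen.addrule(gamescreen, lambda: keyboard.space)
continuescreen.addrule(titlescreen, lambda: machine.frame >= 10)   # the frame timer reaching 10, as above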

Something else to point out is the use of lambda functions when adding rules to states. Lambda functions are small, single-expression anonymous functions that return the result of evaluating their expression when called. Lambda functions have been used in this example simply to make the code a little more concise, as there’s no benefit to naming the functions passed to addrule().

State machines have lots of other potential uses, including the modelling of player states. It’s also possible to extend the state machine in this example by adding onenter() and onexit() functions that can be called when transitioning between states.

Here’s Rik’s code, which gets a simple continue screen up and running in Python. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code, visit our Github repository here.

Get your copy of Wireframe issue 19

You can read more features like this one in Wireframe issue 19, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 19 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Create your own arcade-style continue screen | Wireframe #19 appeared first on Raspberry Pi.

Recreate 3D Monster Maze’s 8-bit labyrinth | Wireframe issue 18

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/recreate-3d-monster-mazes-8-bit-labyrinth-wireframe-issue-18/

You too can recreate the techniques behind a pioneering 3D maze game in Python. Mark Vanstone explains how.

3D Monster Maze, released in 1982 by J.K. Greye Software, written by Malcolm Evans.

3D Monster Maze

While 3D games have become more and more realistic, some may forget that 3D games on home computers started in the mists of time on machines like the Sinclair ZX81. One such pioneering game took pride of place in my collection of tapes, took many minutes to load, and required the 16K RAM pack expansion. That game was 3D Monster Maze — perhaps the most popular game released for the ZX81.

The game was released in 1982 by J.K. Greye Software, and written by Malcolm Evans. Although the graphics were incredibly low resolution by today’s standards, it became an instant hit. The idea of the game was to navigate around a randomly generated maze in search of the exit.

The problem was that a Tyrannosaurus rex also inhabited the maze, and would chase you down and have you for dinner if you didn’t escape quickly enough. The maze itself was made of straight corridors on a 16×18 grid, which the player would move around from one block to the next. The shapes of the blocks were displayed using the low-resolution pixels included in the ZX81’s character set, with 2×2 pixels per character on the screen.

The original ZX81 game drew its maze from chunky 2×2 pixel blocks.

Draw imaginary lines

There’s an interesting trick to recreating the original game’s 3D corridor display which, although quite limited, works well for a simplistic rendering of a maze. To do this, we need to draw imaginary lines diagonally from corner to corner in a square viewport: these are our vanishing point perspective guides. Then each corridor block in our view is half the width and half the height of the block nearer to us.

If we draw this out with lines showing the block positions, we get a view that looks like we’re looking down a long corridor with branches leading off left and right. In our Pygame Zero version of the maze, we’re going to use this wireframe as the basis for drawing our block elements. We’ll create graphics for blocks that are near the player, one block away, two, three, and four blocks away. We’ll need to view the blocks from the left-hand side, the right-hand side, and the centre.
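
A small sketch of that halving idea, drawn with Pygame Zero; the viewport size, colours, and depth count are illustrative:

import pygame
import pgzrun   # screen is a Pygame Zero built-in, injected once pgzrun is imported

WIDTH = HEIGHT = 600    # square viewport

def draw():
    screen.clear()
    # vanishing-point guides: diagonals from corner to corner
    screen.draw.line((0, 0), (WIDTH, HEIGHT), (120, 120, 120))
    screen.draw.line((WIDTH, 0), (0, HEIGHT), (120, 120, 120))
    # each block outline is half the size of the one in front of it
    size = WIDTH
    for depth in range(5):
        size //= 2
        offset = (WIDTH - size) // 2
        screen.draw.rect(pygame.Rect(offset, offset, size, size), (255, 255, 255))

pgzrun.go()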

The maze display is made by drawing diagonal lines to a central vanishing point.

Once we’ve created our block graphics, we’ll need to make some data to represent the layout of the maze. In this example, the maze is built from a 10×10 list of zeros and ones. We’ll set a starting position for the player and the direction they’re facing (0–3), then we’re all set to render a view of the maze from our player’s perspective.

The display is created from furthest away to nearest, so we look four blocks away from the player (in the direction they’re looking) and draw a block if there’s one indicated by the maze data to the left; we do the same on the right, and finally in the middle. Then we move towards the player by a block and repeat the process (with larger graphics) until we get to the block the player is on.

Each visible block is drawn from the back forward to make the player’s view of the corridors.

That’s all there is to it. To move backwards and forwards, just change the position in the grid the player’s standing on and redraw the display. To turn, change the direction the player’s looking and redraw. This technique’s obviously a little limited, and will only work with corridors viewed at 90-degree angles, but it launched a whole genre of games on home computers. It really was a big deal for many twelve-year-olds — as I was at the time — and laid the path for the vibrant, fast-moving 3D games we enjoy today.

Here’s Mark’s code, which recreates 3D Monster Maze’s network of corridors in Python. To get it running on your system, you’ll need to install Pygame Zero. And to download the full code, visit our Github repository here.

Get your copy of Wireframe issue 18

You can read more features like this one in Wireframe issue 18, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 18 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate 3D Monster Maze’s 8-bit labyrinth | Wireframe issue 18 appeared first on Raspberry Pi.

How to build databases using Python and text files | Hello World #9

Post Syndicated from Mac Bowley original https://www.raspberrypi.org/blog/how-to-build-databases-using-python-and-text-files-hello-world-9/

In Hello World issue 9, Raspberry Pi’s own Mac Bowley shares a lesson that introduces students to databases using Python and text files.

In this lesson, students create a library app for their books. This will store information about their book collection and allow them to display, manipulate, and search their collection. You will show students how to use text files that act as a database in their programs.

The project will give your students practical examples of database terminology and hands-on experience working with persistent data. It gives opportunities for students to define and gain concrete experience with key database concepts using a language they are familiar with. The script that accompanies this activity can be adapted to suit your students’ experience and competency.

This ready-to-go software project can be used alongside approaches such as PRIMM or pair programming, or as a worked example to engage your students in programming with persistent data.

What makes a database?

Start by asking the students why we need databases and what they are: do they ever feel unorganised? Life can get complicated, and there is so much to keep track of that the raw data involved can be overwhelming. How can we use computing to solve this problem? If only there was a way of organising and accessing data that would let us get it out of our heads. Databases are a way of organising the data we care about, so that we can easily access it and use it to make our lives easier.

Then explain that in this lesson the students will create a database, using Python and a text file. The example I show students is a personal library app that keeps track of which books I own and where I keep them. I have also run this lesson and allowed the students to pick their own items to keep track of — it just involves a little more planning time at the end. Split the class up into pairs; have each of them discuss and select five pieces of data about a book (or their own item) they would like to track in a database. They should also consider which type of data each piece is. Give them five minutes to discuss and select some data to track.

Databases are organised collections of data, and this allows them to be displayed, maintained, and searched easily. Our database will have one table — effectively just like a spreadsheet table. The headings on each of the columns are the fields: the individual pieces of data we want to store about the books in our collection. The pieces of information about a single book are called its attributes and are stored together in one record, which would be a single row in our database table. To make it easier to search and sort our database, we should also select a primary key: one field that will be unique for each book. Sometimes one of the fields we are already storing works for this purpose; if not, then the database will create an ID number that it uses to uniquely identify each record.

Create a library application

Pull the class back together and ask a few groups about the data they selected to track. Make sure they have chosen appropriate data types. Ask some if they can find any of the fields that would be a primary key; the answer will most likely be no. The ISBN could work, but for our simple application, having to type in a 10- or 13-digit number just to use for an ID would be overkill. In our database, we are going to generate our own IDs.

The requirements for our database are that it can do the following things: save data to a file, read data from that file, create new books, display our full database, allow the user to enter a search term, and display a list of relevant results based on that term. We can decompose the problem into the following steps:

  • Set up our structures
  • Create a record
  • Save the data to the database file
  • Read from the database file
  • Display the database to the user
  • Allow the user to search the database
  • Display the results

Have the class log in and power up Python. If they are doing this locally, have them create a new folder to hold this project. We will be interacting with external files and so having them in the same folder avoids confusion with file locations and paths. They should then load up a new Python file. To start, download the starter file from the link provided. Each student should make a copy of this file. At first, I have them examine the code, and then get them to run it. Using concepts from PRIMM, I get them to print certain messages when a menu option is selected. This can be a great exemplar for making a menu in any application they are developing. This will be the skeleton of our database app: giving them a starter file can help ease some of the cognitive load on students.

Have them examine the variables and make guesses about what they are used for.

  • current_ID – a variable to count up as we create records, this will be our primary key
  • new_additions – a list to hold any new records we make while our code is running, before we save them to the file
  • filename – the name of the database file we will be using
  • fields – a list of our fields, so that our dictionaries can be aligned with our text file
  • data – a list that will hold all of the data from the database, so that we can search and display it without having to read the file every time

Create the first record

We are going to use dictionaries to store our records. They reference their elements using keys instead of indices, which fit our database fields nicely. We are going to generate our own IDs. Each of these must be unique, so a variable is needed that we can add to as we make our records. This is a user-focused application, so let’s make it so our user can input the data for the first book. The strings, in quotes, on the left of the colon, are the keys (the names of our fields) and the data on the right is the stored value, in our case whatever the user inputs in response to our appropriate prompts. We finish this part off by adding the record to the file, incrementing the current ID, and then displaying a useful feedback message to the user to say their record has been created successfully. Your students should now save their code and run it to make sure there aren’t any syntax errors.
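
A hedged sketch of that create-a-record step; current_ID, new_additions, and fields follow the article's variable names, but the five fields themselves and the prompts are illustrative, not the original starter file:

current_ID = 1
new_additions = []
fields = ["ID", "Title", "Author", "Year", "Location"]

def create_record():
    global current_ID
    record = {
        "ID": current_ID,                          # our generated primary key
        "Title": input("Title: "),
        "Author": input("Author: "),
        "Year": input("Year: "),
        "Location": input("Where is it kept? "),
    }
    new_additions.append(record)                   # held here until saved to the file
    current_ID += 1
    print("Record created successfully!")

create_record()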

You could make use of pair programming, with carefully selected pairs taking it in turns in the driver and navigator roles. You could also offer differing levels of scaffolding: providing some of the code and asking them to modify it based on given requirements.

How to use the code in your class

To complete the project, your students can add functionality to save their data to a CSV file, read from a database file, and allow users to search the database. The code for the whole project is available at helloworld.cc/database.

An example of the code

You may want to give your students the entire piece of code. They can investigate and modify it to their own purpose. You can also lead them through it, having them follow you as you demonstrate how an expert constructs a piece of software. I have done both to great effect. Let me know how your classes get on! Get in touch at [email protected]

Hello World issue 9

The brand-new issue of Hello World is out today, and available right now as a free PDF download from the Hello World website.



UK-based educators can also sign up to receive Hello World as a printed magazine FOR FREE, direct to their door. And those outside the UK, educator or not, can subscribe to receive new digital issues of Hello World in their inbox on the day of release.

The post How to build databases using Python and text files | Hello World #9 appeared first on Raspberry Pi.

Code your own path-following Lemmings in Python | Wireframe issue 17

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-your-own-path-following-lemmings-in-python-wireframe-issue-17/

Learn how to create your own obedient lemmings that follow any path put in front of them. Raspberry Pi’s own Rik Cross explains how.

The original Lemmings, first released for the Amiga, quickly spread like a virus to just about every computer and console of the day.

Lemmings

Lemmings is a puzzle-platformer, created at DMA Design, and first became available for the Amiga in 1991. The aim is to guide a number of small lemming sprites to safety, navigating traps and difficult terrain along the way. Left to their own devices, the lemmings will simply follow the path in front of them, but additional ‘special powers’ given to lemmings allow them to (among other things) dig, climb, build, and block in order to create a path to freedom (or to the next level, anyway).

Code your own lemmings

I’ll show you a simple way (using Python and Pygame) in which lemmings can be made to follow the terrain in front of them. The first step is to store the level’s terrain information, which I’ve achieved by using a two-dimensional list to store the colour of each pixel in the background ‘level’ image. In my example, I’ve used the ‘Lemcraft’ tileset by Matt Hackett (of Lost Decade Games) – taken from opengameart.org – and used the Tiled software to stitch the tiles together into a level.

The algorithm we then use can be summarised as follows: check the pixels immediately below a lemming. If the colour of those pixels is the same as the background colour, there is nothing beneath the lemming’s feet, and so it is falling. In this case, move the lemming down by one pixel on the y-axis. If the lemming isn’t falling, then it’s walking. In this case, we need to see whether there is a non-ground, background-coloured pixel in front of the lemming for it to move onto.

Sprites cling to the ground below them, navigating uneven terrain, and reversing direction when they hit an impassable obstacle.

If a pixel is found in front of the lemming (determined by its direction) that is low enough to get to (i.e. lower than its climbheight), then the lemming moves forward on the x-axis by one pixel, and upwards on the y-axis to the new ground level. However, if no suitable ground is found to move onto, then the lemming reverses its direction.

The algorithm is stored as a lemming’s update() method, which is executed for each lemming, each frame of the game. The sample level.png file can be edited, or swapped for another image altogether. If using a different image, just remember to update the level’s BACKGROUND_COLOUR in your code, stored as a (red, green, blue, alpha) tuple. You may also need to increase your lemming climbheight if you want them to be able to navigate a climb of more than four pixels.
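
A hedged sketch of that update() method; it assumes level is the two-dimensional list of pixel colours described above (indexed here as level[x][y]), and the BACKGROUND_COLOUR value and climbheight are illustrative:

BACKGROUND_COLOUR = (0, 0, 0, 255)    # the level image's empty-space colour

class Lemming:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.direction = 1            # 1 = walking right, -1 = walking left
        self.climbheight = 4

    def update(self, level):
        # nothing but background under our feet: we're falling
        if level[self.x][self.y + 1] == BACKGROUND_COLOUR:
            self.y += 1
            return
        # walking: look for a background-coloured pixel ahead, at most climbheight pixels up
        for step in range(self.climbheight + 1):
            if level[self.x + self.direction][self.y - step] == BACKGROUND_COLOUR:
                self.x += self.direction
                self.y -= step
                return
        self.direction *= -1          # no way forward: turn around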

There are other things you can do to make a full Lemmings clone. You could try replacing the yellow-rectangle lemmings in my example with pixel-art sprites with their own walk cycle animation (see my article in issue #14), or you could give your lemmings some of the special powers they’ll need to get to safety, achieved by creating flags that determine how lemmings interact with the terrain around them.

Here’s Rik’s code, which gets those path-following lemmings moving about in Python. To get it running on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 17

You can read more features like this one in Wireframe issue 17, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 17 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code your own path-following Lemmings in Python | Wireframe issue 17 appeared first on Raspberry Pi.

Recreate the sprite-following Options from Gradius using Python | Wireframe issue 16

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/recreate-the-sprite-following-options-from-gradius-using-python-wireframe-issue-16/

Learn how to create game objects that follow the path of the main player sprite. Raspberry Pi’s own Rik Cross explains all.

Options first appeared in 1985’s Gradius, but became a mainstay of numerous sequels and spin-offs, including the Salamander and Parodius series of games.

Gradius

First released by Konami in 1985, Gradius pushed the boundaries of the shoot-’em-up genre with its varied level design, dramatic boss fights, and innovative power-up system.

One of the most memorable of its power-ups was the Option — a small, drone-like blob that followed the player’s ship and effectively doubled its firepower.

By collecting more power-ups, it was possible to gather a cluster of death-dealing Options, which obediently moved wherever the player moved.

Recreate sprite-following in Python

There are a few different ways of recreating Gradius’ sprite-following, but in this article, I’ll show you a simple implementation that uses the player’s ‘position history’ to place other following items on the screen. As always, I’ll be using Python and Pygame to recreate this effect, and I’ll be making use of a spaceship image created by ‘pitrizzo’ from opengameart.org.

The first thing to do is to create a spaceship and a list of ‘power-up’ objects. Storing the power-ups in a list allows us to perform a simple calculation on a power-up to determine its position, as you’ll see later. As we’ll be iterating through the power-ups stored in a list, there’s no need to create a separate variable for each. Instead, we can use list comprehension to create the power-ups:

powerups = [Actor('powerup') for p in range(3)]

The player’s position history will be a list of previous positions, stored as a list of (x,y) tuples. Each time the player’s position changes, the new position is added to the front of the list (as the new first element). We only need to know the spaceship’s recent position history, so the list is also truncated to only contain the 100 most recent positions. Although not necessary, the following code can be added to allow you to see a selection (in this case every fifth) of these previous positions:

for p in previouspositions[::5]:
    screen.draw.filled_circle(p, 2, (255,0,0))

Plotting the spaceship’s position history.

Each frame of the game, this position list is used to place each of the power-ups. In our Gradius-like example, we need each of these objects to follow the player’s spaceship in a line, as if moving together in a single-file queue. To achieve this effect, a power-up’s position is determined by its position in the power-ups list, with the first power-up in the list taking up a position nearest to the player. In Python, using enumerate when iterating through a list allows us to get the power-up’s position in the list, which can then be used to determine which position in the player’s position history to use.

newposition = previouspositions[(i+1)*20]

So, the first power-up in the list (element 0 in the list) is placed at the coordinates of the twentieth ((0+1)*20) position in the spaceship’s history, the second power-up at the fortieth position, and so on. Using this simple calculation, elements are equally spaced along the spaceship’s previous path. The only thing to be careful of here is that you have enough items in the position history for the number of items you want to follow the player!

Power-ups following a player sprite, using the player’s position history.

This leaves one more question to answer: where do we place these power-ups initially, when the spaceship has no position history? There are a few different ways of solving this problem, but the simplest is just to generate a fictitious position history at the beginning of the game. As I want power-ups to be lined up behind the spaceship initially, I again used list comprehension to generate a list of 100 positions with ever-decreasing x-coordinates.

previouspositions = [(spaceship.x - i*spaceship.speed,spaceship.y) for i in range(100)]

With an initial spaceship position of (400,400) and a spaceship.speed of 4, this means the list will initially contain the following coordinates:

previouspositions = [(400,400),(396,400),(392,400),(388,400),...]
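
Pulling those snippets together, a minimal sketch of the whole technique might look like this (assuming Pygame Zero, with 'spaceship' and 'powerup' as placeholder image names):

import pgzrun   # Actor, keyboard, and screen are Pygame Zero built-ins, injected once pgzrun is imported

WIDTH, HEIGHT = 800, 800

spaceship = Actor("spaceship", (400, 400))
spaceship.speed = 4
powerups = [Actor("powerup") for p in range(3)]

# fictitious history so the power-ups start lined up behind the ship
previouspositions = [(spaceship.x - i * spaceship.speed, spaceship.y) for i in range(100)]

def update():
    global previouspositions
    if keyboard.left:
        spaceship.x -= spaceship.speed
    if keyboard.right:
        spaceship.x += spaceship.speed
    if keyboard.up:
        spaceship.y -= spaceship.speed
    if keyboard.down:
        spaceship.y += spaceship.speed
    # record the new position at the front and keep only the 100 most recent
    previouspositions = [spaceship.pos] + previouspositions[:99]
    # each power-up sits further back along the ship's path
    for i, powerup in enumerate(powerups):
        powerup.pos = previouspositions[(i + 1) * 20]

def draw():
    screen.clear()
    spaceship.draw()
    for powerup in powerups:
        powerup.draw()

pgzrun.go()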

Storing our player’s previous position history has allowed us to create path-following power-ups with very little code. The idea of storing an object’s history can have very powerful applications. For example, a paint program could store previous commands that have been executed, and include an ‘undo’ button that can work backwards through the commands.

Here’s Rik’s code, which recreates those sprite-following Options in Python. To get it running on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 16

You can read more features like this one in Wireframe issue 16, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 16 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate the sprite-following Options from Gradius using Python | Wireframe issue 16 appeared first on Raspberry Pi.