Tag Archives: python

Code a Rally-X-style mini-map | Wireframe #43

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-a-rally-x-style-mini-map-wireframe-43/

Race around using a mini-map for navigation, just like the arcade classic, Rally-X. Mark Vanstone has the code

In Namco’s original arcade game, the red cars chased the player relentlessly around each level. Note the handy mini-map on the right.

The original Rally-X arcade game blasted onto the market in 1980, at the same time as Pac‑Man and Defender. This was the first year that developer Namco had exported its games outside Japan thanks to the deal it struck with Midway, an American game distributor. The aim of Rally-X is to race a car around a maze, avoiding enemy cars while collecting yellow flags – all before your fuel runs out.

The aspect of Rally-X that we’ll cover here is the mini-map. As the car moves around the maze, its position can be seen relative to the flags on the right of the screen. The main view of the maze only shows a section of the whole map, and scrolls as the car moves, whereas the mini-map shows the whole map at once, but without any of the maze walls – just dots for the car and flags (and, in the original, the enemy cars). In our example, the mini-map is five times smaller than the main map, so the calculation to translate main-map co-ordinates to mini-map co-ordinates is straightforward.

To set up our Rally-X homage in Pygame Zero, we can stick with the default screen size of 800×600. If we use 200 pixels for the side panel, that leaves us with a 600×600 play area. Our player’s car will be drawn in the centre of this area at the co-ordinates 300,300. We can use the in-built rotation of the Actor object by setting the angle property of the car. The maze scrolls depending on which direction the car is pointing, and this can be done by having a lookup table in the form of a dictionary (directionMap) in which we define x and y increments for each angle the car can travel. When the cursor keys are pressed, the car stays central and the map moves.
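
Mark’s actual lookup table is in the full code download; a minimal sketch of the idea might look like this (the angle steps, scroll speed, and image name are illustrative assumptions, not his exact values):

# directionMap: for each car angle, the x and y increments applied to
# the scrolling map each frame (illustrative: 8 directions, 4px/frame).
directionMap = {
    0: (0, 4), 45: (-4, 4), 90: (-4, 0), 135: (-4, -4),
    180: (0, -4), 225: (4, -4), 270: (4, 0), 315: (4, 4),
}

mapX, mapY = 0, 0                            # current scroll offset
car = Actor('car', center=(300, 300))        # stays fixed mid-playfield

def update():
    global mapX, mapY
    if keyboard.left:
        car.angle = (car.angle + 45) % 360   # rotate anticlockwise
    if keyboard.right:
        car.angle = (car.angle - 45) % 360
    dx, dy = directionMap[car.angle]
    mapX += dx                               # the car stays central;
    mapY += dy                               # the map moves instead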

A screenshot of our Rally-X homage running in Pygame Zero

Roam the maze and collect those flags in our Python homage to Rally-X.

To detect the car hitting a wall, we can use a collision map. This isn’t a particularly memory-efficient way of doing it, but it’s easy to code. We just use a bitmap the same size as the main map which has all the roads as black and all the walls as white. With this map, we can detect if there’s a wall in the direction in which the car’s moving by testing the pixels directly in front of it. If a wall is detected, we rotate the car rather than moving it. If we draw the side panel after the main map, we’ll then be able to see the full layout of the screen with the map scrolling as the car navigates through the maze.
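
In sketch form, reusing the directionMap from above (the probe distance, bitmap filename, and the idea of reading the red channel are assumptions; Mark’s code may differ):

import pygame

# Black-and-white bitmap the same size as the big map: white = wall.
collisionMap = pygame.image.load('images/collisionmap.png')

def hitsWall(angle, mapX, mapY):
    # The map scrolls opposite to the car's travel, so the car sits at
    # the playfield centre minus the current scroll offset.
    dx, dy = directionMap[angle]
    probe = (int(300 - mapX - dx * 5),       # a point ~20 pixels ahead
             int(300 - mapY - dy * 5))       # of the car's nose
    r, g, b, a = collisionMap.get_at(probe)
    return r > 128                           # white-ish pixel = wall

# In update(): if hitsWall(...), rotate the car instead of moving it.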

We can add flags as a list of Actor objects. We could place these randomly, but for the sake of simplicity, our sample code has them defined in a list of x and y co-ordinates. We need to move the flags with the map, so in each update(), we loop through the list and add the same increments to the x and y co‑ordinates as the main map. If the car collides with a flag, we set a collected variable on it and stop drawing it. Having put all of this in place, we can draw the mini-map, which will show the car and the flags. All we need to do is divide the object co-ordinates by five and add an x and y offset so that the objects appear in the right place on the mini-map.
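
Putting the divide-by-five rule into code might look like this sketch, assuming each object’s position is also tracked in big-map co-ordinates (the panel position, dot sizes, and the mapX/mapY attributes are illustrative):

PANEL_X, PANEL_Y = 620, 40    # where the mini-map sits in the side panel
SCALE = 5                     # the mini-map is five times smaller

def drawMiniMap():
    # Positions here are big-map co-ordinates: divide by five and
    # offset into the panel.
    carX = (300 - mapX) // SCALE + PANEL_X
    carY = (300 - mapY) // SCALE + PANEL_Y
    screen.draw.filled_rect(Rect((carX, carY), (4, 4)), (255, 0, 0))
    for flag in flags:
        if not flag.collected:
            screen.draw.filled_rect(
                Rect((flag.mapX // SCALE + PANEL_X,
                      flag.mapY // SCALE + PANEL_Y), (3, 3)),
                (255, 255, 0))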

And those are the basics of Rally-X! All it needs now is a fuel gauge, some enemy cars, and obstacles – but we’ll leave those for you to sort out…

Here’s Mark’s code for a Rally-X-style driving game with mini-map. To get it running on your system, you’ll need to install Pygame Zero. And to download the full code and assets, head here.

Get your copy of Wireframe issue 43

You can read more features like this one in Wireframe issue 43, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 43 for free in PDF format.

Wireframe #43, with the gorgeous Sea of Stars on the cover.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!


The post Code a Rally-X-style mini-map | Wireframe #43 appeared first on Raspberry Pi.

Global sunrise/sunset Raspberry Pi art installation

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/global-sunrise-sunset-raspberry-pi-art-installation/

24h Sunrise/Sunset is a digital art installation that displays a live sunset and sunrise happening somewhere in the world with the use of CCTV.

Image by fotoswiss.com

Artist Dries Depoorter wanted to prove that “CCTV cameras can show something beautiful”, and turned to Raspberry Pi to power this global project.

Image by fotoswiss.com

Harnessing CCTV

The arresting visuals are beamed to viewers using two Raspberry Pi 3B+ computers and an Arduino Nano Every; the Raspberry Pis stream internet protocol (IP) cameras using the command-line media player OMXPlayer.

Dual Raspberry Pi power

The two Raspberry Pis communicate with each other using the MQTT protocol — a standard messaging protocol for the Internet of Things (IoT) that’s ideal for connecting remote devices with a small code footprint and minimal network bandwidth.

One of the Raspberry Pis checks at which location in the world a sunrise or sunset is happening and streams the closest CCTV camera.
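
Dries hasn’t detailed his messaging code here, but with the paho-mqtt library the hand-off between the two Raspberry Pis could look roughly like this (the broker address, topic, and payload are invented for the sketch; in reality the publisher and subscriber run on different Pis):

import json
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect('192.168.1.10', 1883)       # hypothetical broker address

# Pi number one: publish which CCTV stream should be shown right now.
payload = {'url': 'rtsp://example.com/cam1', 'event': 'sunset'}
client.publish('sunsets/camera', json.dumps(payload))

# Pi number two: subscribe and hand the stream URL to OMXPlayer.
def on_message(client, userdata, msg):
    info = json.loads(msg.payload)
    print('now streaming', info['url'])     # e.g. spawn omxplayer here

client.subscribe('sunsets/camera')
client.on_message = on_message
client.loop_forever()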

The insides of the sleek display screen…

Beam me out, Scotty

The big screens are connected to the Arduino with the I2C protocol, and the Arduino is connected over serial to the second Raspberry Pi. Dries also made a custom printed circuit board (PCB) so the build looks cleaner.

All that hardware is powered by an industrial power supply, just because Dries liked the style of it.

Software

Everything is written in Python 3, and Dries harnessed the Python 3 libraries BeautifulSoup, Sun, Geopy, and Pytz to calculate sunrise and sunset times at specific locations. Google Firebase databases in the cloud help with admin by way of saving timestamps and the IP addresses of the cameras.
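
As a rough sketch of that kind of calculation, using the geopy, suntime, and pytz packages (the exact libraries and calls in Dries’s code may differ):

import pytz
from geopy.geocoders import Nominatim
from suntime import Sun

# Geocode a place name, then work out when the sun sets there today.
place = Nominatim(user_agent='sunset-demo').geocode('Ghent, Belgium')
sun = Sun(place.latitude, place.longitude)
sunset_utc = sun.get_sunset_time()         # timezone-aware UTC datetime

local = sunset_utc.astimezone(pytz.timezone('Europe/Brussels'))
print('Sunset in Ghent at', local.strftime('%H:%M'))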

Hardware

The artist stood in front of the two large display screens
Image of the artist with his work by fotoswiss.com

And, lastly, Dries requested a shoutout for his favourite local Raspberry Pi shop Gotron in Ghent.

The post Global sunrise/sunset Raspberry Pi art installation appeared first on Raspberry Pi.

What the blink is my IP address?

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/what-the-blink-is-my-ip-address/

Picture the scene: you have a Raspberry Pi configured to run on your network, you power it up headless (without a monitor), and now you need to know which IP address it was assigned.

Matthias uses a headless Raspberry Pi Zero W for most of his projects, and he got bored with looking up its IP address on his DHCP server or hunting for it by pinging different addresses. So he came up with this solution: make the Raspberry Pi blink out its own IP address.

How does it work?

A script runs when you start your Raspberry Pi and indicates which IP address is assigned to it by blinking it out on the device’s LED. The script comprises about 100 lines of Python, and you can get it on GitHub.

A screen running Python
Easy peasy GitHub breezy

The power/status LED on the edge of the Raspberry Pi blinks numbers in a Roman numeral-like scheme. You can tell which number it’s blinking based on the length of the blink and the gaps between each blink, rather than, for example, having to count nine blinks for a number nine.

Blinking in Roman numerals

Short, fast blinks represent the numbers one to four, depending on how many short, fast blinks you see. A gap between short, fast blinks means the LED is about to blink the next digit of the IP address, and a longer blink represents the number five. So reading the combination of short and long blinks will give you your device’s IP address.
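
Matthias’s real script is the one on GitHub; the encoding idea boils down to something like this sketch (timings invented, LED toggling replaced with prints):

import time

SHORT, LONG, GAP = 0.2, 0.8, 1.0    # illustrative timings in seconds

def blink(duration):
    # Stand-in for the LED: on a real Pi you'd write 1 then 0 to
    # /sys/class/leds/led0/brightness instead of printing.
    print('on'); time.sleep(duration); print('off'); time.sleep(0.2)

def blink_digit(d):
    # (a zero digit would need its own special pattern in a real script)
    for _ in range(d // 5):         # one long blink counts as five
        blink(LONG)
    for _ in range(d % 5):          # each short blink counts as one
        blink(SHORT)
    time.sleep(GAP)                 # the gap ends the digit

for digit in '112':                 # an address ending in ...112
    blink_digit(int(digit))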

You can see this in action at this exact point in the video. You’ll see the LED blink fast once, then leave a gap, blink fast once again, then leave a gap, then blink fast twice. That means the device’s IP address ends in 112.

What are octets?

Luckily, you usually only need to know the final octet of the IP address (the digits after the last dot – in 192.168.0.112, that’s 112), as the earlier octets will almost always be the same for every computer on the LAN.

The script blinks out the last octet ten times, to give you plenty of chances to read it. Then it returns the LED to its default functionality.

Which LED on which Raspberry Pi?

On a Raspberry Pi Zero W, the script uses the green status/power LED, and on other Raspberry Pis it uses the green LED next to the red power LED.

The green LED blinking the IP address (the red power LED is slightly hidden by Matthias’ thumb)

Once you get the hang of the Morse code-like blinking style, this is a really nice quick solution to find your device’s IP address and get on with your project.

The post What the blink is my IP address? appeared first on Raspberry Pi.

Recreate Q*bert’s cube-hopping action | Wireframe #42

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/recreate-qberts-cube-hopping-action-wireframe-42/

Code the mechanics of an eighties arcade hit in Python and Pygame Zero. Mark Vanstone shows you how

Players must change the colour of every cube to complete the level.

Late in 1982, a funny little orange character with a big nose landed in arcades. The titular Q*bert’s task was to jump around a network of cubes arranged in a pyramid formation, changing the colours of each as they went. Once the cubes were all the same colour, it was on to the next level; to make things more interesting, there were enemies like Coily the snake, and objects which helped Q*bert: some froze enemies in their tracks, while floating discs provided a lift back to the top of the stage.

Q*bert was designed by Warren Davis and Jeff Lee at the American company Gottlieb, and soon became such a smash hit that, the following year, it was already being ported to most of the home computer platforms available at the time. New versions and remakes continued to appear for years afterwards, with a mobile phone version appearing in 2003. Q*bert was by far Gottlieb’s most popular game, and after several changes in company ownership, the firm is now part of Sony’s catalogue – Q*bert’s main character even made its way into the 2015 film, Pixels.

Q*bert uses isometric-style graphics to draw a pseudo-3D display – something we can easily replicate in Pygame Zero by using a single cube graphic with which we make a pyramid of Actor objects. Starting with seven cubes on the bottom row, we can create a simple double loop to create the pile of cubes. Our Q*bert character will be another Actor object which we’ll position at the top of the pile to start. The game screen can then be displayed in the draw() function by looping through our 28 cube Actors and then drawing Q*bert.
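
A sketch of that double loop (the image names and pixel spacing are illustrative):

cubes = []
for row in range(7):                      # bottom row has 7 cubes
    for col in range(7 - row):
        cube = Actor('cube')
        # Shift each row half a cube across and most of a cube up
        # (assuming 64x48-pixel isometric cube images).
        cube.pos = (176 + col * 64 + row * 32, 480 - row * 48)
        cubes.append(cube)                # 7+6+5+4+3+2+1 = 28 cubes

qbert = Actor('qbert', (cubes[-1].x, cubes[-1].y - 32))  # top of the pile

def draw():
    screen.clear()
    for cube in cubes:
        cube.draw()
    qbert.draw()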

Our homage to Q*bert. Try not to fall into the terrifying void.

We need to detect player input, and for this we use the built-in keyboard object and check the cursor keys in our update() function. We need to make Q*bert move from cube to cube, so we can move the Actor 32 pixels on the x-axis and 48 pixels on the y-axis. If we do this in steps of 2 for x and 3 for y, we will have Q*bert on the next cube in 16 steps. We can also change his image to point in the right direction, depending on the key pressed, in our jump() function. If we used this linear movement in our move() function, we’d see the Actor go in a straight line to the next block. To add a bit of bounce to Q*bert’s movement, we add or subtract (depending on the direction) the values in the bounce[] list, which bends the straight line into a more curved, hopping motion.
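
In sketch form (the bounce values here are invented; they just need to sum to zero so the jump still ends exactly on the next cube):

# 16 per-step tweaks that bend the straight line into an arc.
bounce = [-4, -3, -2, -2, -1, -1, 0, 0, 0, 0, 1, 1, 2, 2, 3, 4]

def jump(dx, dy):
    # dx is +/-2 and dy is +/-3: after 16 steps Q*bert has moved exactly
    # 32 pixels across and 48 pixels up or down -- one whole cube.
    qbert.step = 0
    qbert.dx, qbert.dy = dx, dy

def move():
    if qbert.step < 16:
        qbert.x += qbert.dx
        qbert.y += qbert.dy + bounce[qbert.step]
        qbert.step += 1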

Now that we have our long-nosed friend jumping around, we need to check where he’s landing. We can loop through the cube positions and check whether Q*bert is over each one. If he is, then we change the image of the cube to one with a yellow top. If we don’t detect a cube under Q*bert, then the critter’s jumped off the pyramid, and the game’s over. We can then do a quick loop through all the cube Actors, and if they’ve all been changed, then the player has completed the level. So those are the basic mechanics of jumping around on a pyramid of cubes. We just need some snakes and other baddies to annoy Q*bert – but we’ll leave those for you to add. Good luck!
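
A sketch of that landing check (the tolerances and image names are assumed):

def checkLanding():
    for cube in cubes:
        # Is Q*bert standing on this cube's top face?
        if abs(qbert.x - cube.x) < 16 and abs(qbert.y - (cube.y - 32)) < 16:
            cube.image = 'cube_yellow'          # change its colour
            if all(c.image == 'cube_yellow' for c in cubes):
                print('Level complete!')
            return
    print('Game over: Q*bert fell off the pyramid')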

Here’s Mark’s code for a Q*bert-style, cube-hopping platform game. To get it running on your system, you’ll need to install Pygame Zero. And to download the full code and assets, head here.

Get your copy of Wireframe issue 42

You can read more features like this one in Wireframe issue 42, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 42 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate Q*bert’s cube-hopping action | Wireframe #42 appeared first on Raspberry Pi.

Raspberry Pi retro player

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/raspberry-pi-retro-player/

We found this project at TeCoEd and we loved the combination of an OLED display housed inside a retro Argus slide viewer. It uses a Raspberry Pi 3 with Python and OpenCV to pull out single frames from a video and write them to the display in real time.

TeCoEd names this creation the Raspberry Pi Retro Player, or RPRP, or -- rather neatly -- RP squared. The Argus viewer, he tells us, was a charity-shop find that cost just 50p.  It sat collecting dust for a few years until he came across an OLED setup guide on hackster.io, which inspired the birth of the RPRP.

Timelapse of the build and walk-through of the code

At the heart of the project is a Raspberry Pi 3 which is running a Python program that uses the OpenCV computer vision library.  The code takes a video clip and breaks it down into individual frames. Then it resizes each frame and converts it to black and white, before writing it to the OLED display. The viewer sees the video play in pleasingly retro monochrome on the slide viewer.
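
TeCoEd’s own code is on GitHub; as a rough sketch of the loop described above, here using the luma.oled driver (which supports the SH1106 mentioned below – his setup may differ):

import cv2
from PIL import Image
from luma.core.interface.serial import i2c
from luma.oled.device import sh1106

device = sh1106(i2c(port=1, address=0x3C))    # 128x64 SH1106 over I2C

cap = cv2.VideoCapture('clip.mp4')
while True:
    ok, frame = cap.read()
    if not ok:
        break                                  # end of the clip
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # to monochrome
    small = cv2.resize(gray, device.size)             # fit the OLED
    # The luma driver wants a 1-bit PIL image the size of the panel.
    device.display(Image.fromarray(small).convert('1'))
cap.release()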

Tiny but cute, like us!

TeCoEd ran into some frustrating problems with the OLED display, which, he discovered, uses the SH1106 driver rather than the standard SSD1306 driver that the Adafruit CircuitPython library expects. Many OLED displays use the SSD1306, but it turns out that cheaper displays like the one in this project use the SH1106. He has made a video to spare other makers this particular throw-it-all-in-the-bin moment.

Tutorial for using the SH1106 driver for cheap OLED displays

If you’d like to try this build for yourself, here’s all the code and setup advice on GitHub.

Wiring diagram

TeCoEd is, as ever, our favourite kind of maker -- the sharing kind! He has collated everything you’ll need to get to grips with OpenCV, connecting the SH1106 OLED screen over I2C, and more. He’s even told us where we can buy the OLED board.

The post Raspberry Pi retro player appeared first on Raspberry Pi.

Recreate Time Pilot’s free-scrolling action | Wireframe #41

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/recreate-time-pilots-free-scrolling-action-wireframe-41/

Fly through the clouds in our re-creation of Konami’s classic 1980s shooter. Mark Vanstone has the code

Arguably one of Konami’s most successful titles, Time Pilot burst into arcades in 1982. Yoshiki Okamoto worked on it secretly, and it proved so successful that a sequel soon followed. In the original, the player flew through five eras, from 1910, 1940, 1970, 1982, and then to the far future: 2001. Aircraft start as biplanes and progress to become UFOs, naturally, by the last level.

Players also rescue other pilots by picking them up as they parachute from their aircraft. The player’s plane stays in the centre of the screen while other game objects move around it. The clouds that give the impression of movement have a parallax style to them, some moving faster than others, offering an illusion of depth.

To make our own version with Pygame Zero, we need eight frames of player aircraft images – one for each direction it can fly. After we create a player Actor object, we can get input from the cursor keys and change the direction the aircraft is pointing with a variable set from 0 to 7, 0 being the up direction. Before we draw the player to the screen, we set the image of the Actor to the stem image name plus whatever that direction variable is at the time. That will give us a rotating aircraft.
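
A sketch of that rotation (the image names are invented; direction 0 = up, counting clockwise):

player = Actor('plane0', center=(400, 300))
player.direction = 0                # 0 = up, then clockwise in 45° steps

def update():
    if keyboard.left:
        player.direction = (player.direction - 1) % 8
    if keyboard.right:
        player.direction = (player.direction + 1) % 8
    # Eight images named plane0 ... plane7, one per direction.
    player.image = 'plane' + str(player.direction)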

To provide a sense of movement, we add clouds. We can make a set of random clouds on the screen and move them in the opposite direction to the player aircraft. As we only have eight directions, we can use a lookup table to change the x and y coordinates rather than calculating movement values. When they go off the screen, we can make them reappear on the other side so that we end up with an ‘infinite’ playing area. Add a level variable to the clouds, and we can move them at different speeds on each update() call, producing the parallax effect. Then we need enemies. They will need the same eight frames to move in all directions. For this sample, we will just make one biplane, but more could be made and added.
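
Sketched out, the clouds and their lookup table might look like this (cloud count, speeds, and screen size are assumptions):

from random import randint

# One lookup entry per direction: how the world moves, i.e. the
# opposite of the way the player's aircraft is flying.
moves = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
         (0, -1), (1, -1), (1, 0), (1, 1)]

clouds = []
for _ in range(15):
    cloud = Actor('cloud', (randint(0, 800), randint(0, 600)))
    cloud.level = randint(1, 3)      # depth layer: nearer = faster
    clouds.append(cloud)

def moveClouds():
    dx, dy = moves[player.direction]
    for cloud in clouds:
        cloud.x = (cloud.x + dx * cloud.level) % 800   # wrap around for
        cloud.y = (cloud.y + dy * cloud.level) % 600   # an endless sky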

Our Python homage to Konami’s arcade classic.

To get the enemy plane to fly towards the player, we need a little maths. We use the math.atan2() function to work out the angle between the enemy and the player. We convert that to a direction which we set in the enemy Actor object, and set its image and movement according to that direction variable. We should now have the enemy swooping around the player, but we will also need some bullets. When we create bullets, we need to put them in a list so that we can update each one individually in our update(). When the player hits the fire button, we just need to make a new bullet Actor and append it to the bullets list. We give it a direction (the same as the player Actor) and send it on its way, updating its position in the same way as we have done with the other game objects.
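
Sticking with the conventions of the sketches above (the moves table, and direction 0 = up counting clockwise), the chase and the fire button might look roughly like this – speeds and image names are assumptions:

import math

def moveEnemy():
    # Angle from the enemy to the player, quantised to 8 directions
    # (atan2 gives 0 pointing right, so shift by 90 to make 0 = up).
    angle = math.degrees(math.atan2(player.y - enemy.y,
                                    player.x - enemy.x))
    enemy.direction = round((angle + 90) / 45) % 8
    enemy.image = 'enemy' + str(enemy.direction)
    dx, dy = moves[enemy.direction]     # moves[] is the world table,
    enemy.x -= dx * 2                   # so negate it to travel in
    enemy.y -= dy * 2                   # the facing direction

bullets = []

def fire():
    bullet = Actor('bullet', player.pos)
    bullet.direction = player.direction
    bullets.append(bullet)              # each one is moved in update()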

The last thing is to detect bullet hits. We do a quick point collision check and if there’s a match, we create an explosion Actor and respawn the enemy somewhere else. For this sample, we haven’t got any housekeeping code to remove old bullet Actors, which ought to be done if you don’t want the list to get really long, but that’s about all you need: you have yourself a Time Pilot clone!

Here’s Mark’s code for a Time Pilot-style free-scrolling shooter. To get it running on your system, you’ll need to install Pygame Zero. And to download the full code and assets, head here.

Get your copy of Wireframe issue 41

You can read more features like this one in Wireframe issue 41, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 41 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate Time Pilot’s free-scrolling action | Wireframe #41 appeared first on Raspberry Pi.

Cloudflare Workers Announces Broad Language Support

Post Syndicated from Cody Koeninger original https://blog.cloudflare.com/cloudflare-workers-announces-broad-language-support/

We initially launched Cloudflare Workers with support for JavaScript and languages that compile to WebAssembly, such as Rust, C, and C++. Since then, Cloudflare and the community have improved the usability of TypeScript on Workers. But we haven’t talked much about the many other popular languages that compile to JavaScript. Today, we’re excited to announce support for Python, Scala, Kotlin, Reason and Dart.

You can build applications on Cloudflare Workers using your favorite language starting today.

Getting Started

Getting started is as simple as installing Wrangler, then running generate for the template for your chosen language: Python, Scala, Kotlin, Dart, or Reason. For Python, this looks like:

wrangler generate my-python-project https://github.com/cloudflare/python-worker-hello-world

Follow the installation instructions in the README inside the generated project directory, then run wrangler publish. You can see the output of your Worker at your workers.dev subdomain, e.g. https://my-python-project.cody.workers.dev/. You can sign up for a free Workers account if you don’t have one yet.

That’s it. It is really easy to write in your favorite languages. But, this wouldn’t be a very compelling blog post if we left it at that. Now, I’ll shift the focus to how we added support for these languages and how you can add support for others.

How it all works under the hood

Language features are important. For instance, it’s hard to give up the safety and expressiveness of pattern matching once you’ve used it. Familiar syntax matters to us as programmers.

You may also have existing code in your preferred language that you’d like to reuse. Just keep in mind that the advantages of running on V8 come with the limitation that if you use libraries that depend on native code or language-specific VM features, they may not translate to JavaScript. WebAssembly may be an option in that case. But for memory-managed languages you’re usually better off compiling to JavaScript, at least until the story around garbage collection for WebAssembly stabilizes.

I’ll walk through how the Worker language templates are made using a representative example of a dynamically typed language, Python, and a statically typed language, Scala. If you want to follow along, you’ll need to have Wrangler installed and configured with your Workers account. If it’s your first time using Workers it’s a good idea to go through the quickstart.

Dynamically typed languages: Python

You can generate a starter “hello world” Python project for Workers by running

wrangler generate my-python-project https://github.com/cloudflare/python-worker-hello-world

Wrangler will create a my-python-project directory and helpfully remind you to configure your account_id in the wrangler.toml file inside it.  The README.md file in the directory links to instructions on setting up Transcrypt, the Python to JavaScript compiler we’re using. If you already have Python 3.7 and virtualenv installed, this just requires running

cd my-python-project
virtualenv env
source env/bin/activate
pip install transcrypt
wrangler publish

The main requirement for compiling to JavaScript on Workers is the ability to produce a single js file that fits in our bundle size limit of 1MB. Transcrypt adds about 70k for its Python runtime in this case, which is well within that limit. But by default, running Transcrypt on a Python file will produce multiple JS and source map files in a __target__ directory. Thankfully, Wrangler has built-in support for webpack. There’s a webpack loader for Transcrypt, making it easy to produce a single file. See the webpack.config.js file for the setup.

The point of all this is to run some Python code, so let’s take a look at index.py:

def handleRequest(request):
   return __new__(Response('Python Worker hello world!', {
       'headers' : { 'content-type' : 'text/plain' }
   }))

addEventListener('fetch', (lambda event: event.respondWith(handleRequest(event.request))))

In most respects this is very similar to any other Worker hello world, just in Python syntax. Dictionary literals take the place of JavaScript objects, lambda is used instead of an anonymous arrow function, and so on. If using __new__ to create instances of JavaScript classes seems awkward, the Transcrypt docs discuss an alternative.

Clearly, addEventListener is not a built-in Python function, it’s part of the Workers runtime. Because Python is dynamically typed, you don’t have to worry about providing type signatures for JavaScript APIs. The downside is that mistakes will result in failures when your Worker runs, rather than when Transcrypt compiles your code. Transcrypt does have experimental support for some degree of static checking using mypy.

Statically typed languages: Scala

You can generate a starter “hello world” Scala project for Workers by running

wrangler generate my-scala-project https://github.com/cloudflare/scala-worker-hello-world

The Scala to JavaScript compiler we’re using is Scala.js. It has a plugin for the Scala build tool, so installing sbt and a JDK is all you’ll need.

Running sbt fullOptJS in the project directory will compile your Scala code to a single index.js file. The build configuration in build.sbt is set up to output to the root of the project, where Wrangler expects to find an index.js file. After that you can run wrangler publish as normal.

Scala.js uses the Google Closure Compiler to optimize for size when running fullOptJS. For the hello world, the file size is 14k. A more realistic project involving async fetch weighs in around 100k, still well within Workers limits.

In order to take advantage of static type checking, you’re going to need type signatures for the JavaScript APIs you use. There are existing Scala signatures for fetch and service worker related APIs. You can see those being imported in the entry point for the Worker, Main.scala:

import org.scalajs.dom.experimental.serviceworkers.{FetchEvent}
import org.scalajs.dom.experimental.{Request, Response, ResponseInit}
import scala.scalajs.js

The import of scala.scalajs.js allows easy access to Scala equivalents of JavaScript types, such as js.Array or js.Dictionary. The remainder of Main looks fairly similar to a TypeScript Worker hello world, with syntactic differences such as Unit instead of Void and square brackets instead of angle brackets for type parameters:

object Main {
  def main(args: Array[String]): Unit = {
    Globals.addEventListener("fetch", (event: FetchEvent) => {
      event.respondWith(handleRequest(event.request))
    })
  }

  def handleRequest(request: Request): Response = {
    new Response("Scala Worker hello world", ResponseInit(
        _headers = js.Dictionary("content-type" -> "text/plain")))
  }
}  

Request, Response and FetchEvent are defined by the previously mentioned imports. But what’s this Globals object? There are some Worker-specific extensions to JavaScript APIs. You can handle these in a statically typed language either by automatically converting existing TypeScript type definitions for Workers or by writing type signatures for the features you want to use. Writing the type signatures isn’t hard, and it’s good to know how to do it, so I included an example in Globals.scala:

import scalajs.js
import js.annotation._

@js.native
@JSGlobalScope
object Globals extends js.Object {
  def addEventListener(`type`: String, f: js.Function): Unit = js.native
}

The annotation @js.native indicates that the implementation is in existing JavaScript code, not in Scala. That’s why the body of the addEventListener definition is just js.native. In a JavaScript Worker you’d call addEventListener as a top-level function in global scope. Here, the @JSGlobalScope annotation indicates that the function signatures we’re defining are available in the JavaScript global scope.

You may notice that the type of the function passed to addEventListener is just js.Function, rather than specifying the argument and return types. If you want more type safety, this could be done as js.Function1[FetchEvent, Unit].  If you’re trying to work quickly at the expense of safety, you could use def addEventListener(any: Any*): Any to allow anything.

For more information on defining types for JavaScript interfaces, see the Scala.js docs.

Using Workers KV and async Promises

Let’s take a look at a more realistic example using Workers KV and asynchronous calls. The idea for the project is our own HTTP API to store and retrieve text values. For simplicity’s sake, I’m using the first slash-separated component of the path for the key, and the second for the value. Usage of the finished project will look like PUT /meaning of life/42 or GET /meaning of life/.

The first thing I need is to add type signatures for the parts of the KV API that I’m using, in Globals.scala. My KV namespace binding in wrangler.toml is just going to be named KV, resulting in a corresponding global object:

object Globals extends js.Object {
  def addEventListener(`type`: String, f: js.Function): Unit = js.native
  
  val KV: KVNamespace = js.native
}
Once deployed, exercising the finished Worker from the command line looks like this:

bash$ curl -w "\n" -X PUT 'https://scala-kv-example.cody.workers.dev/meaning of life/42'

bash$ curl -w "\n" -X GET 'https://scala-kv-example.cody.workers.dev/meaning of life/'
42

So what’s the definition of the KVNamespace type? It’s an interface, so it becomes a Scala trait with a @js.native annotation. The only methods I need to add right now are the simple versions of KV.get and KV.put that take and return strings. The return values are asynchronous, so they’re wrapped in a js.Promise. I’ll make that wrapped string a type alias, KVValue, just in case we want to deal with the array or stream return types in the future:

object KVNamespace {
  type KVValue = js.Promise[String]
}

@js.native
trait KVNamespace extends js.Object {
  import KVNamespace._
  
  def get(key: String): KVValue = js.native
  
  def put(key: String, value: String): js.Promise[Unit] = js.native
}

With type signatures complete, I’ll move on to Main.scala and how to handle interaction with JavaScript Promises. It’s possible to use js.Promise directly, but I’d prefer to use Scala semantics for asynchronous Futures. The methods toJSPromise and toFuture from js.JSConverters can be used to convert back and forth:

  def get(key: String): Future[Response] = {
    Globals.KV.get(key).toFuture.map { (value: String) =>
        new Response(value, okInit)
    } recover {
      case err =>
        new Response(s"error getting a value for '$key': $err", errInit)
    }
  }

The function for putting values makes similar use of toFuture to convert the return value from KV into a Future. I use map to transform the value into a Response, and recover to handle failures. If you prefer async / await syntax instead of using combinators, you can use scala-async.

Finally, the new definition for handleRequest is a good example of how pattern matching makes code more concise and less error-prone at the same time. We match on exactly the combinations of HTTP method and path components that we want, and default to an informative error for any other case:

  def handleRequest(request: Request): Future[Response] = {
    (request.method, request.url.split("/")) match {
      case (HttpMethod.GET, Array(_, _, _, key)) =>
        get(key)
      case (HttpMethod.PUT, Array(_, _, _, key, value)) =>
        put(key, value)
      case _ =>
        Future.successful(
          new Response("expected GET /key or PUT /key/value", errInit))
    }
  }

You can get the complete code for this example by running

wrangler generate projectname https://github.com/cloudflare/scala-worker-kv

How to contribute

I’m a fan of programming languages, and will continue to add more Workers templates. You probably know your favorite language better than I do, so pull requests are welcome for a simple hello world or more complex example.

And if you’re into programming languages check out the latest language rankings from RedMonk where Python is the first non-Java or JavaScript language ever to place in the top two of these rankings.

Stay tuned for the rest of Serverless Week!

This Raspberry Pi–powered setup improves home brewing

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/this-raspberry-pi-powered-setup-improves-home-brewing/

We spied this New Orleans–based, Raspberry Pi–powered home brewing analysis setup, and we were interested in how the project could help other at-home brewers perfect their craft.

Raspberry Pi in a case with fan, neatly tucked away on a shelf in the Danger Shed

When you’re making beer, you want the yeast to eat up the sugars and leave alcohol behind. To check whether this is happening, you need to be able to track changes in gravity, known as ‘gravity curves’. You also have to do yeast cell counts, and you need to be able to tell when your beer has finished fermenting.

“We wanted a way to skip the paper and pencil and instead input the data directly into the software. Enter the Raspberry Pi!”

Patrick Murphy

Patrick Murphy and co. created a piece of software called Aleproof which allows you to monitor all of this stuff remotely. But before rolling it out, they needed somewhere to test that it works. Enter the ‘Danger Shed’, where they ran Aleproof on Raspberry Pi.

The Danger Shed benefits from a fancy light-changing fan for the Raspberry Pi

A Raspberry Pi 3 Model B+ runs their Python-based program on Raspberry Pi OS and shares its intel via a mounted monitor.

Here’s what Patrick had to say about what they’re up to in the Danger Shed and why they needed a Raspberry Pi:

The project uses PyCharm to run the Python-based script on the Raspberry Pi OS

“I am the founder and owner of Arithmech, a small software company that develops Python applications for brewers. Myself and a few buddies (all of us former Army combat medics) started our own brewing project called Danger Shed Ales & Mead to brew and test out the software on real-world data. We brew in the shed and record data on paper as we go, then enter the data into our software at a later time.”

Look how neat and out of the way our tiny computer is

“We wanted a way to skip the paper and pencil and instead input the data directly into the software. Enter the Raspberry Pi! The shed is small, hot, has leaks, and is generally a hostile place for a full-size desktop computer. Raspberry Pi solves our problem in multiple ways: it’s small, portable, durable (in a case), and easily cooled. But on top of that, we are able to run the code using PyCharm, enter data throughout the brewing process, and fix bugs all from the shed!”

The Raspberry Pi in its case inc. fan

“The Raspberry Pi made it easy for us to set up our software and run it as a stand-alone brewing software station.”

Productivity may have slowed when Patrick, Philip, and John remembered you can play Minecraft on the Raspberry Pi

The post This Raspberry Pi–powered setup improves home brewing appeared first on Raspberry Pi.

ICYMI: Serverless Q2 2020

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/icymi-serverless-q2-2020/

Welcome to the 10th edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

In case you missed our last ICYMI, check out what happened last quarter here.

AWS Lambda

AWS Lambda functions can now mount an Amazon Elastic File System (EFS). EFS is a scalable and elastic NFS file system storing data within and across multiple Availability Zones (AZ) for high availability and durability. In this way, you can use a familiar file system interface to store and share data across all concurrent execution environments of one, or more, Lambda functions. EFS supports full file system access semantics, such as strong consistency and file locking.

Using different EFS access points, each Lambda function can access different paths in a file system, or use different file system permissions. You can share the same EFS file system with Amazon EC2 instances, containerized applications using Amazon ECS and AWS Fargate, and on-premises servers.

Learn how to create an Amazon EFS-mounted Lambda function using the AWS Serverless Application Model in Sessions With SAM Episode 10.

With our recent launch of the .NET Core 3.1 AWS Lambda runtime, we’ve also released version 2.0.0 of the PowerShell module AWSLambdaPSCore. The new version now supports PowerShell 7.

Amazon EventBridge

At AWS re:Invent 2019, we introduced a preview of Amazon EventBridge schema registry and discovery. This is a way to store the structure of the events (the schema) in a central location. It can simplify using events in your code by generating the code to process them for Java, Python, and TypeScript. In April, we announced general availability of EventBridge Schema Registry.

We also added support for resource policies. Resource policies allow a schema registry to be shared across different AWS accounts and organizations. In this way, developers on different teams can search for and use any schema that another team has added to the shared registry.

Ben Smith, AWS Serverless Developer Advocate, published a guide on how to capture user events and monitor user behavior using the Amazon EventBridge partner integration with Auth0. This enables better insight into your application to help deliver a more customized experience for your users.

AWS Step Functions

In May, we launched a new AWS Step Functions service integration with AWS CodeBuild. CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces packages that are ready for deployment. Now, during the execution of a state machine, you can start or stop a build, get build report summaries, and delete past build executions records.

With the new AWS CodePipeline support to invoke Step Functions you can customize your delivery pipeline with choices, external validations, or parallel tasks. Each of those tasks can now call CodeBuild to create a custom build following specific requirements. Learn how to build a continuous integration workflow with Step Functions and AWS CodeBuild.

Rob Sutter, AWS Serverless Developer Advocate, has published a video series on Step Functions. We’ve compiled a playlist on YouTube to help you on your serverless journey.

AWS Amplify

In April, the AWS Amplify Framework team announced that they have rearchitected the Amplify UI component library to enable JavaScript developers to easily add authentication scenarios to their web apps. The authentication components include numerous improvements over previous versions. These include the ability to automatically sign in users after sign-up confirmation, better customization, and improved accessibility.

Amplify also announced the availability of Amplify Framework iOS and Amplify Framework Android libraries and tools. These help mobile application developers to easily build secure and scalable cloud-powered applications. Previously, mobile developers relied on a combination of tools and SDKs along with the Amplify CLI to create and manage a backend.

These new native libraries are oriented around use cases such as authentication, data storage and access, and machine learning predictions. They provide a declarative interface that enables you to programmatically apply best practices with abstractions.

A mono-repository is a single repository that contains more than one logical project, each in its own subfolder. Monorepo support is now available for the AWS Amplify Console, allowing developers to connect Amplify Console to a sub-folder in your mono-repository. Learn how to set up continuous deployment and hosting on a monorepo with the Amplify Console.

Amazon Keyspaces (for Apache Cassandra)

Amazon Managed Apache Cassandra Service (MCS) is now generally available under the new name: Amazon Keyspaces (for Apache Cassandra). Amazon Keyspaces is built on Apache Cassandra and can be used as a fully managed serverless database. Your applications can read and write data from Amazon Keyspaces using your existing Cassandra Query Language (CQL) code, with little or no changes. Danilo Poccia explains how to use Amazon Keyspaces with API Gateway and Lambda in this launch post.

AWS Glue

In April we extended AWS Glue jobs, based on Apache Spark, to run continuously and consume data from streaming platforms such as Amazon Kinesis Data Streams and Apache Kafka (including the fully-managed Amazon MSK). Learn how to manage a serverless extract, transform, load (ETL) pipeline with Glue in this guide by Danilo Poccia.

Serverless posts

Our team is always working to build and write content to help our customers better understand all our serverless offerings. Here is a list of the latest published to the AWS Compute Blog this quarter.

Introducing the new serverless LAMP stack

Ben Smith, AWS Serverless Developer Advocate, introduces the Serverless LAMP stack. He explains how to use serverless technologies with PHP. Learn about the available tools, frameworks and strategies to build serverless applications, and why now is the right time to start.


Building a location-based, scalable, serverless web app

James Beswick, AWS Serverless Developer Advocate, walks through building a location-based, scalable, serverless web app. Ask Around Me is an example project that allows users to ask questions within a geofence to create an engaging, community-driven experience.

Building well-architected serverless applications

Julian Wood, AWS Serverless Developer Advocate, published two blog series on building well-architected serverless applications. Learn how to better understand application health and lifecycle management.

Device hacking with serverless

Go beyond the browser with these creative and physical projects. Moheeb Zara, AWS Serverless Developer Advocate, published several serverless powered device hacks, all using off the shelf parts.


Tech Talks and events

We hold AWS Online Tech Talks covering serverless topics throughout the year. You can find these in the serverless section of the AWS Online Tech Talks page. We also regularly join in on podcasts and record short videos, so you can learn in quick, bite-sized chunks.

Here are the highlights from Q2.

Innovator Island Workshop

Learn how to build a complete serverless web application for a popular theme park called Innovator Island. James Beswick created a video series to walk you through this popular workshop at your own pace.

Serverless First Function

In May, we held a new virtual event series, the Serverless-First Function, to help you and your organization get the most out of the cloud. The first event, on May 21, included sessions from Amazon CTO, Dr. Werner Vogels, and VP of Serverless at AWS, David Richardson. The second event, May 28, was packed with sessions with our AWS Serverless Developer Advocate team. Catch up on the AWS Twitch channel.

Live streams

The AWS Serverless Developer Advocate team hosts several weekly livestreams on the AWS Twitch channel covering a wide range of topics. You can catch up on all our past content, including workshops, on the AWS Serverless YouTube channel.

Eric Johnson hosts “Sessions with SAM” every Thursday at 10AM PST. Each week, Eric shows how to use SAM to solve different serverless challenges. He explains how to use SAM templates to build powerful serverless applications. Catch up on the last few episodes.

James Beswick, AWS Serverless Developer Advocate, has compiled a round-up of all his content from Q2. He has plenty of videos ranging from beginner to advanced topics.

AWS Serverless Heroes

We’re pleased to welcome Kyuhyun Byun and Serkan Özal to the growing list of AWS Serverless Heroes. The AWS Hero program is a selection of worldwide experts that have been recognized for their positive impact within the community. They share helpful knowledge and organize events and user groups. They’re also contributors to numerous open-source projects in and around serverless technologies.

Still looking for more?

The Serverless landing page has much more information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more getting started tutorials.

Follow the AWS Serverless team on our new LinkedIn page, where we share all the latest news and events. You can also follow all of us on Twitter to see the latest news, follow conversations, and interact with the team.

Chris Munns: @chrismunns
Eric Johnson: @edjgeek
James Beswick: @jbesw
Moheeb Zara: @virgilvox
Ben Smith: @benjamin_l_s
Rob Sutter: @rts_rob
Julian Wood: @julian_wood

Code Jetpac’s rocket building action | Wireframe #40

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-jetpacs-rocket-building-action-wireframe-40/

Pick up parts of a spaceship, fuel it up, and take off in Mark Vanstone’s Python and Pygame Zero rendition of a ZX Spectrum classic

The original Jetpac, in all its 8-bit ZX Spectrum glory

For ZX Spectrum owners, there was something special about waiting for a game to load, with the sound of zeros and ones screeching from the cassette tape player next to the computer. When the loading screen – an image of an astronaut and Ultimate Play the Game’s logo – appeared, you knew the wait was going to be worthwhile. Created by brothers Chris and Tim Stamper in 1983, Jetpac was one of the first hits for their studio, Ultimate Play the Game. The game features the hapless astronaut Jetman, who must build and fuel a rocket from the parts dotted around the screen, all the while avoiding or shooting swarms of deadly aliens.

This month’s code snippet will provide the mechanics of collecting the ship parts and fuel to get Jetman’s spaceship to take off.  We can use the in-built Pygame Zero Actor objects for all the screen elements and the Actor collision routines to deal with gravity and picking up items. To start, we need to initialise our Actors. We’ll need our Jetman, the ground, some platforms, the three parts of the rocket, some fire for the rocket engines, and a fuel container. The way each Actor behaves will be determined by a set of lists. We have a list for objects with gravity, objects that are drawn each frame, a list of platforms, a list of collision objects, and the list of items that can be picked up.
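
In sketch form, that setup might look like this (the image names and positions are invented):

jetman = Actor('jetman', (400, 500))
ground = Actor('ground', (400, 580))
platforms = [Actor('platform', (150, 300)), Actor('platform', (450, 200))]
rocketParts = [Actor('rocket' + str(n)) for n in range(3)]
fuel = Actor('fuel', (600, 0))

# Behaviour comes from which lists an Actor belongs to.
gravityList = [jetman, fuel] + rocketParts    # things that fall
collideList = [ground] + platforms            # things you land on
drawList = [ground] + platforms + rocketParts + [fuel, jetman]
pickupList = rocketParts + [fuel]             # things Jetman can carry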

Jetman jumps inside the rocket and is away. Hurrah!

Our draw() function is straightforward: it loops through the list of items in the draw list, then draws a couple of conditional elements afterwards. The update() function is where all the action happens: we check for keyboard input to move Jetman around, apply gravity to everything on the gravity list, check for collisions with the platform list, pick up the next item if Jetman is touching it, apply any thrust to Jetman, and move any items Jetman is holding so they travel with him. When that’s all done, we can check whether refuelling levels have reached the point where Jetman can enter the rocket and blast off.

If you look at the helper functions checkCollisions() and checkTouching(), you’ll see that they use different methods of collision detection. The first checks for a collision with a specified point, so we can detect collisions with the top or bottom of an Actor; the second is a rectangle, or bounding-box, collision, registering a hit whenever the bounding boxes of two Actors intersect. The other helper function, applyGravity(), makes everything on the gravity list fall downward until the base of the Actor hits something on the collide list.
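
Sketches of two of those helpers (the fall speed and probe point are assumptions; Pygame Zero Actors expose pygame’s Rect API, which does the heavy lifting):

def applyGravity():
    for item in gravityList:
        # Probe one pixel below the Actor's base and only let it fall
        # if that point isn't inside anything on the collide list.
        below = (item.x, item.bottom + 1)
        if not any(c.collidepoint(below) for c in collideList):
            item.y += 2

def checkTouching(a, b):
    # Bounding-box test between two Actors.
    return a.colliderect(b)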

So that’s about it: assemble a rocket, fill it with fuel, and lift off. The only thing that needs adding is a load of pesky aliens and a way to zap them with a laser gun.

Here’s Mark’s Jetpac code. To get it running on your system, you’ll need to install Pygame Zero. And to download the full code and assets, head here.

Get your copy of Wireframe issue 40

You can read more features like this one in Wireframe issue 40, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 40 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code Jetpac’s rocket building action | Wireframe #40 appeared first on Raspberry Pi.

Be a better Scrabble player with a Raspberry Pi High Quality Camera

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/be-a-better-scrabble-player-with-a-raspberry-pi-high-quality-camera/

One of our fave makers, Wayne from Devscover, got a bit sick of losing at Scrabble (and his girlfriend was likely raging at being stuck in lockdown with a lesser opponent). So he came up with a Raspberry Pi–powered solution!

Using a Raspberry Pi High Quality Camera and a bit of Python, you can quickly figure out the highest-scoring word your available Scrabble tiles allow you to play.

Hardware

  • Raspberry Pi 3B
  • Compatible touchscreen
  • Raspberry Pi High Quality Camera
  • Power supply for the touchscreen and Raspberry Pi
  • Scrabble board

You don’t have to use a Raspberry Pi 3B, but you do need a model that has both display and camera ports. Wayne also chose to use an official Raspberry Pi Touch Display because it can power the computer, but any screen that can talk to your Raspberry Pi should be fine.

Software

Firstly, the build takes a photo of your Scrabble tiles using raspistill.

Next, a Python script processes the image of your tiles and then relays the highest-scoring word you can play to your touchscreen.

The key bit of code here is twl, a Python script that contains every possible word you can play in Scrabble.
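
The real project leans on twl for its dictionary; as a toy illustration of the ‘highest-scoring word from your tiles’ idea, with a tiny stand-in word list:

from collections import Counter

# Standard Scrabble letter values.
SCORES = {letter: value for letters, value in [
    ('aeioulnstr', 1), ('dg', 2), ('bcmp', 3), ('fhvwy', 4),
    ('k', 5), ('jx', 8), ('qz', 10)] for letter in letters}

WORDS = {'cat', 'act', 'cart', 'crate', 'trace'}   # stand-in word list

def best_word(rack):
    rack_count = Counter(rack)
    # A word is playable if it needs no letter more often than the
    # rack supplies it.
    playable = [w for w in WORDS if not Counter(w) - rack_count]
    return max(playable, default=None,
               key=lambda w: sum(SCORES[c] for c in w))

print(best_word('tracek'))   # 'crate' or 'trace' -- both score 7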

From 4.00 minutes into his build video, Wayne walks you through what each bit of code does and how he made it work for this project, including how he installed and used the Scrabble dictionary.

Fellow Scrabble-strugglers have suggested sneaky upgrades in the comments of Wayne’s YouTube video, such as having the build relay answers to a more discreet smartwatch.

No word yet on how the setup deals with the blank Scrabble tiles; those things are like gold dust.

In case you haven’t met the Raspberry Pi High Quality Camera yet, Wayne also did this brilliant unboxing and tutorial video for our newest piece of hardware.

And for more projects from Devscover, check out this great Amazon price tracker using a Raspberry Pi Zero W, and make sure to subscribe to the channel for more content.

The post Be a better Scrabble player with a Raspberry Pi High Quality Camera appeared first on Raspberry Pi.

Learning with Raspberry Pi — robotics, a Master’s degree, and beyond

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/learning-with-raspberry-pi-robotics-a-masters-degree-and-beyond/

Meet Callum Fawcett, who shares his journey from tinkering with the first Raspberry Pi while he was at school, to a Master’s degree in computer science and a real-life job in programming. We also get to see some of the awesome projects he’s made along the way.

I first decided to get a Raspberry Pi at the age of 14. I had already started programming a little bit before and found that I really enjoyed the language Python. At the time the first Raspberry Pi came out, my History teacher told us about them and how they would be a great device to use to learn programming. I decided to ask for one to help me learn more. I didn’t really know what I would use it for or how it would even work, but after a little bit of help at the start, I quickly began making small programs in Python. I remember some of my first programs being very simple dictionary-type programs in which I would match English words to German to help with my German homework.

Learning Linux, C++, and Python

Most of my learning was done through two sources. I learnt Linux and how the terminal worked using online resources such as Stack Overflow. I would have a problem that I needed to solve, look up solutions online, and try out commands that I found. This was perhaps the hardest part of learning how to use a Raspberry Pi, as it was something I had never done before, but it really helped me in later years when I would use Linux more than Windows. For learning programming, I preferred to use books. I had a book for C++ and a book for Python that I would work through. These were game-based books, so many of the fun projects that I did were simple text-based games where you typed in responses to questions.

A family robotics project

The first robot Callum made using a Raspberry Pi

By far the coolest project I did with the Raspberry Pi was to build a small robot (shown above). This was a joint project between myself and my dad. He sorted out the electronics and I programmed the robot. It was a great opportunity to learn about robotics and refine my programming skills. By the end, the robot was capable of moving around by itself, driving into objects, and then reversing and trying a new direction. It was almost like an unintelligent Roomba that couldn’t hoover, but I spent many hours improving small bits and pieces to make it as easy to use as possible. My one wish that I never managed to achieve with my robot was allowing it to map out its surroundings. This was a very ambitious project at the time, since I was still quite inexperienced in programming. The biggest problem with this was calibrating the robot’s turning circle, which was never consistent so it was very hard to have the robot know where in the room it was.

Sense HAT maze game

Another fun project that I worked on used the Sense HAT developed for the Astro Pi computers for use on the International Space Station. Using this, I was able to make a memory maze game (shown below), in which a player is shown a maze for several seconds and then has to navigate that maze from memory by shaking the device. This was my first introduction to using more interactive types of input, and this eventually led to my final-year project, which used these interesting interactions to develop another way of teaching.

Learning programming without formal lessons

I have now just finished my Master’s degree in computer science at the University of Bristol. Before going to university, I had no experience of being taught programming in a formal environment. It was not a taught subject at my secondary school or sixth form. I wanted to get more people at my school interested in this area of study though, which I did by running a coding club for people. I would help others debug their code and discuss interesting problems with them. The reason that I chose to study computer science is largely because of my experiences with Raspberry Pi and other programming I did in my own time during my teenage years. I likely would have studied history if it weren’t for the programming I had done by myself making robots and other games.

Raspberry Pi has continued to play a part in my degree and extra-curricular activities; I used them in two large projects during my time at university and used a similar device in my final project. My robot experience also helped me to enter my university’s ‘Robot Wars’ competition which, though we never won, was a lot of fun.

A tool for learning and a device for industry

Having a Raspberry Pi is always useful during a hackathon, because it’s such a versatile component. Tech like Raspberry Pi will always be useful for beginners to learn the basics of programming and electronics, but these computers are also becoming more and more useful for people with more experience to make fun and useful projects. I could see tech like Raspberry Pi being used in the future to help quickly prototype many types of electronic devices and, as they become more powerful, even being used as an affordable way of controlling many types of robots, which will become more common in the future.

Our guest blogger Callum

Now I am going on to work on programming robot control systems at Ocado Technology. My experiences of robot building during my years before university played a large part in this decision. Already, robots are becoming a huge part of society, and I think they are only going to become more prominent in the future. Automation through robots and artificial intelligence will become one of the most important tools for humanity during the 21st century, and I look forward to being a part of that process. If it weren’t for learning through Raspberry Pi, I certainly wouldn’t be in this position.

Cheers for your story, Callum! Has tinkering with our tiny computer inspired your educational or professional choices? Let us know in the comments below. 

The post Learning with Raspberry Pi — robotics, a Master’s degree, and beyond appeared first on Raspberry Pi.

Code Gauntlet’s four-player co-op mode | Wireframe #39

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-gauntlets-four-player-co-op-mode-wireframe-39/

Four players dungeon crawling at once? Mark Vanstone shows you how to recreate Gauntlet’s co-op mode in Python and Pygame Zero.

Players collected items while battling their way through dungeons. Shooting food was a definite faux pas.

Atari’s Gauntlet was an eye-catching game, not least because it allowed four people to explore its dungeons together. Each player could choose one of four characters, each with its own abilities – there was a warrior, a Valkyrie, a wizard, and an elf – and surviving each dungeon required slaughtering enemies and the constant gathering of food, potions, and keys that unlocked doors and exits.

Designed by Ed Logg, and loosely based on the tabletop RPG Dungeons & Dragons, as well as John Palevich’s 1983 dungeon crawler, Dandy, Gauntlet was a big success. It was ported to most of the popular home systems at the time, and Atari released a sequel arcade machine, Gauntlet II, in 1986.

Atari’s original arcade machine featured four joysticks, but our example will mix keyboard controls and gamepad inputs. Before we deal with the movement, we’ll need some characters and dungeon graphics. For this example, we can make our dungeon from a large bitmap image and use a collision map to prevent our characters from clipping through walls. We’ll also need graphics for the characters moving in eight different directions. Each direction has three frames of walking animation, which makes a total of 24 frames per character. We can use a Pygame Zero Actor object for each character and add a few extra properties to keep track of direction and the current animation frame. If we put the character Actors in a list, we can loop through the list to check for collisions, move the player, or draw them to the screen.

We now test input devices for movement controls, using Pygame Zero’s built-in keyboard object to check whether keys are pressed. For example, keyboard.left will return True if the left arrow key is being held down. We can use the arrow keys for one player and the WASD keys for the other keyboard player. If we register x and y movements separately, then if two keys are pressed – for example, up and left – we can read that as a diagonal movement. In this way, we can get all eight directions of movement from just four keys.
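
Here’s a rough sketch of how that reading might look in Pygame Zero; the helper name is ours, and keyboard is Pygame Zero’s built-in object:

def get_keyboard_direction(player_num):
    # Subtracting opposite keys gives -1, 0, or 1 on each axis.
    if player_num == 0:  # arrow keys
        dx = (1 if keyboard.right else 0) - (1 if keyboard.left else 0)
        dy = (1 if keyboard.down else 0) - (1 if keyboard.up else 0)
    else:                # WASD keys
        dx = (1 if keyboard.d else 0) - (1 if keyboard.a else 0)
        dy = (1 if keyboard.s else 0) - (1 if keyboard.w else 0)
    # Holding two keys at once (say, up and left) returns (-1, -1): a diagonal.
    return dx, dy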

For joystick or gamepad movement, we need to import the joystick module from Pygame. This provides us with methods to count the number of joystick or gamepad devices that are attached to the computer, and then initialise them for input. When we check for input from these devices, we just need to get the x-axis value and the y-axis value and then turn each into an integer. Joysticks and gamepads should return a number between -1 and 1 on each axis, so if we round that number, we will get the movement value we need.
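
As a hedged illustration, reading a pad might look like this; the function name is ours, and axes 0 and 1 are assumed to be the left stick, which is the usual convention:

import pygame

pygame.joystick.init()
joysticks = [pygame.joystick.Joystick(i)
             for i in range(pygame.joystick.get_count())]
for stick in joysticks:
    stick.init()

def get_pad_direction(stick):
    # Axes report a float between -1 and 1; rounding snaps it
    # to -1, 0, or 1, matching the keyboard values.
    return round(stick.get_axis(0)), round(stick.get_axis(1))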

We can work out the direction (and the image we need to use) of the character with a small lookup table of x and y values and translate that to a frame number cycling through those three frames of animation as the character walks. Then all we need to do before we move the character is check they aren’t going to collide with a wall or another character. And that’s it – we now have a four-player control system. As for adding enemy spawners, loot, and keys – well, that’s a subject for another time.
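
A minimal sketch of that lookup, assuming the extra direction and frame properties described above and a hypothetical frame-naming scheme for the 24 images:

DIRECTIONS = {(0, -1): 0, (1, -1): 1, (1, 0): 2, (1, 1): 3,
              (0, 1): 4, (-1, 1): 5, (-1, 0): 6, (-1, -1): 7}

def animate(player, dx, dy):
    if (dx, dy) in DIRECTIONS:
        player.direction = DIRECTIONS[(dx, dy)]
        player.frame = (player.frame + 1) % 3   # cycle the three walk frames
        # Hypothetical image names: e.g. "hero2_1" for direction 2, frame 1.
        player.image = "hero{}_{}".format(player.direction, player.frame)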

Here’s Mark’s code snippet. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code and assets, go here.

Get your copy of Wireframe issue 39

You can read more features like this one in Wireframe issue 39, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 39 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code Gauntlet’s four-player co-op mode | Wireframe #39 appeared first on Raspberry Pi.

Code Robotron: 2084’s twin-stick action | Wireframe #38

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-robotron-2084s-twin-stick-action-wireframe-38/

News flash! Before we get into our Robotron: 2084 code, we have some important news to share about Wireframe: as of issue 39, the magazine will be going monthly.

The new 116-page issue will be packed with more in-depth features, more previews and reviews, and more of the guides to game development that make the magazine what it is. The change means we’ll be able to bring you new subscription offers, and generally make the magazine more sustainable in a challenging global climate.

As for existing subscribers, we’ll be emailing you all to let you know how your subscription is changing, and we’ll have some special free issues on offer as a thank you for your support.

The first monthly issue will be out on 4 June, and subsequent editions will be published on the first Thursday of every month after that. You’ll be able to order a copy online, or you’ll find it in selected supermarkets and newsagents if you’re out shopping for essentials.

We now return you to our usual programming…

Move in one direction and fire in another with this Python and Pygame re-creation of an arcade classic. Raspberry Pi’s own Mac Bowley has the code.

Robotron: 2084 is often listed on ‘best game of all time’ lists, and has been remade and re-released for numerous systems over the years.

Robotron: 2084

Released back in 1982, Robotron: 2084 popularised the concept of the twin-stick shooter. It gave players two joysticks which allowed them to move in one direction while also shooting at enemies in another. Here, I’ll show you how to recreate those controls using Python and Pygame. We don’t have access to any sticks, only a keyboard, so we’ll be using the arrow keys for movement and WASD to control the direction of fire.

The movement controls use a global variable, a few if statements, and two built-in Pygame Zero functions: on_key_down and on_key_up. The on_key_down function is called when a key on the keyboard is pressed, so when the player presses the right arrow key, for example, I add a positive 1 to the x direction of the player. I add to the direction rather than setting it outright so that opposite keys can cancel each other out. The on_key_up function is called when a key is released. A key being released means the player doesn’t want to travel in that direction any more, so we do the opposite of what we did earlier: we take away the 1 or -1 we applied in the on_key_down function.

We repeat this process for each arrow key. Moving the player in the update() function is the last part of my movement; I apply a move speed and then use a playArea rect to clamp the player’s position.
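
Putting those pieces together, a minimal sketch might look like the following; player (an Actor), play_area (a Rect), and the speed value are assumptions rather than Mac’s actual code:

player_dir = [0, 0]   # x and y directions the player wants to move in
SPEED = 4             # assumed move speed

def on_key_down(key):
    if key == keys.RIGHT: player_dir[0] += 1
    if key == keys.LEFT:  player_dir[0] -= 1
    if key == keys.DOWN:  player_dir[1] += 1
    if key == keys.UP:    player_dir[1] -= 1

def on_key_up(key):
    # Undo exactly what on_key_down added for this key.
    if key == keys.RIGHT: player_dir[0] -= 1
    if key == keys.LEFT:  player_dir[0] += 1
    if key == keys.DOWN:  player_dir[1] -= 1
    if key == keys.UP:    player_dir[1] += 1

def update():
    player.x += player_dir[0] * SPEED
    player.y += player_dir[1] * SPEED
    # Clamp the player inside the assumed play area.
    player.x = max(play_area.left, min(play_area.right, player.x))
    player.y = max(play_area.top, min(play_area.bottom, player.y))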

The arena background and tank sprites were created in Piskel. Separate sprites for the tank allow the turret to rotate separately from the tracks.

Turn and fire

Now for the aiming and rotating. When my player aims, I want them to set the direction the bullets will fire, which functions like the movement. The difference this time is that when a player hits an aiming key, I set the direction directly rather than adjusting the values. If my player aims up, and then releases that key, the shooting will stop. Our next challenge is changing this direction into a rotation for the turret.

Actors in Pygame can be rotated in degrees, so I have to find a way of turning a pair of x and y directions into a rotation. To do this, I use the math module’s atan2 function to find the arc tangent of two points. The function returns a result in radians, so it needs to be converted. (You’ll also notice I had to adjust mine by 90 degrees. If you want to avoid having to do this, create a sprite that faces right by default.)
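
Something along these lines would do the conversion; the sign of dy is flipped because screen y runs downwards, and the 90-degree adjustment assumes a sprite drawn facing up:

import math

def aim_to_rotation(dx, dy):
    # atan2 returns radians; Actors want degrees.
    degrees = math.degrees(math.atan2(-dy, dx))
    return degrees - 90   # adjust for a sprite that faces up by default

# e.g. turret.angle = aim_to_rotation(aim_x, aim_y)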

To fire bullets, I’m using a flag called ‘shooting’ which, when set to True, causes my turret to turn and fire. My bullets are dictionaries; I could have used a class, but the only thing I need to keep track of is an actor and the bullet’s direction.

Here’s Mac’s code snippet, which creates a simple twin-stick shooting mechanic in Python. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code and assets, go here.

You can look at the update function and see how I’ve implemented a fire rate for the turret as well. You can edit the update function to take a single parameter, dt, which stores the time since the last frame. By adding these up, you can trigger a bullet at precise intervals and then reset the timer.
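
Here’s a hedged sketch of how the timer and the bullet dictionaries could fit together; shooting, turret, aim_x, aim_y, and the constants are assumed names, not Mac’s exact code:

bullets = []
FIRE_RATE = 0.2    # assumed seconds between shots
fire_timer = 0.0

def update(dt):
    global fire_timer
    fire_timer += dt
    if shooting and fire_timer >= FIRE_RATE:
        fire_timer = 0.0   # reset the timer after each shot
        # Each bullet is a dictionary: an Actor plus its direction.
        bullets.append({"actor": Actor("bullet", pos=turret.pos),
                        "dx": aim_x, "dy": aim_y})
    for bullet in bullets:
        bullet["actor"].x += bullet["dx"] * 10   # assumed bullet speed
        bullet["actor"].y += bullet["dy"] * 10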

This code is just a start – you could add enemies and maybe other player weapons to make a complete shooting experience.

Get your copy of Wireframe issue 38

You can read more features like this one in Wireframe issue 38, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 38 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code Robotron: 2084’s twin-stick action | Wireframe #38 appeared first on Raspberry Pi.

Learn at home: a guide for parents #2

Post Syndicated from Katie Gouskos original https://www.raspberrypi.org/blog/digital-making-at-home-parents-guide-python/

With millions of schools still in lockdown, parents have been telling us that they need help to support their children with learning computing at home. As well as providing loads of great content for young people, we’ve been working on support tutorials specifically for parents who want to understand and learn about the programs used in schools and in our resources.

If you don’t know your Scratch from your Trinket and your Python, we’ve got you!

Glen, Web Developer at the Raspberry Pi Foundation, and Maddie, aged 8

 

What are Python and Trinket all about?

In our last blog post for parents, we talked to you about Scratch, the programming language used in most primary schools. This time Mark, Youth Programmes Manager at the Raspberry Pi Foundation, takes you through how to use Trinket. Trinket is a free online platform that lets you write and run your code in any web browser. This is super useful because it means you don’t have to install any new software.

A parents’ introduction to Trinket

Trinket also lets you create public web pages and projects that can be viewed by anyone with the link to them. That means your child can easily share their coding creation with others, and for you that’s a good opportunity to talk to them about staying safe online and not sharing any personal information.

Lincoln, aged 10

Getting to know Python

We’ve also got an introduction to Python for you, from Mac, a Learning Manager on our team. He’ll guide you through what to expect from Python, which is a widely used text-based programming language. For many learners, Python is their first text-based language, because it’s very readable, and you can get things done with fewer lines of code than in many other programming languages. In addition, Python has support for ‘Turtle’ graphics and other features that make coding more fun and colourful for learners. Turtle is simply a Python feature that works like a drawing board, letting you control a turtle to draw anything you like using code.
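
If you fancy a quick taste before watching the video, this tiny program uses only Python’s built-in turtle module to draw a square:

import turtle

t = turtle.Turtle()
t.color("purple")
t.pensize(3)
for _ in range(4):    # four sides of a square
    t.forward(100)    # walk the turtle forward
    t.right(90)       # then turn a right angle
turtle.done()         # keep the window open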

A parents’ introduction to Python

Why not try out Mac’s suggestions of Hello world, Countdown timer, and Outfit recommender for yourself?

Python is used in lots of real-world software applications in industries such as aerospace, retail banking, insurance and healthcare, so it’s very useful for your children to learn it!

Parent diary: juggling homeschooling and work

Olympia is Head of Youth Programmes at the Raspberry Pi Foundation and also a mum to two girls aged 9 and 11. She is currently homeschooling them as well as working (and hopefully having the odd evening to herself!). Olympia shares her own experience of learning during lockdown and how her family are adapting to their new routine.

Parent diary: Juggling homeschooling and work

Digital Making at Home

To keep young people entertained and learning, we launched our Digital Making at Home series, which is free and accessible to everyone. New code-along videos are released every Monday, with different themes and projects for all levels of experience.

Code along live with the team on Wednesday 6 May at 14:00 BST / 9:00 EDT for a special session of Digital Making at Home

Sarah and Ozzy, aged 13

We want your feedback

We’ve been asking parents what they’d like to see as part of our initiative to support young people and parents. We’ve had some great suggestions so far! If you’d like to share your thoughts, you can email us at [email protected].

Sign up for our bi-weekly emails, tailored to your needs

Sign up now to start receiving free activities suitable to your child’s age and experience level, straight to your inbox. And let us know what you as a parent or guardian need help with, and what you’d like more or less of from us. 

PS: All of our resources are completely free. This is made possible thanks to the generous donations of individuals and organisations. Learn how you can help too!

 

The post Learn at home: a guide for parents #2 appeared first on Raspberry Pi.

Build a serverless Martian weather display with CircuitPython and AWS Lambda

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/build-a-serverless-martian-weather-display-with-circuitpython-and-aws-lambda/

Build a standalone digital weather display of Mars showing the latest images from the Mars Curiosity Rover.

This project uses an Adafruit PyPortal, an open-source IoT touch display. Traditionally, a microcontroller is programmed with firmware compiled using various specific toolchains. Fortunately, the PyPortal is programmed using CircuitPython, a lightweight version of Python that works on embedded hardware. You just copy your code to the PyPortal like you would to a thumb drive and it runs.

I deploy the backend, the part in the cloud that does all the heavy lifting, using the AWS Serverless Application Repository (SAR). The code on the PyPortal makes a REST call to the backend to handle the requests to the NASA Mars Rover Photos API and InSight: Mars Weather Service API. It then converts and resizes the image before returning the information to the PyPortal for display.

An Adafruit PyPortal displaying the latest images from the Mars Curiosity Rover and weather data from InSight Mars Lander.

Prerequisites

You need the following to complete the project:

Deploy the backend application

An architecture diagram of the serverless backend.

Using a serverless backend reduces the load on the PyPortal. The PyPortal makes a call to the backend API and receives a small JSON object with the relevant data. This allows you to change the logic of where and how to get the image and weather data without needing physical access to the device.

The backend API consists of an AWS Lambda function, written in Python, behind an Amazon API Gateway endpoint. When invoked, the FetchMarsData function makes requests to two separate NASA APIs. First it fetches the latest images from the Mars Curiosity Rover, typically from the previous day, and picks one at random. It resizes and converts the image to bitmap format before uploading to Amazon S3 with public read permissions. The PyPortal downloads the image from S3 later.

The function then calls the InSight: Mars Weather Service API. It retrieves the average air temperature, wind speed, pressure, season, solar day (sol), as well as the first and last timestamp of daily sampling. The API returns these values and the S3 image URL as a JSON object.

I use the AWS Serverless Application Model (SAM) to create the backend. While it can be deployed using the AWS SAM CLI, you can also deploy from the AWS Management Console:

  1. Generate a free NASA API key at api.nasa.gov. This is required to gain access to the NASA data APIs.
  2. Navigate to the aws-serverless-pyportal-mars-weather-display application in the Serverless Application Repository.
  3. Choose Deploy.
  4. On the next page, under Application Settings, enter the NasaApiKey parameter.

  5. Once complete, choose View CloudFormation Stack.

  6. Select the Outputs tab and make a note of the MarsApiUrl. This is required for configuring the PyPortal.

  7. Navigate to the MarsApiKey URL listed in the Outputs tab.

  8. Click Show to reveal the API key. Make a note of this. This is required for authenticating requests from the PyPortal to the MarsApiUrl.

PyPortal setup

  1. Follow these instructions from Adafruit to install the latest version of the CircuitPython bootloader. At the time of writing, the latest version is 5.2.0.
  2. Follow these instructions to install the latest Adafruit CircuitPython library bundle. I use bundle version 5.x.
  3. Insert the microSD card in the slot located on the back of the device.
  4. Optionally install the Mu Editor, a multi-platform code editor and serial debugger compatible with Adafruit CircuitPython boards. This can help if you need to troubleshoot issues.
  5. Optionally if you have a 3D printer at home, you can print a case for your PyPortal. This can protect your project while also being a great way to display it on a desk.

Code PyPortal

As with regular Python, CircuitPython does not need to be compiled to execute. Flashing new firmware on the PyPortal is as simple as copying a Python file and necessary assets over to a mounted volume. The bootloader runs code.py anytime the device starts or any files are updated.

  1. Use a USB cable to plug the PyPortal into your computer and wait until a new mounted volume CIRCUITPY is available.
  2. Download the project from GitHub. Inside the project, copy the contents of /circuit-python on to the CIRCUITPY volume.
  3. Inside the volume, open and edit the secrets.py file. Include your Wi-Fi credentials along with the MarsApiKey and MarsApiUrl API Gateway endpoint, which can be found under Outputs in the AWS CloudFormation stack created by the Serverless Application Repository.
  4. Save the file, and the device restarts. It takes a moment to connect to Wi-Fi and make the first request.
    Optionally, if you installed the Mu Editor, you can click on “Serial” to follow along with the device log.

Animated gif of the PyPortal device displaying a Mars rover image and Mars weather data.

Understanding how CircuitPython calls API Gateway

The main CircuitPython file is code.py. At the end of the file, the while loop periodically performs the operations necessary to display the photos from the Curiosity Rover and the InSight Mars lander weather data.

while True:
    data = callAPIEndpoint(secrets['mars_api_url'])
    downloadImage(data['image_url'])
    showDisplay(data['insight'], displayTime=60*interval_minutes)

First, it calls the API Gateway endpoint using the URL from the secrets.py file, and passes the returned JSON to helper functions. The callAPIEndpoint(url) function passes the MarsApiKey in the header and a timeout of 30 seconds to the wifi.get() method. The timeout is required for integrations with services like Lambda and API Gateway. Remember, the CircuitPython code is running on a microcontroller and sometimes must wait longer when making requests.

def callAPIEndpoint(mars_api_url):
    headers = {"x-api-key": secrets['mars_api_key']}
    response = wifi.get(mars_api_url, headers=headers, timeout=30)
    data = response.json()
    print("JSON Response: ", data)
    response.close()
    return data

The JSON object that is received by the PyPortal is defined in the handler of the Lambda function. In the GitHub project downloaded earlier, see src/app.py.

def lambda_handler(event, context):
    url = fetchRoverImage()
    imgData = fetchImageData(url)
    image_s3_url = resize_image(imgData)
    weatherData = getMarsInsightWeather()

    return {
        "statusCode": 200,
        "body": json.dumps({
            "image_url": image_s3_url,
            "insight": weatherData
        })
    }

Similar to the CircuitPython code, this uses helper functions to perform all the various operations needed to retrieve and craft the data. At completion, the returned JSON is passed as the response to the PyPortal.

A quick way to add a new property is to edit the Lambda function directly through the AWS Lambda console. Here, a key “hello” is added with a value “world”.

In the CircuitPython code.py file, the key is now available in the JSON response from API Gateway. The following prints the key value, which can be seen using the Mu Editor Serial debugger.

data = callAPIEndpoint(secrets['mars_api_url'])

print(data['hello'])

The Lambda function is packaged with the AWS Python SDK, boto3, which provides methods for interacting with a variety of AWS services. The Python Requests library is also included to make calls to the NASA APIs. Try exploring how to incorporate other services or APIs into your project. To understand how to modify the visual display on the PyPortal itself, see the displayio guide from Adafruit.
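
As a rough sketch of that resize-and-upload step (not the project’s actual code; the bucket name, key, and image size are assumptions, though 320×240 matches the PyPortal’s screen):

import io

import boto3
import requests
from PIL import Image

s3 = boto3.client("s3")
BUCKET_NAME = "example-mars-images"   # hypothetical bucket

def resize_and_upload(image_url):
    raw = requests.get(image_url, timeout=30).content
    img = Image.open(io.BytesIO(raw)).convert("RGB").resize((320, 240))
    buffer = io.BytesIO()
    img.save(buffer, format="BMP")    # the PyPortal displays bitmaps
    buffer.seek(0)
    s3.put_object(Bucket=BUCKET_NAME, Key="mars.bmp", Body=buffer,
                  ACL="public-read", ContentType="image/bmp")
    return "https://{}.s3.amazonaws.com/mars.bmp".format(BUCKET_NAME)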

Conclusion

I show how to build a “live” Martian weather display using an Adafruit PyPortal, CircuitPython, and AWS Serverless technologies. Whether this is your first time using hardware or a serverless backend in the AWS Cloud, this project is simplified by the use of CircuitPython and the Serverless Application Model.

I also show how to make a request to API Gateway from the PyPortal. I then craft a response in Lambda for the PyPortal. Since both use variants of the Python programming language, much of the syntax stays the same.

To learn more, explore other devices supported by CircuitPython and the variety of community contributed libraries. Combined with the breadth of AWS services, you can push the boundaries of creativity.

Code a homage to Lunar Lander | Wireframe #37

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-a-homage-to-lunar-lander-wireframe-37/

Shoot for the moon in our Python version of the Atari hit, Lunar Lander. Mark Vanstone has the code.

Atari’s cabinet featured a thrust control, two buttons for rotating, and an abort button in case it all went horribly wrong.

Lunar Lander

First released in 1979 by Atari, Lunar Lander was based on a concept created a decade earlier. The original 1969 game (actually called Lunar) was a text-based affair that involved controlling a landing module’s thrust to guide it safely down to the lunar surface; a later version, Moonlander, offered a more visual take on the same idea on the DEC VT50 graphics terminal.

Given that it appeared at the height of the late-seventies arcade boom, though, it was Atari’s coin-op that became the most recognisable version of Lunar Lander, arriving just after the tenth anniversary of the Apollo 11 moon landing. Again, the aim of the game was to use rotation and thrust controls to guide your craft, and gently set it down on a suitably flat platform. The game required efficient control of the lander, and extra points were awarded for parking successfully on more challenging areas of the landscape.

The arcade cabinet was originally going to feature a normal joystick, but this was changed to a double-stalked up-down lever providing variable levels of thrust. The player had to land the craft against the clock, with a finite amount of fuel, using the Altitude, Horizontal Speed, and Vertical Speed readouts at the top of the screen as a guide. Four levels of difficulty were built into the game, with adjustments to landing controls and landing areas.

Our homage to the classic Lunar Lander. Can you land without causing millions of dollars’ worth of damage?

Making the game

To write a game like Lunar Lander with Pygame Zero, we can replace the vector graphics with a nice pre-drawn static background and use that as a collision detection mechanism and altitude meter. If our background is just black where the Lander can fly and a different colour anywhere the landscape is, then we can test pixels using the Pygame function image.get_at() to see if the lander has landed. We can also test a line of pixels from the Lander down the Y-axis until we hit the landscape, which will give us the lander’s altitude.
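
A minimal sketch of that altitude test, assuming the landscape bitmap has been loaded as a Pygame surface (in Pygame Zero, the images loader can hand you one):

SPACE = (0, 0, 0, 255)   # black pixels are empty space

def altitude(landscape, x, y):
    # Scan down the Y-axis from (x, y) until we hit the landscape.
    alt = 0
    while (y + alt < landscape.get_height() - 1 and
           landscape.get_at((int(x), int(y) + alt)) == SPACE):
        alt += 1
    return alt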

The rotation controls of the lander are quite simple, as we can capture the left and right arrow keys and increase or decrease the rotation of the lander; however, when thrust is applied (by pressing the up arrow) things get a little more complicated. We need to remember which direction the thrust came from so that the craft will continue to move in that direction even if it is rotated, so we have a direction property attached to our lander object. A little gravity is applied to the position of the lander, and then we just need a little bit of trigonometry to work out the movement of the lander based on its speed and direction of travel.
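
In sketch form, with direction measured in degrees from straight up, and with the speed and direction properties described above (the gravity constant is an assumption):

import math

GRAVITY = 0.1   # assumed strength; tune to taste

def move_lander(lander):
    lander.y += GRAVITY                      # gravity pulls down each frame
    rad = math.radians(lander.direction)     # 0 degrees = straight up
    lander.x += math.sin(rad) * lander.speed
    lander.y -= math.cos(rad) * lander.speed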

To judge if the lander has been landed safely or rammed into the lunar surface, we look at the downward speed and angle of the craft as it reaches an altitude of 1. If the speed is sufficiently slow and the angle is near vertical, then we trigger the landed message, and the game ends. If the lander reaches zero altitude without these conditions met, then we register a crash. Other elements that can be added to this sample are things like a limited fuel gauge and variable difficulty levels. You might even try adding the sounds of the rocket booster noise featured on the original arcade game.
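
As for the landing check itself, it might look something like this in sketch form; the thresholds are assumptions you’d tune by playtesting:

MAX_LANDING_SPEED = 1.0   # assumed maximum safe descent speed
MAX_TILT = 10             # assumed degrees either side of vertical

def touchdown_result(lander, alt):
    if alt > 1:
        return "flying"
    slow_enough = lander.speed <= MAX_LANDING_SPEED
    upright = abs(lander.angle) <= MAX_TILT
    return "landed" if slow_enough and upright else "crashed"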

Engage

The direction of thrust could be done in several ways. In this case, we’ve kept it simple, with one directional value which gradually moves in a new direction when an alternative thrust is applied. You may want to try making an X- and Y-axis direction calculation for thrust so that values are a combination of the two dimensions. You could also add joystick control to provide variable thrust input.

Here’s Mark’s code snippet, which creates a simple lunar landing game in Python. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code and assets, go here.

Get your copy of Wireframe issue 37

You can read more features like this one in Wireframe issue 37, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 37 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code a homage to Lunar Lander | Wireframe #37 appeared first on Raspberry Pi.

Track your cat’s activity with a homemade speedometer

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/track-your-cats-activity-with-a-homemade-speedometer/

Firstly, hamster wheels for cats are (still) a thing. Secondly, Bengal cats run far. And Shawn Nunley on reddit is the latest to hit on this solution for kitty exercise and bonus cat stats.

Here is the wheel itself. That part was shop-bought. (Apparently it’s a ZiggyDoo Ferris Cat Wheel.)

Smol kitty in big wheel

Shawn has created a speedometer that tracks distance and speed. Every time a magnet mounted on the wheel passes a fixed sensor, a Raspberry Pi Zero writes to a log file so he can see how far and fast his felines have travelled. The wheel carries six magnets, so each pulse records 2.095 ft of travel. This project revealed that the cats do about 4-6 miles per night on their wheel, reaching speeds of 14 miles an hour.

Here’s your shopping list:

  • Raspberry Pi
  • Reed switch (Shawn got these)
  • Jumper wires
  • Ferris cat wheel

The tiny white box sticking out at the base of the wheel is the sensor

Shawn soldered a 40-pin header to his Raspberry Pi Zero and used jumper wires to connect to the sensor. He mounted the sensor to the cat wheel using hot glue and a pill box cut in half, which provided the perfect offset so it could accurately detect the magnets passing by. The code is written in Python.
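
Shawn’s own code isn’t shown here, but the pulse-counting idea might look something like this gpiozero sketch; the GPIO pin, log file name, and structure are assumptions:

import time
from signal import pause

from gpiozero import Button

FEET_PER_PULSE = 2.095    # one magnet every sixth of the wheel
sensor = Button(17)       # reed switch between GPIO17 and ground
last_pulse = time.time()

def record_pulse():
    global last_pulse
    now = time.time()
    interval = now - last_pulse
    last_pulse = now
    mph = (FEET_PER_PULSE / interval) * 3600 / 5280   # ft/s to mph
    with open("wheel.log", "a") as log:
        log.write("{:.2f}\t{}\t{:.1f}\n".format(now, FEET_PER_PULSE, mph))

sensor.when_pressed = record_pulse   # fires as each magnet closes the switch
pause()                              # wait indefinitely for pulses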

Upcoming improvements include adding RFID so the wheel can distinguish between the cats in this two-kitty household.

Shawn also plans to calculate how much energy the Bengals are expending, and he’ll soon be connecting the Raspberry Pi to their Google Cloud Platform account so you can all keep up with the cats’ stats.

The stats are currently available only locally

And, get this – this was Shawn’s first ever time doing anything with Raspberry Pi or Python. OK, so as an ex-programmer he had a bit of a head start, but he assures us he hasn’t touched the stuff since the 1990s. He explains: “I was totally shocked at how easy it was once I figured out how to get the Raspberry Pi to read a sensor.” Start to finish, the project took him just one week.

The post Track your cat’s activity with a homemade speedometer appeared first on Raspberry Pi.

Building well-architected serverless applications: Understanding application health – part 2

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-understanding-application-health-part-2/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and an explanation of the example application.

Question OPS1: How do you evaluate your serverless application’s health?

This post continues part 1 of this Operational Excellence question. Previously, I covered using Amazon CloudWatch out-of-the-box standard metrics and alerts, and structured and centralized logging with the Embedded Metrics Format.

Best practice: Use application, business, and operations metrics

Identifying key performance indicators (KPIs), including business, customer, and operations outcomes, in addition to application metrics, helps to show a higher-level view of how the application and the business are performing.

Business KPIs measure your application performance against business goals. For example, if fewer flight reservations are flowing through the system, the business would like to know.

Customer experience KPIs highlight the overall effectiveness of how customers use the application over time. Examples are perceived latency, time to find a booking, make a payment, etc.

Operational metrics help to see how operationally stable the application is over time. Examples are continuous integration/delivery/deployment feedback time, mean-time-between-failure/recovery, number of on-call pages and time to resolution, etc.

Custom Metrics

Embedded Metrics Format can also emit custom metrics to help understand your workload health’s impact on business.

The airline booking service uses AWS Step Functions, AWS Lambda, Amazon SNS, and Amazon DynamoDB.

In the confirm booking module function handler, I add a new namespace and dimension to associate this set of logs with this application and service.

metrics.set_namespace("ServerlessAirlineEMF")
metrics.put_dimensions({"service":"confirm_booking"})

Within the try block, I emit a metric for a successful booking:

metrics.put_metric("BookingSuccessful", 1, "Count")

And within the except block, I emit a metric for a failed booking:

metrics.put_metric("BookingFailed", 1, "Count")

Once I make a booking, within the CloudWatch console, I navigate to Logs | Log groups and select the Airline-ConfirmBooking-develop group. I select a log stream and find the custom metric as part of the structured log entry.

Custom metric structured log entry

I can also create a custom metric graph. Within the CloudWatch console, I navigate to Metrics. I see the ServerlessAirlineEMF Custom Namespace is available.

Custom metric namespace

I select the Namespace and the available metric.

Available metric

I select a Line graph, add a name, and see the successful booking now plotted on the graph.

Plotted CloudWatch metric

I can also visualize and analyze this metric using CloudWatch Insights.

Once a booking is made, within the CloudWatch console, I navigate to Logs | Insights. I select the /aws/lambda/Airline-ConfirmBooking-develop log group. I choose Run query which shows a list of discovered fields on the right of the console.

I can search for discovered booking related fields.

I then enter the following in the query pane to search the logs and plot the sum of BookingReference, and choose Run Query:

fields @timestamp, @message
| stats sum(@BookingReference)

CloudWatch Insights query

There are a number of other component metrics that are important to measure. Track interactions between upstream and downstream components such as message queue length, integration latency, and throttling.

Improvement plan summary:

  1. Identify user journeys and metrics from each customer transaction.
  2. Create custom metrics asynchronously instead of synchronously for improved performance, cost, and reliability outcomes.
  3. Emit business metrics from within your workload to measure application performance against business goals.
  4. Create and analyze component metrics to measure interactions with upstream and downstream components.
  5. Create and analyze operational metrics to assess the health of your continuous delivery pipeline and operational processes.

Good practice: Use distributed tracing and code is instrumented with additional context

Logging provides information on the individual point in time events the application generates. Tracing provides a wider continuous view of an application. Tracing helps to follow a user journey or transaction through the application.

AWS X-Ray is an example of a distributed tracing solution; there are a number of third-party options as well. X-Ray collects data about the requests that your application serves and also builds a service graph, which shows all the components of an application. This provides the visibility to understand how upstream and downstream services may affect workload health.

For the most comprehensive view, enable X-Ray across as many services as possible and include X-Ray tracing instrumentation in code. This is the list of AWS Services integrated with X-Ray.

X-Ray receives data from services as segments. Segments for a common request are then grouped into traces. Segments can be further broken down into more granular subsegments. Custom data key-value pairs are added to segments and subsegments with annotations and metadata. Traces can also be grouped which helps with filter expressions.

AWS Lambda instruments incoming requests for all supported languages. Lambda application code can be further instrumented to emit information about its status, correlation identifiers, and business outcomes to determine transaction flows across your workload.

X-Ray tracing for a Lambda function is enabled in the Lambda Console. Select a Lambda function. I select the Airline-ReserveBooking-develop function. In the Configuration pane, I select to enable X-Ray Active tracing.

X-Ray tracing enabled

X-Ray can also be enabled via CloudFormation with the following code:

TracingConfig:
  Mode: Active

Lambda IAM permissions to write to X-Ray are added automatically when active tracing is enabled via the console. When using CloudFormation, allow the actions xray:PutTraceSegments and xray:PutTelemetryRecords in the function’s execution role.

It is important to understand which invocations X-Ray actually traces. X-Ray applies a sampling algorithm: if an upstream service, such as API Gateway with X-Ray tracing enabled, has already sampled a request, the Lambda function request is also sampled. Without an upstream request, X-Ray traces data for the first Lambda invocation each second, and then 5% of additional invocations.

For the airline application, X-Ray tracing is initiated within the shared library with the code:

from aws_xray_sdk.core import models, patch_all, xray_recorder

Segments, subsegments, annotations, and metadata are added to functions with the following example code:

segment = xray_recorder.begin_segment('segment_name')
# Start a subsegment
subsegment = xray_recorder.begin_subsegment('subsegment_name')
# Add metadata and annotations
segment.put_metadata('key', dict, 'namespace')
subsegment.put_annotation('key', 'value')
# Close the subsegment and segment
xray_recorder.end_subsegment()
xray_recorder.end_segment()

For example, within the collect payment module, an annotation is added for a successful payment with:

tracer.put_annotation("PaymentStatus", "SUCCESS")

CloudWatch ServiceLens

Once a booking is made and payment is successful, the tracing is available in the X-Ray console.

I explore how Amazon CloudWatch ServiceLens connects metrics, logs, and X-Ray traces. Within the CloudWatch console, I navigate to ServiceLens | Service Map.

I can visualize all application resources and dependencies where X-Ray is enabled. I can trace performance or availability issues. If there was an issue connecting to SNS for example, this would be shown.

I select the Airline-CollectPayment-develop node and can view the out-of-the-box standard Lambda metrics.

I can select View Logs to jump to the CloudWatch Logs Insights console.

CloudWatch Insights Service map

I select View dashboard to see the function metrics, node map, and function details.

CloudWatch Insights Service Map dashboard

I select View traces and can filter by the custom annotation PaymentStatus. I select SUCCESS and choose Add to filter. I then select a trace.

CloudWatch Insights Filtered traces

I see the full trace details, which show the full application transaction of a payment collection.

Segments timeline

Selecting the Lambda handler subsegment – ## lambda_handler, I can view the trace Annotations and Metadata, which include the business transaction details such as Customer and PaymentStatus.

X-Ray annotations

Trace groups are another feature of X-Ray and ServiceLens. Trace groups use filter expressions such as Annotation.PaymentStatus = "FAILED" which are used to view traces that match the particular group. Service graphs can also be viewed, and CloudWatch alarms created based on the group.

CloudWatch ServiceLens provides powerful capabilities to understand application performance bottlenecks and issues, helping determine how users are impacted.

Improvement plan summary:

  1. Identify common business context and system data that are commonly present across multiple transactions.
  2. Instrument SDKs and requests to upstream/downstream services to understand the flow of a transaction across system components.

Recent announcements

There have been a number of recent announcements for X-Ray and CloudWatch to improve how to evaluate serverless application health.

Conclusion

Evaluating application health helps you identify which services should be optimized to improve your customer’s experience. In part 1, I cover out-of-the-box standard metrics and alerts, as well as structured and centralized logging. In this post, I explore custom metrics and distributed tracing and show how to use ServiceLens to view logs, metrics, and traces together.

In an upcoming post, I will cover the next Operational Excellence question from the Well-Architected Serverless Lens – Approaching application lifecycle management.

Raspberry Pi puts the heart back in mid-noughties nostalgia tech

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/raspberry-pi-puts-the-heart-back-in-mid-noughties-nostalgia-tech/

Is it still the Easter holidays? Can anyone tell? Does it matter, when we have nostalgic tech bunny pets to share with you?

These little bunnies can now do much more than when they first appeared. But they’re still incredibly cute – just look at that little lopsided-ear thing they do.

The original Nabaztag bunnies were to us in the mid-noughties what Tamagotchis were to eleven-year-olds everywhere in the 1990s. They communicated through colour, light, and sound. But now (and here’s the best bit), with a simple bit of surgery and the help of a new Raspberry Pi heart, your digital desk pet will be smarter than ever. It will be able to tell you what the weather is like, and offer local speech recognition as well as “ear-based Tai Chi”. No, we’re not sure either, but we are sure that it sounds cool. And very calming.

Part of the custom kit that will breathe new life into your bunny

The design team have created what they call the TagTagTag kit. Here are the main components of said kit:

This new venture had its first outing at the Paris Maker Faire in 2018, and it looks like we’re already too late to buy one of the limited number of ready-made upgraded bunnies. However, those of you who kept hold of your original bunny might be able to source one of Nabaztag’s custom boards to upgrade it yourself if you’re prepared to be patient – head over to the project’s funding page. You’ll also need a Raspberry Pi Zero W and a microSD card. The video below is in French, but it’s captioned.

Nabaztag’s funding page also shares all of the tech specs, schematics, and open source Python code you’re going to need.

We know this might be a tricky project for which to source all the parts, but it’s just. So. Cute. Follow the rabbit on Twitter to find out when you might be able to get your hands on a custom board.

The post Raspberry Pi puts the heart back in mid-noughties nostalgia tech appeared first on Raspberry Pi.