
Recreate 3D Monster Maze’s 8-bit labyrinth | Wireframe issue 18

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/recreate-3d-monster-mazes-8-bit-labyrinth-wireframe-issue-18/

You too can recreate the techniques behind a pioneering 3D maze game in Python. Mark Vanstone explains how.

3D Monster Maze, released in 1982 by J.K. Greye Software, written by Malcolm Evans.

3D Monster Maze

While 3D games have become more and more realistic, some may forget that 3D games on home computers started in the mists of time on machines like the Sinclair ZX81. One such pioneering game took pride of place in my collection of tapes, took many minutes to load, and required the 16K RAM pack expansion. That game was 3D Monster Maze — perhaps the most popular game released for the ZX81.

The game was released in 1982 by J.K. Greye Software, and written by Malcolm Evans. Although the graphics were incredibly low resolution by today’s standards, it became an instant hit. The idea of the game was to navigate around a randomly generated maze in search of the exit.

The problem was that a Tyrannosaurus rex also inhabited the maze, and would chase you down and have you for dinner if you didn’t escape quickly enough. The maze itself was made of straight corridors on a 16×18 grid, which the player would move around from one block to the next. The shapes of the blocks were displayed using the low-resolution pixels included in the ZX81’s character set, with 2×2 pixels per character on the screen.

The original ZX81 game drew its maze from chunky 2×2 pixel blocks.

Draw imaginary lines

There’s an interesting trick to recreating the original game’s 3D corridor display which, although quite limited, works well for a simplistic rendering of a maze. To do this, we need to draw imaginary lines diagonally from corner to corner in a square viewport: these are our vanishing point perspective guides. Then each corridor block in our view is half the width and half the height of the block nearer to us.

If we draw this out with lines showing the block positions, we get a view that looks like we’re looking down a long corridor with branches leading off left and right. In our Pygame Zero version of the maze, we’re going to use this wireframe as the basis for drawing our block elements. We’ll create graphics for blocks that are near the player, one block away, two, three, and four blocks away. We’ll need to view the blocks from the left-hand side, the right-hand side, and the centre.
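To make that halving rule concrete, here’s a small sketch (not the article’s actual listing; the 400-pixel square viewport is an assumption) that computes the nested squares each corridor block face sits in, from the nearest block down towards the vanishing point:

VIEWPORT = 400   # assumed width and height of the square viewport, in pixels

def corridor_rects(depth=5):
    # Each block face is half the width and height of the one nearer the player,
    # centred on the vanishing point in the middle of the viewport.
    rects = []
    size = VIEWPORT
    for d in range(depth):
        offset = (VIEWPORT - size) // 2
        rects.append((offset, offset, size, size))
        size //= 2
    return rects

print(corridor_rects())   # (0, 0, 400, 400), (100, 100, 200, 200), (150, 150, 100, 100), ...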

The maze display is made by drawing diagonal lines to a central vanishing point.

Once we’ve created our block graphics, we’ll need to make some data to represent the layout of the maze. In this example, the maze is built from a 10×10 list of zeros and ones. We’ll set a starting position for the player and the direction they’re facing (0–3), then we’re all set to render a view of the maze from our player’s perspective.

The display is created from furthest away to nearest, so we look four blocks away from the player (in the direction they’re looking) and draw a block if there’s one indicated by the maze data to the left; we do the same on the right, and finally in the middle. Then we move towards the player by a block and repeat the process (with larger graphics) until we get to the block the player is on.
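Here’s a rough sketch of that drawing order in Python. It isn’t Mark’s full listing: the maze list, the player’s grid position, and the draw_block() helper (which stands in for blitting the appropriate pre-drawn graphic for a given distance and side) are placeholders for the real data and images.

def draw_block(distance, side):
    pass   # placeholder: blit the pre-drawn block image for this distance and side

OFFSETS = [(0, -1), (1, 0), (0, 1), (-1, 0)]   # forward step for directions 0-3

def in_maze(maze, x, y):
    return 0 <= x < 10 and 0 <= y < 10 and maze[y][x] == 1

def draw_view(maze, px, py, direction):
    fx, fy = OFFSETS[direction]                 # one step forward
    lx, ly = OFFSETS[(direction - 1) % 4]       # one step to the player's left
    for distance in range(4, -1, -1):           # furthest blocks first
        cx, cy = px + fx * distance, py + fy * distance
        if in_maze(maze, cx + lx, cy + ly):
            draw_block(distance, 'left')
        if in_maze(maze, cx - lx, cy - ly):
            draw_block(distance, 'right')
        if distance > 0 and in_maze(maze, cx, cy):
            draw_block(distance, 'middle')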

Each visible block is drawn from the back forward to make the player’s view of the corridors.

That’s all there is to it. To move backwards and forwards, just change the position in the grid the player’s standing on and redraw the display. To turn, change the direction the player’s looking and redraw. This technique’s obviously a little limited, and will only work with corridors viewed at 90-degree angles, but it launched a whole genre of games on home computers. It really was a big deal for many twelve-year-olds — as I was at the time — and laid the path for the vibrant, fast-moving 3D games we enjoy today.

Here’s Mark’s code, which recreates 3D Monster Maze’s network of corridors in Python. To get it running on your system, you’ll need to install Pygame Zero. And to download the full code, visit our GitHub repository here.

Get your copy of Wireframe issue 18

You can read more features like this one in Wireframe issue 18, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 18 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate 3D Monster Maze’s 8-bit labyrinth | Wireframe issue 18 appeared first on Raspberry Pi.

Code your own path-following Lemmings in Python | Wireframe issue 17

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-your-own-path-following-lemmings-in-python-wireframe-issue-17/

Learn how to create your own obedient lemmings that follow any path put in front of them. Raspberry Pi’s own Rik Cross explains how.

The original Lemmings, first released for the Amiga, quickly spread like a virus to just about every computer and console of the day.

Lemmings

Lemmings is a puzzle-platformer created at DMA Design and first released for the Amiga in 1991. The aim is to guide a number of small lemming sprites to safety, navigating traps and difficult terrain along the way. Left to their own devices, the lemmings will simply follow the path in front of them, but additional ‘special powers’ given to lemmings allow them to (among other things) dig, climb, build, and block in order to create a path to freedom (or to the next level, anyway).

Code your own lemmings

I’ll show you a simple way (using Python and Pygame) in which lemmings can be made to follow the terrain in front of them. The first step is to store the level’s terrain information, which I’ve achieved by using a two-dimensional list to store the colour of each pixel in the background ‘level’ image. In my example, I’ve used the ‘Lemcraft’ tileset by Matt Hackett (of Lost Decade Games) – taken from opengameart.org – and used the Tiled software to stitch the tiles together into a level.
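One straightforward way to build that lookup (a sketch using plain Pygame calls rather than Rik’s exact code, and assuming the stitched level image is saved as level.png) is to sample every pixel of the background image into a nested list:

import pygame

level = pygame.image.load('level.png')          # assumed filename for the stitched level image

# terrain[x][y] holds the (red, green, blue, alpha) colour of each background pixel
terrain = [[level.get_at((x, y)) for y in range(level.get_height())]
           for x in range(level.get_width())]

BACKGROUND_COLOUR = (0, 0, 0, 255)              # assumed 'empty space' colour; match your image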

The algorithm we then use can be summarised as follows: check the pixels immediately below a lemming. If those pixels are the same colour as the background, there’s nothing solid beneath the lemming, so it’s falling; in this case, move the lemming down by one pixel on the y-axis. If the lemming isn’t falling, then it’s walking. In this case, we need to see whether there is a non-ground, background-coloured pixel in front of the lemming for it to move onto.

Sprites cling to the ground below them, navigating uneven terrain, and reversing direction when they hit an impassable obstacle.

If a pixel is found in front of the lemming (determined by its direction) that is low enough to get to (i.e. lower than its climbheight), then the lemming moves forward on the x-axis by one pixel, and upwards on the y-axis to the new ground level. However, if no suitable ground is found to move onto, then the lemming reverses its direction.
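Pulled together, a lemming’s per-frame logic might look something like the sketch below. It’s an illustrative outline rather than Rik’s exact listing: it assumes the terrain list and BACKGROUND_COLOUR described above, and that each lemming stores a direction (1 for right, -1 for left) and a climbheight.

class Lemming:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.direction = 1        # 1 = walking right, -1 = walking left
        self.climbheight = 4      # the article's default of four pixels

    def update(self):
        # Falling: nothing solid in the pixel directly below
        if terrain[self.x][self.y + 1] == BACKGROUND_COLOUR:
            self.y += 1
            return
        # Walking: look for a background-coloured pixel ahead, up to climbheight higher
        for step in range(self.climbheight + 1):
            if terrain[self.x + self.direction][self.y - step] == BACKGROUND_COLOUR:
                self.x += self.direction
                self.y -= step
                return
        self.direction *= -1      # no way forward, so turn around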

The algorithm is stored as a lemming’s update() method, which is executed for each lemming, each frame of the game. The sample level.png file can be edited, or swapped for another image altogether. If using a different image, just remember to update the level’s BACKGROUND_COLOUR in your code, stored as a (red, green, blue, alpha) tuple. You may also need to increase your lemming climbheight if you want them to be able to navigate a climb of more than four pixels.

There are other things you can do to make a full Lemmings clone. You could try replacing the yellow-rectangle lemmings in my example with pixel-art sprites with their own walk cycle animation (see my article in issue #14), or you could give your lemmings some of the special powers they’ll need to get to safety, achieved by creating flags that determine how lemmings interact with the terrain around them.

Here’s Rik’s code, which gets those path-following lemmings moving about in Python. To get it running on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 17

You can read more features like this one in Wireframe issue 17, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 17 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code your own path-following Lemmings in Python | Wireframe issue 17 appeared first on Raspberry Pi.

Recreate the sprite-following Options from Gradius using Python | Wireframe issue 16

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/recreate-the-sprite-following-options-from-gradius-using-python-wireframe-issue-16/

Learn how to create game objects that follow the path of the main player sprite. Raspberry Pi’s own Rik Cross explains all.

Options first appeared in 1985’s Gradius, but became a mainstay of numerous sequels and spin-offs, including the Salamander and Parodius series of games.

Gradius

First released by Konami in 1985, Gradius pushed the boundaries of the shoot-’em-up genre with its varied level design, dramatic boss fights, and innovative power-up system.

One of the most memorable of its power-ups was the Option — a small, drone-like blob that followed the player’s ship and effectively doubled its firepower.

By collecting more power-ups, it was possible to gather a cluster of death-dealing Options, which obediently moved wherever the player moved.

Recreate sprite-following in Python

There are a few different ways of recreating Gradius’ sprite-following, but in this article, I’ll show you a simple implementation that uses the player’s ‘position history’ to place other following items on the screen. As always, I’ll be using Python and Pygame to recreate this effect, and I’ll be making use of a spaceship image created by ‘pitrizzo’ from opengameart.org.

The first thing to do is to create a spaceship and a list of ‘power-up’ objects. Storing the power-ups in a list allows us to perform a simple calculation on a power-up to determine its position, as you’ll see later. As we’ll be iterating through the power-ups stored in a list, there’s no need to create a separate variable for each. Instead, we can use list comprehension to create the power-ups:

powerups = [Actor('powerup') for p in range(3)]

The player’s position history will be a list of previous positions, stored as a list of (x,y) tuples. Each time the player’s position changes, the new position is added to the front of the list (as the new first element). We only need to know the spaceship’s recent position history, so the list is also truncated to only contain the 100 most recent positions. Although not necessary, the following code can be added to allow you to see a selection (in this case every fifth) of these previous positions:

for p in previouspositions[::5]:
    screen.draw.filled_circle(p, 2, (255,0,0))

Plotting the spaceship’s position history.

Each frame of the game, this position list is used to place each of the power-ups. In our Gradius-like example, we need each of these objects to follow the player’s spaceship in a line, as if moving together in a single-file queue. To achieve this effect, a power-up’s position is determined by its position in the power-ups list, with the first power-up in the list taking up a position nearest to the player. In Python, using enumerate when iterating through a list allows us to get the power-up’s position in the list, which can then be used to determine which position in the player’s position history to use.

newposition = previouspositions[(i+1)*20]

So, the first power-up in the list (element 0 in the list) is placed at the coordinates of the twentieth ((0+1)*20) position in the spaceship’s history, the second power-up at the fortieth position, and so on. Using this simple calculation, elements are equally spaced along the spaceship’s previous path. The only thing to be careful of here is that you have enough items in the position history for the number of items you want to follow the player!
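Putting the two halves together, a condensed sketch of the idea (assuming the spaceship, powerups, and previouspositions variables described above) might sit in the game’s update() function like this:

def update():
    # Record the newest position at the front of the history, keeping only the last 100
    previouspositions.insert(0, (spaceship.x, spaceship.y))
    del previouspositions[100:]

    # Place each power-up further back along the path the spaceship has flown
    for i, powerup in enumerate(powerups):
        powerup.pos = previouspositions[(i + 1) * 20]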

Power-ups following a player sprite, using the player’s position history.

This leaves one more question to answer; where do we place these power-ups initially, when the spaceship has no position history? There are a few different ways of solving this problem, but the simplest is just to generate a fictitious position history at the beginning of the game. As I want power-ups to be lined up behind the spaceship initially, I again used list comprehension

to generate a list of 100 positions with ever-decreasing x-coordinates.

previouspositions = [(spaceship.x - i*spaceship.speed,spaceship.y) for i in range(100)]

With an initial spaceship position of (400,400) and a spaceship.speed of 4, this means the list will initially contain the following coordinates:

previouspositions = [(400,400),(396,400),(392,400),(388,400),...]

Storing our player’s previous position history has allowed us to create path-following power-ups with very little code. The idea of storing an object’s history can have very powerful applications. For example, a paint program could store previous commands that have been executed, and include an ‘undo’ button that can work backwards through the commands.

Here’s Rik’s code, which recreates those sprite-following Options in Python. To get it running on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 16

You can read more features like this one in Wireframe issue 16, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 16 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate the sprite-following Options from Gradius using Python | Wireframe issue 16 appeared first on Raspberry Pi.

Coding an isometric game map | Wireframe issue 15

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/coding-an-isometric-game-map-wireframe-issue-15/

Isometric graphics give 2D games the illusion of depth. Mark Vanstone explains how to make an isometric game map of your own.

Published by Quicksilva in 1983, Ant Attack was one of the earliest games to use isometric graphics. And you threw grenades at giant ants. It was brilliant.

Isometric projection

Most early arcade games were 2D, but in 1982, a new dimension emerged: isometric projection. The first isometric game to hit arcades was Sega’s pseudo-3D shooter, Zaxxon. The eye-catching format soon caught on, and other isometric titles followed: Q*bert came out the same year, and in 1983 the first isometric game for home computers was published: Ant Attack, written by Sandy White.

Ant Attack

Ant Attack was first released on the ZX Spectrum, and the aim of the game was for the player to find and rescue a hostage in a city infested with giant ants. The isometric map has since been used by countless titles, including Ultimate Play The Game’s classics Knight Lore and Alien 8, and my own educational history series ArcVenture.

Let’s look at how an isometric display is created, and code a simple example of how this can be done in Pygame Zero — so let’s start with the basics. The isometric view displays objects as if you’re looking down at 45 degrees onto them, so the top of a cube looks like a diamond shape. The scene is made by drawing cubes on a diagonal grid so that the cubes overlap and create solid-looking structures. Additional layers can be used above them to create the illusion of height.

Blocks are drawn from the back forward, one line at a time and then one layer on top of another until the whole map is drawn.

The cubes are actually two-dimensional bitmaps, which we start printing at the top of the display and move along a diagonal line, drawing cubes as we go. The map is defined by a three-dimensional list (or array). The list is the width of the map by the height of the map, and has as many layers as we want to represent in the upward direction. In our example, we’ll represent the floor as the value 0 and a block as value 1. We’ll make a border around the map and create some arches and pyramids, but you could use any method you like — such as a map editor — to create the map data.

To make things a bit easier on the processor, we only need to draw cubes that are visible in the window, so we can do a check of the coordinates before we draw each cube. Once we’ve looped over the x, y, and z axes of the data list, we should have a 3D map displayed. The whole map doesn’t fit in the window, and in a full game, the map is likely to be many times the size of the screen. To see more of the map, we can add some keyboard controls.
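As a rough sketch of how this fits together in Pygame Zero, the listing below builds a simple bordered map and draws it back to front. The 'block' image name, the 64×32 tile footprint (giving steps of 32 and 16 pixels), and the drawing origin are assumptions for illustration rather than the values used in Mark’s code.

MAP_W, MAP_H, LAYERS = 10, 10, 2
mapdata = [[[0 for z in range(LAYERS)] for y in range(MAP_H)] for x in range(MAP_W)]

# Put blocks around the border of the map, plus a two-block tower, so there's something to see
for i in range(MAP_W):
    mapdata[i][0][0] = mapdata[i][MAP_H - 1][0] = 1
    mapdata[0][i][0] = mapdata[MAP_W - 1][i][0] = 1
mapdata[4][4][0] = mapdata[4][4][1] = 1

ORIGIN_X, ORIGIN_Y = 400, 100                        # where the far corner of the map is drawn

def draw():
    screen.clear()
    for z in range(LAYERS):                          # bottom layer first
        for y in range(MAP_H):                       # then back to front, one line at a time
            for x in range(MAP_W):
                if mapdata[x][y][z] == 1:
                    sx = ORIGIN_X + (x - y) * 32             # right for x, left for y
                    sy = ORIGIN_Y + (x + y) * 16 - z * 32    # down the diagonal, up per layer
                    if -64 < sx < 800 and -64 < sy < 600:    # skip blocks outside the window
                        screen.blit('block', (sx, sy))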

Here’s Mark’s isometric map, coded in Python. To get it running on your system, you’ll first need to install Pygame Zero. And to download the full code, visit our GitHub repository here.

If we detect keyboard presses in the update() function, all we need to do to move the map is change the coordinates we start drawing the map from. If we start drawing further to the left, the right-hand side of the map emerges, and if we draw the map higher, the lower part of the map can be seen.

We now have a basic map made of cubes that we can move around the window. If we want to make this into a game, we can expand the way the data represents the display. We could add differently shaped blocks represented by different numbers in the data, and we could include a player block which gets drawn in the draw() function and can be moved around the map. We could also have some enemies moving around — and before we know it, we’ll have a game a bit like Ant Attack.

Tiled

When writing games with large isometric maps, an editor will come in handy. You can write your own, but there are several out there that you can use. One very good one is called Tiled and can be downloaded free from mapeditor.org. Tiled allows you to define your own tilesets and export the data in various formats, including JSON, which can be easily read into Python.

Get your copy of Wireframe issue 15

You can read more features like this one in Wireframe issue 15, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 15 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Coding an isometric game map | Wireframe issue 15 appeared first on Raspberry Pi.

Make a Donkey Kong–style walk cycle | Wireframe issue 14

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/make-a-donkey-kong-style-walk-cycle-wireframe-issue-14/

Effective animation gave Donkey Kong barrels of personality. Raspberry Pi’s own Rik Cross explains how to create a similar walk cycle.

Donkey Kong wasn’t the first game to feature an animated character who could walk and jump, but on its release in 1981, it certainly had more personality than the games that came before it. You only have to compare Donkey Kong to another Nintendo arcade game that came out just two years earlier — the half-forgotten top-down shooter Sheriff — to see how quickly both technology and pixel art moved on in that brief period. Although simple by modern standards, Donkey Kong’s hero Jumpman (later known as Mario) packed movement and personality into just a few frames of animation.

In this article, I’ll show you how to use Python and Pygame to create a character with a simple walk cycle animation like Jumpman’s in Donkey Kong. The code can, however, be adapted for any game object that requires animation, and even for multiple game object animations, as I’ll explain later.

Jumpman’s (aka Mario’s) walk cycle comprised just three frames of animation.

Firstly, we’ll need some images to animate. As this article is focused on the animation code and not the theory behind creating walk cycle images, I grabbed some suitable images created by Kenney Vleugels and available at opengameart.org.

Let’s start by animating the player with a simple walk cycle. The two images to be used in the animation are stored in an images list, and an animationindex variable keeps track of the index of the current image in the list to display. So, for a very simple animation with just two different frames, the images list will contain two different images:

images = ['walkleft1', 'walkleft2']

To achieve a looping animation, the animationindex is repeatedly incremented, and is reset to 0 once the end of the images list is reached. Displaying the current image can then be achieved by using the animationindex to reference and draw the appropriate image in the animation cycle:

self.image = self.images[self.animationindex]

A list of images along with an index is used to loop through an animation cycle.

The problem with the code described so far is that the animationindex is incremented once per frame, and so the walk cycle will happen way too quickly, and won’t look natural. To solve this problem, we need to tell the player to update its animation every few frames, rather than every frame. To achieve this, we need another couple of variables; I’ll use animationdelay to store the number of frames to skip between displayed images, and animationtimer to store the number of frames since the last image change.

Therefore, the code needed to animate the player becomes:

self.animationtimer += 1
if self.animationtimer >= self.animationdelay:
    self.animationtimer = 0
    self.animationindex += 1
    if self.animationindex > len(self.images) - 1:
        self.animationindex = 0
self.image = self.images[self.animationindex]

So we have a player that appears to be walking, but now the problem is that the player walks constantly, and always in the same direction! The rest of this article will show you how to solve these two related problems.

There are a few different ways to approach this problem, but the method I’ll use is to make use of game object states, and then have different animations for each state. This method is a little more complicated, but it’s very adaptable.

The first thing to do is to decide on what the player’s ‘states’ might be — stand, walkleft, and walkright will do as a start. Just as we did with our previous single animation, we can now define a list of images for each of the possible player’s states. Again, there are lots of ways of structuring this data, but I’ve opted for a Python dictionary linking states and image lists:

self.images = { 'stand'     : ['stand1'],
                'walkleft'  : ['walkleft1', 'walkleft2'],
                'walkright' : ['walkright1', 'walkright2']
              }

The player’s state can then be stored, and the correct image obtained by using the value of state along with the animationindex:

self.image = self.images[self.state][self.animationindex]

The correct player state can then be set from keyboard input: the player is set to walkleft if the left arrow key is pressed, or walkright if the right arrow key is pressed. If neither key is pressed, the player is set to the stand state, whose image list contains a single image of the player facing the camera.
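In Pygame Zero, that state selection can be a few lines in the update() function. This is a sketch (the player object and the walking speed of 2 pixels per frame are assumptions), and it relies on the animation-timer code shown earlier to pick the frame once the state is set:

def update():
    if keyboard.left:
        player.state = 'walkleft'
        player.x -= 2
    elif keyboard.right:
        player.state = 'walkright'
        player.x += 2
    else:
        player.state = 'stand'
    # ...then run the animation timer/index code shown above to choose player.image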

Animation cycles can be linked to player ‘states’.

For simplicity, a maximum of two images are used for each animation cycle; adding more images would create a smoother or more realistic animation.

Using the code above, it would also be possible to easily add additional states for, say, jumping or fighting enemies. You could even take things further by defining an Animation() object for each player state. This way, you could specify the speed and other properties (such as whether or not to loop) for each animation separately, giving you greater flexibility.

Here’s Rik’s animated walk cycle, coded in Python. To get it running on your system, you’ll first need to install Pygame Zero. And to download the full code, go here.

Get your copy of Wireframe issue 14

You can read more features like this one in Wireframe issue 14, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 14 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Make a Donkey Kong–style walk cycle | Wireframe issue 14 appeared first on Raspberry Pi.

Create an arcade-style zooming starfield effect | Wireframe issue 13

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/create-an-arcade-style-zooming-starfield-effect-wireframe-issue-13/

Unparalleled depth in a 2D game: Pygame Zero extraordinaire Daniel Pope shows you how to recreate a zooming starfield effect straight out of the eighties arcade classic Gyruss.

The crowded, noisy realm of eighties amusement arcades presented something of a challenge for developers of the time: how can you make your game stand out from all the other ones surrounding it? Gyruss, released by Konami in 1983, came up with one solution. Although it was yet another alien blaster — one of a slew of similar shooters that arrived in the wake of Space Invaders, released in 1978 — it differed in one important respect: its zooming starfield created the illusion that the player’s craft was hurtling through space, and that aliens were emerging from the abyss to attack it.

This made Gyruss an entry in the ‘tube shooter’ genre — one that was first defined by Atari’s classic Tempest in 1981. But where Tempest used a vector display to create a 3D environment where enemies clambered up a series of tunnels, Gyruss used more common hardware and conventional sprites to render its aliens on the screen. Gyruss was designed by Yoshiki Okamoto (who would later go on to produce the hit Street Fighter II, among other games, at Capcom), and was born from his affection for Galaga, a 2D shoot-’em-up created by Namco.

Under the surface, Gyruss is still a 2D game like Galaga, but the cunning use of sprite animation and that zooming star effect created a sense of dynamism that its rivals lacked. The tubular design also meant that the player could move in a circle around the edge of the play area, rather than moving left and right at the bottom of the screen, as in Galaga and other fixed-screen shooters like it. Gyruss was one of the most popular arcade games of its period, probably in part because of its attention-grabbing design.

Here’s Daniel Pope’s example code, which creates a Gyruss-style zooming starfield effect in Python. To get it running on your system, you’ll first need to install Pygame Zero — find installation instructions here, and download the Python code here.

The code sample above, written by Daniel Pope, shows you how a zooming starfield can work in Pygame Zero — and how, thanks to modern hardware, we can heighten the sense of movement in a way that Konami’s engineers couldn’t have hoped to achieve about 30 years ago. The code generates a cluster of stars on the screen, and creates the illusion of depth and movement by redrawing them in a new position in a randomly chosen direction each frame.

At the same time, the stars gradually increase their brightness over time, as if they’re getting closer. As a modern twist, Pope has also added an extra warp factor: holding down the Space bar increases the stars’ velocity, making that zoom into space even more exhilarating.
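To give a flavour of how such an effect fits together, here’s a minimal Pygame Zero sketch rather than Daniel’s actual listing: each star keeps a position and an outward velocity, brightens each frame, accelerates slightly as it ‘approaches’, and respawns at the centre once it leaves the window. The warp multiplier applied while Space is held is an assumption.

import math, random

WIDTH, HEIGHT = 800, 600

def new_star():
    angle = random.uniform(0, 2 * math.pi)             # a random outward direction
    speed = random.uniform(0.5, 3)
    return {'x': WIDTH / 2, 'y': HEIGHT / 2,
            'vx': speed * math.cos(angle),
            'vy': speed * math.sin(angle),
            'brightness': 50}

stars = [new_star() for _ in range(200)]

def update():
    warp = 4 if keyboard.space else 1                   # assumed warp factor when Space is held
    for star in stars:
        star['x'] += star['vx'] * warp
        star['y'] += star['vy'] * warp
        star['vx'] *= 1.02                              # speed up as the star gets 'closer'
        star['vy'] *= 1.02
        star['brightness'] = min(255, star['brightness'] + 2)
        if not (0 <= star['x'] < WIDTH and 0 <= star['y'] < HEIGHT):
            star.update(new_star())                     # recycle stars that leave the screen

def draw():
    screen.clear()
    for star in stars:
        c = star['brightness']
        screen.draw.filled_circle((star['x'], star['y']), 2, (c, c, c))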

Get your copy of Wireframe issue 13

You can read the rest of the feature in Wireframe issue 13, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 13 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Create an arcade-style zooming starfield effect | Wireframe issue 13 appeared first on Raspberry Pi.

Recreate iconic 1980s game explosions | Wireframe issue 12

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/recreate-bombermans-iconic-explosions-wireframe-issue-12/

Rik Cross, Senior Learning Manager here at the Raspberry Pi Foundation, shows you how to recreate the deadly explosions in the classic game, Bomberman.

An early incarnation of Bomberman on the NES; the series is still going strong today under Konami’s wing.

Creating Bomberman

Bomberman was first released in the early 1980s as a tech demo for a BASIC compiler, but soon became a popular series that’s still going today. Bomberman sees players use bombs to destroy enemies and uncover doors behind destructible tiles. In this article, I’ll show you how to recreate the bombs that explode in four directions, destroying parts of the level as well as any players in their path!

The game level is a tilemap stored as a two-dimensional array. Each tile in the map is a Tile object, which contains the tile type and corresponding image. For simplicity, a tile can be set to one of five types: GROUND, WALL, BRICK, BOMB, or EXPLOSION. In this example code, BRICK and GROUND tiles can be exploded with bombs, but WALL tiles cannot; of course, this behaviour can be changed.

Each Tile object also has a timer, which is decremented each frame of the game. When a tile’s timer reaches 0, an action is carried out, which is dependent on the tile type. BOMB tiles (and surrounding tiles) turn into EXPLOSION tiles after a short delay, and EXPLOSION tiles eventually turn back into GROUND. At the start of the game, the tilemap for the level is generated, in this case consisting of mostly GROUND, with some WALL and a couple of BRICK tiles. The player starts off in the top-left tile, and moves by using the arrow keys. Pressing the SPACE key will place a bomb in the player’s current tile, which is achieved by setting the Tile at the player’s position to BOMB. The tile’s timer is also set to a small number, and once this timer is decremented to 0, the bomb tile and the tiles around it are set to EXPLOSION.

Here’s Rik’s example code, which recreates Bomberman’s explosions in Python. To get it running on your system, you’ll first need to install Pygame Zero — you can find full instructions here. And you can download the code here.

The bomb explodes outwards in four directions, with a range determined by the RANGE constant, which in our code is 3. As the bomb explodes out to the right, for example, the tile to the right of the bomb is checked. If such a tile exists (i.e. the position isn’t out of the level bounds) and can be exploded, then the tile’s type is set to EXPLOSION and the next tile to the right is checked. If the explosion moves out of the level bounds, or hits a WALL tile, then the explosion will stop radiating in that direction. This process is then repeated for the other directions.

There’s a nice trick for exploding the bomb without repeating the code four times, and it relies on the sine and cosine values for the four direction angles. The angles are 0° (up), 90° (right), 180° (down) and 270° (left). When exploding to the right (at an angle of 90°), sin(90) is 1 and cos(90) is 0, which corresponds to the offset direction on the x- and y-axis respectively. These values can be multiplied by the tile offset, to explode the bomb in all four directions.
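Here’s a compact sketch of that trick. It isn’t Rik’s exact code: the tiles grid of type strings and the 13×13 map size are stand-ins, but the sine/cosine direction offsets and the stop-at-a-wall behaviour follow the description above (the minus sign on the cosine is because screen y coordinates increase downwards).

import math

tiles = [['GROUND'] * 13 for _ in range(13)]       # stand-in grid of tile-type strings
RANGE = 3

def explode(bx, by):
    tiles[by][bx] = 'EXPLOSION'                    # the bomb's own tile
    for angle in (0, 90, 180, 270):                # up, right, down, left
        dx = int(round(math.sin(math.radians(angle))))
        dy = -int(round(math.cos(math.radians(angle))))
        for dist in range(1, RANGE + 1):
            x, y = bx + dx * dist, by + dy * dist
            if not (0 <= x < 13 and 0 <= y < 13) or tiles[y][x] == 'WALL':
                break                              # out of bounds or blocked: stop this direction
            tiles[y][x] = 'EXPLOSION'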

Get your copy of Wireframe issue 12

You can read the rest of the feature in Wireframe issue 12, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press – delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 12 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusives. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate iconic 1980s game explosions | Wireframe issue 12 appeared first on Raspberry Pi.

Coding Breakout’s brick-breaking action | Wireframe #11

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/coding-breakouts-brick-breaking-action-wireframe-11/

Atari’s Breakout was one of the earliest video game blockbusters. Here’s how to recreate it in Python.

The original Breakout, designed by Nolan Bushnell and Steve Bristow, and famously built by a young Steve Wozniak.

Atari Breakout

The games industry owes a lot to the humble bat and ball. Designed by Allan Alcorn in 1972, Pong was a simplified version of table tennis, where the player moved a bat and scored points by ricocheting a ball past their opponent. About four years later, Atari’s Nolan Bushnell and Steve Bristow figured out a way of making Pong into a single-player game. The result was 1976’s Breakout, which rotated Pong’s action 90 degrees and replaced the second player with a wall of bricks.

Points were scored by deflecting the ball off the bat and destroying the bricks; as in Pong, the player would lose the game if the ball left the play area. Breakout was a hit for Atari, and remains one of those game ideas that has never quite faded from view; in the 1980s, Taito’s Arkanoid updated the action with collectible power-ups, multiple stages with different layouts of bricks, and enemies that disrupted the trajectory of the player’s ball.

Breakout had an impact on other genres too: game designer Tomohiro Nishikado came up with the idea for Space Invaders by switching Breakout’s bat with a base that shot bullets, while Breakout’s bricks became aliens that moved and fired back at the player.

Courtesy of Daniel Pope, here’s a simple Breakout game written in Python. To get it running on your system, you’ll first need to install Pygame Zero. And download the code for Breakout here.

Bricks and balls in Python

The code above, written by Daniel Pope, shows you just how easy it is to get a basic version of Breakout up and running in Python, using the Pygame Zero library. Like Atari’s original, this version draws a wall of blocks on the screen, sets a ball bouncing around, and gives the player a paddle, which can be controlled by moving the mouse left and right. The ball physics are simple to grasp too. The ball has a velocity, vel – which is a vector, or a pair of numbers: vx for the x direction and vy for the y direction.

The program loop checks the position of the ball and whether it’s collided with a brick or the edge of the play area. If the ball hits the left side of the play area, the ball’s x velocity vx is set to positive, thus sending it bouncing to the right. If the ball hits the right side, vx is set to a negative number, so the ball moves left. Likewise, when the ball hits the top or bottom of a brick, we set the sign of the y velocity vy, and so on for the collisions with the bat and the top of the play area and the sides of bricks. Collisions set the sign of vx and vy but never change the magnitude. This is called a perfectly elastic collision.
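A minimal sketch of that bounce logic might look like this. The play-area size, starting speeds, and Ball class are assumptions for illustration; brick and bat collisions are reduced to a comment, since they follow the same sign-flipping rule.

WIDTH, HEIGHT = 600, 480               # assumed play-area size

class Ball:
    def __init__(self):
        self.x, self.y = WIDTH / 2, HEIGHT / 2
        self.vx, self.vy = 4, -4       # assumed starting velocity

    def update(self):
        self.x += self.vx
        self.y += self.vy
        if self.x < 0:                 # left wall: force the ball rightwards
            self.vx = abs(self.vx)
        elif self.x > WIDTH:           # right wall: force it leftwards
            self.vx = -abs(self.vx)
        if self.y < 0:                 # top of the play area: send it back down
            self.vy = abs(self.vy)
        # Hitting the bat, or the top or bottom of a brick, flips the sign of vy
        # (and brick sides flip vx); the magnitude never changes, so every
        # collision is perfectly elastic.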

To this basic framework, you could add all kinds of additional features: a 2012 talk by developers Martin Jonasson and Petri Purho, which you can watch on YouTube here, shows how the Breakout concept can be given new life with the addition of a few modern design ideas.

You can read this feature and more besides in Wireframe issue 11, available now in Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from us – worldwide delivery is available. And if you’d like to own a handy digital version of the magazine, you can also download a free PDF.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusives, and for subscriptions, visit the Wireframe website to save 49% compared to newsstand pricing!

The post Coding Breakout’s brick-breaking action | Wireframe #11 appeared first on Raspberry Pi.

Coding Pang’s sprite spawning mechanic | Wireframe #10

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/pang-sprite-spawning-wireframe-10/

Rik Cross, Senior Learning Manager here at Raspberry Pi, shows you how to recreate the spawning of objects found in the balloon-bursting arcade gem Pang.

Pang: bringing balloon-hating to the masses since 1989.

Capcom’s Pang

Programmed by Mitchell and distributed by Capcom, Pang was first released as an arcade game in 1989, but was later ported to a whole host of home computers, including the ZX Spectrum, Amiga, and Commodore 64. The aim in Pang is to destroy balloons as they bounce around the screen, either alone or working together with another player, in increasingly elaborate levels. Destroying a balloon can sometimes also spawn a power-up, freezing all balloons for a short time or giving the player a better weapon with which to destroy balloons.

Initially, the player is faced with the task of destroying a small number of large balloons. However, destroying a large balloon spawns two smaller balloons, which in turn each spawn two smaller balloons, and so on. Each level is only complete once all balloons have been broken up and completely destroyed. To add challenge to the game, different-sized balloons have different attributes – smaller balloons move faster and don’t bounce as high, making them more difficult to destroy.

Rik’s spawning balloons, up and running in Pygame Zero. Hit space to divide them into smaller balloons.

Spawning balloons

There are a few different ways to achieve this game mechanic, but the approach I’ll take in my example is to use various features of object orientation (as usual, my example code has been written in Python, using the Pygame Zero library). It’s also worth mentioning that for brevity, the example code only deals with simple spawning and destroying of objects, and doesn’t handle balloon movement or collision detection.

The base Enemy class is simply a subclass of Pygame Zero’s Actor class, including a static enemies list to keep track of all enemies that exist within a level. The Enemy subclass also includes a destroy() method, which removes an enemy from the enemies list and deletes the object.

There are then three further subclasses of the Enemy class, called LargeEnemy, MediumEnemy, and SmallEnemy. Each of these subclasses is instantiated with a specific image, and also includes a destroy() method. This method simply calls the same destroy() method of its parent Enemy class, but additionally creates two more objects nearby — with large enemies spawning two medium enemies, and medium enemies spawning two small enemies.
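A stripped-down sketch of that class structure is below. The image names and spawn offsets are placeholders rather than the ones in Rik’s code, and it assumes Pygame Zero’s Actor class is available as usual, but the shape is the same: each destroy() call removes the enemy and, for the larger sizes, creates two smaller enemies nearby.

class Enemy(Actor):
    enemies = []                                  # every enemy in the current level

    def __init__(self, image, pos):
        super().__init__(image, pos)
        Enemy.enemies.append(self)

    def destroy(self):
        Enemy.enemies.remove(self)

class SmallEnemy(Enemy):
    def __init__(self, pos):
        super().__init__('small_enemy', pos)      # placeholder image name

class MediumEnemy(Enemy):
    def __init__(self, pos):
        super().__init__('medium_enemy', pos)

    def destroy(self):
        super().destroy()
        SmallEnemy((self.x - 20, self.y))         # spawn two smaller enemies nearby
        SmallEnemy((self.x + 20, self.y))

class LargeEnemy(Enemy):
    def __init__(self, pos):
        super().__init__('large_enemy', pos)

    def destroy(self):
        super().destroy()
        MediumEnemy((self.x - 30, self.y))
        MediumEnemy((self.x + 30, self.y))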


Here’s Rik’s example code, which recreates Pang’s spawning balloons in Python. To get it running on your system, you’ll first need to install Pygame Zero – you can find full instructions here. And you can download the code here.

In the example code, initially two LargeEnemy objects are created, with the first object in the enemies list having its destroy() method called each time the Space key is pressed. If you run this code, you’ll see that the first large enemy is destroyed and two medium-sized enemies are created. This chain reaction of destroying and creating enemies continues until all SmallEnemy objects are destroyed (small enemies don’t create any other enemies when destroyed).

As I mentioned earlier, this isn’t the only way of achieving this behaviour, and there are advantages and disadvantages to this approach. Using subclasses for each size of enemy allows for a lot of customisation, but could get unwieldy if much more than three enemy sizes are required. One alternative is to simply have a single Enemy class, with a size attribute. The enemy’s image, the entities it creates when destroyed, and even the movement speed and bounce height could all depend on the value of the enemy size.

You can read the rest of the feature in Wireframe issue 10, available now in Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from us – worldwide delivery is available. And if you’d like to own a handy digital version of the magazine, you can also download a free PDF.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusives, and for subscriptions, visit the Wireframe website to save 49% compared to newsstand pricing!

The post Coding Pang’s sprite spawning mechanic | Wireframe #10 appeared first on Raspberry Pi.

Laser-engraved Raspberry Pi hologram

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/laser-engraved-raspberry-pi-hologram/

Inspired by an old episode of Pimoroni’s Bilge Tank, and with easy access to the laser cutter at the Raspberry Pi Foundation office, I thought it would be fun to create a light-up multi-layered hologram using a Raspberry Pi and the Pimoroni Unicorn pHAT.


Break it to make it

First, I broke down the Raspberry Pi logo into three separate images — the black outline, the green leaves, and the red berry.


Fun fact: did you know that Pimoroni’s Paul Beech designed this logo as part of the ‘design us a logo’ contest we ran all the way back in August 2011?

Once I had the three separate files, I laser-engraved them onto 4cm-wide pieces of 3mm-thick clear acrylic. As there are four lines of LEDs on the Unicorn pHAT, I cut the fourth piece to illuminate the background.


To keep the engraved acrylic pieces together, I cut out a pair of acrylic brackets (see above) with four 3mm indentations. Then, after a bit of fiddling with the Unicorn pHAT library, I was able to light the pHAT’s rows of LEDs in white, red, green, and white.


The final result looks pretty spectacular, especially in the dark, and you can build on this basic idea to create fun animations — especially if you use a HAT with more rows of LEDs.

Iterations

This is just a prototype. I plan on building a sturdier frame for the pieces that securely fits a Raspberry Pi Zero W and lets users replace layers easily. As with many projects, I’m sure this will grow and grow as each interaction inspires a new add-on.

How would you build upon this basic principle?

Oh…

…we also laser-engraved this Cadbury’s Creme Egg.

The post Laser-engraved Raspberry Pi hologram appeared first on Raspberry Pi.

Coding Space Invaders’ disintegrating shields | Wireframe #9

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/coding-space-invaders-disintegrating-shields-wireframe-9/

They add strategy to a genre-defining shooter. Andrew Gillett lifts the lid on Space Invaders’ disintegrating shields.


Released in 1978, Space Invaders introduced ideas so fundamental to video games that it’s hard to imagine a time before them. And it did this using custom-made hardware which by today’s standards is unimaginably slow.

Space Invaders ran on an Intel 8080 CPU operating at 2MHz. With such meagre processing power, merely moving sprites around the screen was a struggle. In modern 2D games, at the start of each frame the entire screen is reset, then all objects are displayed.

For Space Invaders’ hardware, this process would have been too slow. Instead, each time a sprite needs to move, the game first erases the sprite from the screen, then redraws it in the new position. The game also updates only one alien per frame — which leads to the effect of the aliens moving faster when there are fewer of them. These techniques cut down the number of pixels which need to be updated each frame, from nearly 60,000 to around a hundred.


One of Space Invaders’ most notable features is its four shields. These provide shelter from enemy fire, but deteriorate after repeated hits. The player can take advantage of the shields’ destructible nature — by repeatedly firing at the same place on a shield’s underside, a narrow gap can be created which can then be used to take out enemies. (Of course, the player can also be shot through the same gap.)

The system of updating only the minimum necessary number of pixels works well as long as there’s no need for objects to overlap. In the case of the shields, though, what happens when objects do overlap is fundamental to how they work. Whenever a shot hits something, it’s replaced by an explosion sprite. A few frames later, the explosion sprite is deleted from the screen. If the explosion sprite overlapped with a shield, that part of the shield is also deleted.


Here’s a code snippet that shows Andrew’s Space Invaders-style disintegrating shields working in Python. To get it running on your system, you’ll need to install Pygame Zero — you can find full instructions here. And download the above code here.

The example code displays four shields, then bombards them with a series of shots which explode on impact. I’m using sprites which have been scaled up by ten, to make it easier to see what’s going on.

We first create two empty lists. The first, shots, holds details of any shots (and explosions) currently on screen; these will be displayed every frame. Each entry in the shots list is a dictionary data structure containing three values: a position, the sprite to be displayed, and whether the shot is in ‘exploding’ mode — in which case it’s displayed in the same position for a few frames before being deleted.

The second list, to_delete, is for sprites which need to be deleted from the screen. For simplicity, I’m using separate copies of the shot and explosion sprites where the white pixels have been changed to black (the other pixels in these sprites are set as transparent).

The function create_random_shot is called every half-second. The combination of dividing the maximum value by ten, choosing a random whole number between zero and the maximum value, and then multiplying the resulting random number by ten, ensures that the chosen X coordinate is a multiple of ten.
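In other words, something along these lines (a sketch, with an 800-pixel screen width assumed):

import random

WIDTH = 800                                   # assumed screen width

def random_shot_x():
    # Pick a whole number up to WIDTH / 10, then scale it back up,
    # so the shot's X coordinate always lands on a multiple of ten.
    return random.randint(0, WIDTH // 10) * 10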


Andrew’s Space Invaders shields up and running in Pygame Zero.

In the draw function, we first check to see if it’s the first frame, as we only want to display the shields on that frame. The screen.blit method is used to display sprites, and Pygame Zero’s images object is used to specify which sprite should be displayed. We then display all sprites in the to_delete list, after which we reset it to being an empty list. Finally we display all sprites in the shots list.


In the update function, we go through all sprites in the shots list, in reverse order. Going through the list backwards avoids problems that can occur when deleting items from a list inside a for loop. For each shot, we first check to see if it’s in ‘exploding’ mode. If so, its timer is reduced each frame — when it hits zero we add the shot to the to_delete list, then delete it from shots.

If the item is a normal shot rather than an explosion, we add its current position to to_delete, then update the shot’s position to move the sprite down the screen. We next check to see if the sprite has either gone off the bottom of the screen or collided with something. Pygame’s get_at method gives us the colour of a pixel at a given position. If a collision occurs, we switch the shot into ‘exploding’ mode — the explosion sprite will be displayed for five frames.
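As a rough outline of that update pass (a sketch rather than Andrew’s listing, assuming the shots and to_delete lists described above, a HEIGHT constant, a black background colour, and the 10-pixel step that matches the scaled-up sprites):

def update():
    for i in range(len(shots) - 1, -1, -1):        # iterate in reverse so deletion is safe
        shot = shots[i]
        if shot['exploding']:
            shot['timer'] -= 1
            if shot['timer'] <= 0:
                to_delete.append(dict(shot))       # blank out the explosion next frame
                del shots[i]
        else:
            to_delete.append(dict(shot))           # erase the shot from its old position
                                                   # (the real code swaps in the blacked-out sprite)
            x, y = shot['pos']
            shot['pos'] = (x, y + 10)              # move the shot down the screen
            x, y = shot['pos']
            if y >= HEIGHT:
                del shots[i]                       # fell off the bottom of the screen
            elif screen.surface.get_at((x, y)) != (0, 0, 0, 255):
                shot['exploding'] = True           # hit something: show the explosion
                shot['timer'] = 5                  # for five frames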

You can read the rest of the feature in Wireframe issue 9, available now in Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from us – worldwide delivery is available. And if you’d like to own a handy digital version of the magazine, you can also download a free PDF.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusives, and for subscriptions, visit the Wireframe website to save 49% compared to newsstand pricing!

The post Coding Space Invaders’ disintegrating shields | Wireframe #9 appeared first on Raspberry Pi.

Improve Build Performance and Save Time Using Local Caching in AWS CodeBuild

Post Syndicated from Kausalya Rani Krishna Samy original https://aws.amazon.com/blogs/devops/improve-build-performance-and-save-time-using-local-caching-in-aws-codebuild/

AWS CodeBuild now supports local caching, which makes it possible for you to persist intermediate build artifacts locally on the build host so that they are available for reuse in subsequent build runs.

Your build project can use one of two types of caching: Amazon S3 or local. In this blog post, we will discuss how to use the local caching feature.

Local caching stores a cache on a build host. The cache is available to that build host only for a limited time and until another build is complete. For example, when you are dealing with large Java projects, compilation might take a long time. You can speed up subsequent builds by using local caching. This is a good option for large intermediate build artifacts because the cache is immediately available on the build host.

Local caching increases build performance for:

  • Projects with a large, monolithic source code repository.
  • Projects that generate and reuse many intermediate build artifacts.
  • Projects that build large Docker images.
  • Projects with many source dependencies.

To use local caching

1. Open the AWS CodeBuild console at https://console.aws.amazon.com/codesuite/codebuild/home.

2. Choose Create project.

3. In Project configuration, enter a name and description for the build project.

4. In Source, for Source provider, choose the source code provider type. In this example, we use an AWS CodeCommit repository name.

5. For Environment image, choose Managed image or Custom image, as appropriate. For Environment type, choose Linux or Windows Server. Specify a runtime, runtime version, and service role for your project.

6. Configure the buildspec file for your project.

7. In Artifacts, expand Additional Configuration. For Cache type, choose Local, as shown here.

Local caching supports the following caching modes:

Source cache mode caches Git metadata for primary and secondary sources. After the cache is created, subsequent builds pull only the change between commits. This mode is a good choice for projects with a clean working directory and a source that is a large Git repository. If you choose this option and your project does not use a Git repository (GitHub, GitHub Enterprise, or Bitbucket), the option is ignored. No changes are required in the buildspec file.

Docker layer cache mode caches existing Docker layers. This mode is a good choice for projects that build or pull large Docker images. It can prevent the performance issues caused by pulling large Docker images down from the network.

Note

  • You can use a Docker layer cache in the Linux environment only.
  • The privileged flag must be set so that your project has the required Docker permissions.
  • You should consider the security implications before you use a Docker layer cache.

Custom cache mode caches directories you specify in the buildspec file. This mode is a good choice if your build scenario is not suited to one of the other two local cache modes. If you use a custom cache:

  • Only directories can be specified for caching. You cannot specify individual files.
  • Symlinks are used to reference cached directories.
  • Cached directories are linked to your build before it downloads its project sources. Cached items are overridden if a source item has the same name. Directories are specified using cache paths in the buildspec file.

To use source cache mode

In the build project configuration, under Artifacts, expand Additional Configuration. For Cache type, choose Local. Select Source cache, as shown here.

To use Docker layer cache mode

In the build project configuration, under Artifacts, expand Additional Configuration. For Cache type, choose Local. Select Docker layer cache, as shown here.

Under Privileged, select Enable this flag if you want to build Docker images or want your builds to get elevated privileges. This grants elevated privileges to the Docker process running on the build host.

To use custom cache mode

In your buildspec file, specify the cache path, as shown here.

In the build project configuration, under Artifacts, expand Additional Configuration. For Cache type, choose Local. Select Custom cache, as shown here.


version: 0.2
phases:
  pre_build:
    commands:
      - echo "Enter pre_build commands"
  build:
    commands:
      - echo "Enter build commands"
      
cache:
  paths:
    - '/root/.m2/**/*'
    - '/root/.npm/**/*'
    - 'build/**/*'

Conclusion

We hope you find the information in this post helpful. If you have feedback, please leave it in the Comments section below. If you have questions, start a new thread on the AWS CodeBuild forum or contact AWS Support.


Using Git with AWS CodeCommit Across Multiple AWS Accounts

Post Syndicated from Steve Engledow original https://aws.amazon.com/blogs/devops/using-git-with-aws-codecommit-across-multiple-aws-accounts/

I use AWS CodeCommit to host all of my private Git repositories. My repositories are split across several AWS accounts for different purposes: personal projects, internal projects at work, and customer projects.

The CodeCommit documentation shows you how to configure and clone a repository from one place, but in this blog post I want to share how I manage my Git configuration across multiple AWS accounts.

Background

First, I have profiles configured for each of my AWS environments. I connect to some of them using IAM user credentials and others by using cross-account roles.

I intentionally do not have any credentials associated with the default profile. That way I must always be sure I have selected a profile before I run any AWS CLI commands.

Here’s an anonymized copy of my ~/.aws/config file:

[profile personal]
region = eu-west-1
aws_access_key_id = ABCDEFGHIJKLMNOPQRST
aws_secret_access_key = uvwxyz0123456789abcdefghijklmnopqrstuvwx

[profile work]
region = us-east-1
aws_access_key_id = ABCDEFGHIJKLMNOPQRST
aws_secret_access_key = uvwxyz0123456789abcdefghijklmnopqrstuvwx

[profile customer]
region = eu-west-2
source_profile = work
role_arn = arn:aws:iam::123456789012:role/CrossAccountPowerUser

If I am doing some work in one of those accounts, I run export AWS_PROFILE=work and use the AWS CLI as normal.

The problem

I use the Git credential helper so that the Git client works seamlessly with CodeCommit. However, because I use different profiles for different repositories, my use case is a little more complex than the average.

In general, to use the credential helper, all you need to do is place the following options into your ~/.gitconfig file, like this:

[credential]
    helper = !aws codecommit credential-helper $@
    UseHttpPath = true

I could make this work across accounts by setting the appropriate value for AWS_PROFILE before I use Git in a repository, but there is a much neater way to deal with this situation using a feature released in Git version 2.13, conditional includes.

A solution

First, I separate my work into different folders. My ~/code/ directory looks like this:

code
    personal
        repo1
        repo2
    work
        repo3
        repo4
    customer
        repo5
        repo6

Using this layout, each folder that is directly underneath the code folder has different requirements in terms of configuration for use with CodeCommit.

Solving this has two parts; first, I create a .gitconfig file in each of the three folder locations. The .gitconfig files contain any customization (specifically, configuration for the credential helper) that I want in place while I work on projects in those folders.

For example:

[user]
    # Use a custom email address
    email = work@example.com

[credential]
    # Note the use of the --profile switch
    helper = !aws --profile work codecommit credential-helper $@
    UseHttpPath = true

I also make sure to specify the AWS CLI profile to use in the .gitconfig file which means that, when I am working in the folder, I don’t need to set AWS_PROFILE before I run git push, etc.

Secondly, to make use of these folder-level .gitconfig files, I need to reference them in my global Git configuration at ~/.gitconfig

This is done through the includeIf section. For example:

[includeIf "gitdir:~/code/personal/"]
    path = ~/code/personal/.gitconfig

This example specifies that if I am working with a Git repository located anywhere under ~/code/personal/, Git should load additional configuration from ~/code/personal/.gitconfig. That additional file specifies the appropriate credential helper invocation with the corresponding AWS CLI profile selected, as detailed earlier.

The contents of the included file are treated as if they were inserted into the main .gitconfig file at the location of the includeIf section. This means that the included configuration can only override configuration that appears earlier in the file.
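Putting it all together, my global ~/.gitconfig just needs one includeIf section per top-level folder. Here is a minimal sketch assuming the ~/code/ layout shown above:

[includeIf "gitdir:~/code/personal/"]
    path = ~/code/personal/.gitconfig

[includeIf "gitdir:~/code/work/"]
    path = ~/code/work/.gitconfig

[includeIf "gitdir:~/code/customer/"]
    path = ~/code/customer/.gitconfig

With this in place, running git push inside, say, ~/code/work/repo3 picks up the work profile automatically, with no need to set AWS_PROFILE first.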

I hope you find this approach useful. If you have any questions or feedback, please feel free to leave them in the comments.

Learn about hourly-replication in Server Migration Service and the ability to migrate large data volumes

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/learn-about-hourly-replication-in-server-migration-service-and-the-ability-to-migrate-large-data-volumes/

This post courtesy of Shane Baldacchino, AWS Solutions Architect

AWS Server Migration Service (AWS SMS) is an agentless service that makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations.

In my previous blog posts, we introduced how you can use AWS Server Migration Service (AWS SMS) to migrate a popular commercial off the shelf software, WordPress into AWS.

For details and a walkthrough of how to set up AWS Server Migration Service, please see the following blog posts for Hyper-V and VMware hypervisors, which will guide you through the high-level process.

In this article, we are going to step it up a few notches and look past the common migration of off-the-shelf software. We'll provide a pattern for using AWS SMS and some of its recently launched features, specifically compression and resiliency for replication jobs and support for data volumes greater than 4 TB, to migrate a more complicated environment.

This post covers the migration of a complex, internally developed eCommerce system with a polyglot architecture. It is made up of a Microsoft Windows IIS presentation tier, a Tomcat application tier, and a Microsoft SQL Server database tier. All workloads run on-premises as virtual machines in a VMware vCenter 5.5 and ESX 5.5 environment.

This theoretical customer environment has various business and infrastructure requirements.

Application downtime: During any migration activities, the application cannot be offline for more than 2 hours.
Licensing: The customer has renewed their Microsoft SQL Server license for an additional 3 years and holds the License Mobility with Software Assurance option for Microsoft SQL Server, and therefore wants to take advantage of AWS BYOL licensing for Microsoft SQL Server and Microsoft Windows Server.
Large data volumes: The Microsoft SQL Server database engine (.mdf, .ldf, and .ndf files) consumes 11 TB of storage.

Walkthrough

Key elements of this migration process are identical to the process outlined in my previous blog posts. For more information, please see the posts for Hyper-V and VMware hypervisors, but at a high level you will need to:

• Establish your AWS environment.
• Download the SMS Connector from the AWS Management Console.
• Configure AWS SMS and Hypervisor permissions.
• Install and configure the SMS Connector appliance.
• Import your virtual machine inventory and create replication jobs.
• Launch your Amazon EC2 instances and associated NACLs, security groups, and AWS Elastic Load Balancers.
• Change your DNS records to resolve the custom application to an AWS Elastic Load Balancer.

Before you start, ensure that your source systems' OS and vCenter version are supported by AWS. For more information, see the Server Migration Service FAQ.

Planning the Migration

Once you have downloaded and configured the AWS SMS connector with your given Hypervisor you can get started in creating replication jobs.

The artifacts derived from our replication jobs with AWS SMS will be AMIs (Amazon Machine Images). Because this is a three-tier architecture with commonality between servers (multiple application and web servers performing the same function), we do not need to replicate each server individually. Instead, we can leverage a common AMI per tier and create three replication jobs:

1. Microsoft SQL Server – Database Tier
2. Ubuntu Server – Application Tier
3. IIS Web server – Webserver Tier

Performing the Replication

After validating that the SMS Connector is in a “HEALTHY” state, import your server catalog from your Hypervisor to AWS SMS. This process can take up to a minute.

Select the three servers (Microsoft SQL Server, Ubuntu Server, IIS web server) to migrate and choose Create replication job. AWS SMS now supports creating replication jobs with frequencies as short as 1 hour, so to ensure that our business RTO (Recovery Time Objective) of 2 hours is met, we will create our replication jobs with a frequency of 1 hour. This minimizes the risk of any delta updates not completing during the cutover window.

Given the business's existing licensing investment in Microsoft SQL Server, they will leverage the BYOL (Bring Your Own License) offering when creating the Microsoft SQL Server replication job.
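The same job can also be created from the AWS CLI. The sketch below assumes a server ID from your imported catalog and the default AWS SMS service role; both values are placeholders for illustration:

# Hourly, BYOL replication job for the SQL Server database tier
aws sms create-replication-job \
    --server-id s-1234567890abcdef0 \
    --seed-replication-time "2019-06-01T00:00:00Z" \
    --frequency 1 \
    --license-type BYOL \
    --role-name sms \
    --description "SQL Server database tier"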

The AWS SMS console guides you through the process. The time that the initial replication task takes to complete is dependent on available bandwidth and the size of your virtual machines.

After the initial seed replication, network bandwidth requirement is minimized as AWS SMS replicates only incremental changes occurring on the VM.

The progress updates from AWS SMS are automatically sent to AWS Migration Hub so that you can track tasks in progress.

AWS Migration Hub provides a single location to track the progress of application migrations across multiple AWS and partner solutions. In this post, we are using AWS SMS as a mechanism to migrate the virtual machines (VMs) and track them via AWS Migration Hub.

Migration Hub and AWS SMS are both free. You pay only for the cost of the individual migration tools that you use, and any resources being consumed on AWS.

The dashboard reflects any status changes that occur in the linked services. You can see from the following image that two servers are complete whilst another is in progress.

Using Migration Hub, you can view the migration progress of all applications. This allows you to quickly get progress updates across all of your migrations, easily identify and troubleshoot any issues, and reduce the overall time and effort spent on your migration projects.

Testing Your Replicated Instances

Thirty hours after creating the replication jobs, notification was received via Amazon SNS (Simple Notification Service) that all three replication jobs had completed. During the 30-hour replication window, the customer's ISP experienced downtime and sporadic flapping of the link, but this was negated by the network auto-recovery feature of AWS SMS, which recovered and resumed replication without any intervention.

With the replication tasks complete, the artifact created by AWS SMS is a custom AMI that you can use to deploy an EC2 instance. Follow the usual process to launch your EC2 instance, noting that you may need to replace any host-based firewalls with security groups and NACLs, and any hardware-based load balancers with Elastic Load Balancing, to achieve fault tolerance, scalability, performance, and security.

Because this environment is a three-tier architecture with commonality between tiers (application and presentation tiers), we can create an ASG (Auto Scaling group) during the EC2 launch process to ensure that deployed capacity matches user demand. The ASG will be based on the custom AMIs generated by the replication jobs.

When you create an EC2 instance, ensure that you pick the most suitable EC2 instance type and size to match your performance and cost requirements.

While your new EC2 instances are a replica of your on-premises VMs, you should always validate that applications are functioning. How you do this differs on an application-by-application basis. You can use a combination of approaches, such as editing a local hosts file and testing your application, or connecting over SSH, RDP, or Telnet.

For our Windows Presentation and database tier, I can RDP in to my systems and validate IIS 8.0 and other services are functioning correctly.

For our Ubuntu Application tier, we can SSH in to perform validation.

After validating each individual server, we can continue to test the application end to end. This is possible because our systems have been instantiated inside a VPC with no route back to our on-premises environment, which allows us to test functionality without the risk of communication back to our production application.

After validation of the systems, it is time to cut over. Plan your runbook accordingly to ensure you either eliminate or minimize application disruption.

Cutting Over

Because the replication window specified in the AWS SMS replication jobs was 1 hour, hourly AMIs were created providing delta updates since the initial seed replication. The customer verified the stack by executing the previously created runbook using the latest AMIs, and verified that the application behaved as expected.

After another round of testing, the customer decided to plan the cutover for the coming Saturday at midnight, announcing a two-hour scheduled maintenance window. During the cutover window, the customer took the application offline, shut down the Microsoft SQL Server instance, and performed an on-demand sync of all systems.

This generated a new versioned AMI for each server that contained all of the on-premises data. The customer then executed the runbook using the new AMIs. For the application and presentation tiers, these AMIs were used in the ASG configuration. After application validation, Amazon Route 53 was updated to resolve the application CNAME to the Application Load Balancer CNAME used to load balance traffic to the fleet of IIS servers.

Based on the TTL (Time To Live) of your Amazon Route 53 DNS zone file, end users slowly resolve the application to AWS, in this case within 300 seconds. Once this TTL period had elapsed, the customer brought their application back online and exited their maintenance window, with time to spare.
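For reference, this kind of DNS cutover can be scripted with the AWS CLI. The hosted zone ID, record name, and load balancer DNS name below are placeholders for illustration:

aws route53 change-resource-record-sets \
    --hosted-zone-id Z1EXAMPLE12345 \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "shop.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "my-alb-1234567890.us-east-1.elb.amazonaws.com"}]
        }
      }]
    }'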

After modifying the Amazon Route 53 Zone Apex, the physical topology now looks as follows with traffic being routed to AWS.

After validation of a successful migration the customer deleted their AWS Server Migration Service replication jobs and began planning to decommission their on-premises resources.

Summary

This post provides an example pattern for migrating a complex, custom polyglot environment into AWS using AWS migration services, specifically leveraging many of the new features of AWS SMS.

Many architectures can be extended to use the inherent benefits of AWS with little effort. For example, this article illustrated how AWS migration services can be used to migrate complex environments into AWS, and how native AWS services such as Amazon CloudWatch metrics can then drive Auto Scaling policies to ensure deployed capacity matches user demand, while technologies such as Application Load Balancers can be used to achieve fault tolerance and scalability.

Think big and get building!


Tinkernut’s Beginners’ Guide to SSH

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/tinkernuts-beginners-guide-to-ssh/

We often mention SSH (Secure Shell) when we talk about headless Raspberry Pi projects — projects that involve accessing a Pi remotely. If you’re a coding creative who doesn’t know what SSH involves, we’ve got you covered with our comprehensive online guide to using SSH with your Raspberry Pi.
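If you just want the one-line version: once SSH is enabled on your Pi, connecting from another computer's terminal looks like this (the hostname and IP address are examples; yours may differ):

# Using the Pi's default hostname
ssh pi@raspberrypi.local

# Or using its IP address
ssh pi@192.168.1.42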

SSH in terminal

You know who’s also got you covered? YouTube favourite Tinkernut, with his great beginners’ guide to SSH, what it is, why we use it, and how you can use it with your device:

Beginners Guide To SSH

Me: “I have a question about controlling another computer over the internet” You: “SSH” Me: “Don’t tell me to ‘shhh’, I’m asking you a question”. Ok, enough with the play on words. If you’ve ever wanted to securely control another computer over the internet, then you’ve probably heard of SSH.

SSHhhhhhhhhh

Between our guide and Tinkernut’s video, I don’t think I need to add anything else on the subject.

So here, have this GIF, and have yourself a lovely weekend!

The post Tinkernut’s Beginners’ Guide to SSH appeared first on Raspberry Pi.

Stream Amazon CloudWatch Logs to a Centralized Account for Audit and Analysis

Post Syndicated from David Bailey original https://aws.amazon.com/blogs/architecture/stream-amazon-cloudwatch-logs-to-a-centralized-account-for-audit-and-analysis/

A key component of enterprise multi-account environments is logging. Centralized logging provides a single point of access to all salient logs generated across accounts and regions, and is critical for auditing, security and compliance. While some customers use the built-in ability to push Amazon CloudWatch Logs directly into Amazon Elasticsearch Service for analysis, others would prefer to move all logs into a centralized Amazon Simple Storage Service (Amazon S3) bucket location for access by several custom and third-party tools. In this blog post, I will show you how to forward existing and any new CloudWatch Logs log groups created in the future to a cross-account centralized logging Amazon S3 bucket.

The streaming architecture I use in the destination logging account is a streamlined version of the architecture and AWS CloudFormation templates from the Central logging in Multi-Account Environments blog post by Mahmoud Matouk. This blog post assumes some knowledge of CloudFormation, Python3 and the boto3 AWS SDK. You will need to have or configure an AWS working account and logging account, an IAM access and secret key for those accounts, and a working environment containing Python and the boto3 SDK. (For assistance, see the Getting Started Resource Center and Start Building with SDKs and Tools.) All CloudFormation templates and Python code used in this article can be found in this GitHub Repository.

Setting Up the Solution

You need to create or use an existing S3 bucket for storing CloudFormation templates and Python code for an AWS Lambda function. This S3 bucket is referred to throughout the blog post as the <S3 infrastructure-bucket>. Ensure that the bucket does not block new bucket policies or cross-account access by checking the bucket’s Permissions tab and the Public access settings button.

You also need a bucket policy that allows each account that will stream logs to access the bucket when we create the AWS Lambda function below. To do so, update your bucket policy to include each new account you create, and substitute your <S3 infrastructure-bucket> ARN (shown at the top of the Bucket policy editor page) into this template:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                  "03XXXXXXXX85",
                  "29XXXXXXXX02",
                  "13XXXXXXXX96",
                  "37XXXXXXXX30",
                  "86XXXXXXXX95"
                ]
            },
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::<S3 infrastructure-bucket>",
                "arn:aws:s3:::<S3 infrastructure-bucket>/*"
            ]
        }
    ]
}

Clone a local copy of the CloudFormation templates and Python code from the GitHub repository. Compress CentralLogging.py and lambda.py into a .zip file named AddSubscriptionFilter.zip for the Lambda function we create below. Load these local files into the <S3 infrastructure-bucket>. I recommend using folders called /python for the .py files, /lambdas for the AddSubscriptionFilter.zip file, and /cfn for the CloudFormation templates.

Multi-Account Configuration and the Central Logging Account

One form of multi-account configuration is the Landing Zone offering, which provides a core logging account for storing all logs for auditing. I use this account configuration as an example in this blog post. Initially, the Landing Zone setup creates several stack sets and resources, including roles, security groups, alarms, lambda functions, a cloud trail stream and an S3 bucket.

If you are not using a Landing Zone, create an appropriately named S3 bucket in the account you have chosen as a logging account. This S3 bucket will be referred to later as the <LoggingS3Bucket>. To mimic what the Landing Zone calls its logging bucket, you can use the format aws-landing-zone-logs-<Account Number><Region>, or simply pick an appropriate name for the centralized logging location. In a production environment, remember that it is critical to lock down the access to logging resources and the permissions allowed within the account to prevent deletion or tampering with the logs.
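If you are creating the bucket yourself, a single CLI call is enough. The account number and region in the bucket name below are placeholders following the Landing Zone naming convention described above:

aws s3 mb s3://aws-landing-zone-logs-123456789012us-east-1 --region us-east-1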


Figure 1 – Initial Landing Zone logging account resources

The S3 bucket – aws-landing-zone-logs-<Account Number><Region> is the most important resource created by the stack-sets for logging purposes. It contains all of the logs streamed to it from all of the accounts. Initially, the Landing Zone only sends the AWS CloudTrail and AWS Config logs to this S3 bucket.

In order to send all of the other CloudWatch Logs that are necessary for auditing, we need to add a destination and streaming mechanism to the logging account.

Logging Account Infrastructure

The additional infrastructure required in the central logging account provides a destination for the log group subscription filters and a stream for log events that are sent from all accounts and appropriate regions to load them into the <LoggingS3Bucket> repository. The selection of these particular AWS resources is important, because Kinesis Data Streams is the only resource currently supported as a destination for cross-account CloudWatch Logs subscription filters.

The centralLogging.yml CloudFormation template automates the creation of the entire required infrastructure in the core logging account. Make sure to run it in each of the regions in which you need to centralize logs. The log group subscription filter and destination regions must match in order to successfully stream the logs.

Installation Instructions:

  1. Modify the centralLogging.yml template to add your account numbers for all of the accounts you want to stream logs from into the DestinationPolicy where you see the <AccountNumberHere> placeholders. Remove any unused placeholders.
  2. In the same DestinationPolicy, modify the final arn statement, replacing <region> with the region it will be run in (e.g., us-east-1), and the <logging account number> with the account number of the logging account where this template is to be run.
  3. Log in to the core logging account and access the AWS management console using administrator credentials.
  4. Navigate to CloudFormation and click the Create Stack button.
  5. Select Specify an Amazon S3 template URL and enter the Link for the centralLogging.yml template found in the <S3 infrastructure-bucket>.
  6. Enter a stack name, such as CentralizedLogging, and the one parameter called LoggingS3Bucket. Enter the ARN of the logging bucket: arn:aws:s3:::<LoggingS3Bucket>. This can be obtained by opening the S3 console, clicking the bucket icon next to this bucket, and then clicking the Copy Bucket ARN button.
  7. Skip the next page, acknowledge the creation of IAM resources, and Create the stack.
  8. When the stack completes, select the stack name to go to stack details and open the Outputs. Copy the value of the DestinationArnExport, which will be needed as a parameter for the script in the next section.

Upon successful creation of this CloudFormation stack, the following new resources will be created:

  • Amazon CloudWatch Logs Destination
  • Amazon Kinesis Stream
  • Amazon Kinesis Firehose Stream
  • Two AWS Identity and Access Management (IAM) Roles

Figure 2 – New infrastructure required in the centralized logging account

Because the Landing Zone is a multi-account offering, the Log Destination is required to be the destination for all subscription filters. The key feature of the destination is its DestinationPolicy. Whenever a new account is added to the environment, its account number needs to be added to this DestinationPolicy in order for logs to be sent to it from the new account. Add the new account number in the centralLogging.yml CloudFormation template, and run an update in CloudFormation to complete the addition. A sample Destination Policy looks like this:

{
  "Version" : "2012-10-17",
  "Statement" : [
    {
      "Effect" : "Allow",
      "Principal" : {
        "AWS" : [
          "03XXXXXXXX85",
          "29XXXXXXXX02",
          "13XXXXXXXX96",
          "37XXXXXXXX30",
          "86XXXXXXXX95"
        ]
      },
      "Action" : "logs:PutSubscriptionFilter",
      "Resource" : "arn:aws:logs:<Region>:<LoggingAccountNumber>:destination:CentralLogDestination"
    }
  ]
}

The Kinesis stream gets records from the Logs Destination and holds them for 48 hours. Kinesis streams scale by adding shards, and the CloudFormation template starts the stream with two shards. Because all CloudWatch log objects will flow through this stream, you need to monitor it as instances and applications are deployed into the accounts, and it will need to be scaled up at some point. To scale, change the number of shards (ShardCount) in the Kinesis stream resource (KinesisLoggingStream) to the required number. See the Amazon Kinesis Data Streams FAQ documentation to confirm the capacity and throughput of each shard.
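For orientation, the stream definition in the template looks roughly like this (a sketch based on the values described above; the template in the GitHub repository is the authoritative version):

KinesisLoggingStream:
  Type: AWS::Kinesis::Stream
  Properties:
    RetentionPeriodHours: 48   # matches the 48-hour retention described above
    ShardCount: 2              # increase this value to scale the stream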

Kinesis Firehose provides a simple and efficient mechanism to retrieve the records from the Kinesis Stream and load them into the <LoggingS3Bucket> repository. It uses the CloudFormation template parameter to know where to load the logs. All of the CloudWatch logs loaded by Firehose will be under the prefix /CentralizedAccountsLog. The buffering hints for Firehose suggest that the logs be loaded every 5 minutes or 50 MB. Leave the CompressionFormat UNCOMPRESSED, since the logs are already compressed.
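Similarly, the Firehose delivery stream ties the Kinesis stream to the logging bucket. The following is a sketch using the buffering, prefix, and compression settings described above; resource and parameter names are illustrative except where the post names them:

LoggingFirehose:
  Type: AWS::KinesisFirehose::DeliveryStream
  Properties:
    DeliveryStreamType: KinesisStreamAsSource
    KinesisStreamSourceConfiguration:
      KinesisStreamARN: !GetAtt KinesisLoggingStream.Arn
      RoleARN: !GetAtt FirehoseDeliveryRole.Arn
    ExtendedS3DestinationConfiguration:
      BucketARN: !Ref LoggingS3Bucket        # the <LoggingS3Bucket> ARN parameter
      Prefix: CentralizedAccountsLog/
      BufferingHints:
        IntervalInSeconds: 300               # load every 5 minutes...
        SizeInMBs: 50                        # ...or every 50 MB
      CompressionFormat: UNCOMPRESSED        # logs arrive already compressed
      RoleARN: !GetAtt FirehoseDeliveryRole.Arn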

There are two AWS Identity and Access Management (IAM) roles created for this infrastructure. The first, CWLtoKinesisRole, is used by the destination to allow CloudWatch Logs from all regions to put log records into the Kinesis stream, and to pass the role. The second, FirehoseDeliveryRole, allows Firehose to get the log records from the Kinesis stream and load them into the S3 logging bucket.

Once you have successfully created this infrastructure, the next step is to add the subscription filters to existing log groups.

Adding Subscription Filters to Existing Log Groups

The next step in the process is to add subscription filters for the Log Destination in the core logging account to all existing log groups. Several log groups are created by the Landing Zone, or you may have created them by using various AWS services or by logging application events. For every new AWS account, you will need to run the init_account_central_logging.py Python script to add the subscription filters to all the existing log groups.

The init_account_central_logging.py script takes one parameter, which is the Log Destination ARN. Use the Destination ARN you copied from the stack details output in the previous section as the parameter to the script.

The init_account_central_logging.py script first adds the Destination ARN to the AWS Systems Manager Parameter Store so that the core logic that creates the subscription filters can use it. The script then gets a list of all existing log groups, iterates over them, deletes any existing subscription filter (because there can only be one subscription filter per log group, and attempting to create another would cause an error), and then adds a new subscription filter pointing to the Log Destination in the centralized logging account.
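In outline, the core of that logic looks something like the boto3 sketch below. This is a simplified illustration rather than the script from the repository: the parameter name, filter name, and lack of error handling are assumptions, and the real code also factors the shared logic into CentralLogging.py.

import boto3

logs = boto3.client('logs')
ssm = boto3.client('ssm')

def init_central_logging(destination_arn):
    # Store the destination ARN so the Lambda function can reuse it later
    ssm.put_parameter(Name='/central-logging/log-destination-arn',
                      Value=destination_arn,
                      Type='String',
                      Overwrite=True)

    # Walk every existing log group in the account
    paginator = logs.get_paginator('describe_log_groups')
    for page in paginator.paginate():
        for group in page['logGroups']:
            name = group['logGroupName']

            # Only one subscription filter is allowed per log group,
            # so remove any existing filter before adding ours
            existing = logs.describe_subscription_filters(logGroupName=name)
            for f in existing['subscriptionFilters']:
                logs.delete_subscription_filter(logGroupName=name,
                                                filterName=f['filterName'])

            # Point the log group at the central Log Destination
            logs.put_subscription_filter(logGroupName=name,
                                         filterName='Logs (CentralLogDestination)',
                                         filterPattern='',
                                         destinationArn=destination_arn)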


Figure 3 – Run script to add subscription filters to existing log groups

Installation Instructions:

  1. Make sure that Python and boto3 are installed and accessible in the client computer – consider loading into a virtual environment to keep dependencies separate.
  2. Set the AWS_PROFILE environment variable to the appropriate AWS account profile.
  3. Log in to the proper account, and obtain administrator or other credentials with appropriate permissions, and add the account access key and secret key to the AWS credentials file.
  4. Set the region and output in the AWS config file.
  5. Download and place two python files into a working directory: init_account_central_logging.py and CentralLogging.py.
  6. Run the script using the command python3 ./init_account_central_logging.py -d <LogDestinationArn>.

Use the AWS Management Console to validate the results. Navigate to CloudWatch Logs and view all of the log groups. Each one should now have a subscription filter named “Logs (CentralLogDestination).”

Automatically Adding Subscription Filters to New Log Groups

The final step to set up the centralized log streaming capability is to run a CloudFormation script to create resources that automatically add subscription filters to new log groups. New log groups are created in accounts by resources (e.g., Lambda functions) and by applications. A subscription filter must be added to every new log group in order to deliver its log events to the logging account.

The AddSubscriptionFilter.yml CloudFormation template contains resources to automatically add subscription filters.

First, it creates a role that allows it to access the lambda code that is stored in a centralized location – the <S3 infrastructure-bucket>. (Remember that its S3 bucket policy must contain this account number in order to access the lambda code.)

Second, the template creates the AddSubscriptionLambda, which reuses the core logic shared by the script in the last section. It retrieves the proper destination from the Parameter Store, deletes any existing subscription filter from the log group, and adds the new subscription filter to the newly created log group. This lambda function is triggered by a CloudWatch event rule.

Third, the CloudFormation creates a Lambda Permission, which allows the event trigger to invoke this particular lambda.

Finally, the CloudFormation template creates an Amazon CloudWatch Events Rule that acts as a trigger for the lambda. This rule looks for an event coming from CloudTrail that signals the creation of a new log group. For each create log group event found, it invokes the AddSubscriptionLambda.
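The event pattern for that rule matches the CreateLogGroup API call recorded by CloudTrail. A sketch of what it looks like follows; the AddSubscriptionFilter.yml template in the repository is the authoritative version:

{
  "source": ["aws.logs"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["logs.amazonaws.com"],
    "eventName": ["CreateLogGroup"]
  }
}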


Figure 4 – Infrastructure to automatically add a subscription filter to a new log group and the log flow to the centralized account

Installation Instructions:

(Important note: This functionality requires that the LogDestination parameter be properly set to the LogDestinationArn in the Parameter Store before the Lambda will run successfully. The script in the previous step sets this parameter, or it can be done manually. Make certain that the destination specified is in this same region.)

  1. Ensure that the <S3 infrastructure-bucket> has the AddSubscriptionFilter.zip file containing the Python code files lambda.py and CentralLogging.py.
  2. Log in to the appropriate account, and access using administrator credentials. Make sure that the region is set properly.
  3. Navigate to Cloudformation and click the Create Stack button.
  4. Select Specify an Amazon S3 template URL and enter the link for the AddSubscriptionFilter.yml template found in the <S3 infrastructure-bucket>.
  5. Enter a stack name, such as AddSubscription.
  6. Enter the two parameters: the <S3 infrastructure-bucket> name (not the ARN), and the folder and file name (e.g., lambdas/AddSubscriptionFilter.zip).
  7. Skip the next page, acknowledge the creation of IAM resources, and Create the stack.

In order to test that the automated addition of subscription filters is working properly, use the AWS Management Console to navigate to CloudWatch Logs and click the Actions button. Select Create New Log Group and enter a random log group name, such as “testLogGroup.” When first created, the log group will not have a subscription filter. After a few minutes, refresh the display and you should see the new subscription filter on the log group. At this point, you can delete the test log group.

New Account Setup

As a reminder, when you add new accounts that you want to have stream log events to the central logging account, you will need to configure the new accounts in two places in order for this functionality to work properly.

First, add the account number to the LoggingDestination property DestinationPolicy in the centralLogging.yml template. Then, update the CloudFormation stack.

Second, modify the bucket policy for the <S3 infrastructure-bucket>. Select the Permissions tab, then the Bucket Policy button. Add the new account to allow cross-account access to the lambda code by adding the line “arn:aws:iam::<new account number>:root” to the Principal.AWS list.
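For example, after adding a new account, the Principal in the bucket policy would look something like this (the existing and new account numbers are illustrative):

"Principal": {
    "AWS": [
        "03XXXXXXXX85",
        "29XXXXXXXX02",
        "arn:aws:iam::<new account number>:root"
    ]
}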

Conclusion

Centralized logging is a key component in enterprise multi-account architectures. In this blog post, I have built on the central logging in multi-account environments streaming architecture to automatically subscribe all CloudWatch Logs log groups to send all log events to an S3 bucket in a designated logging account. The solution uses a script to add subscription filters to existing log groups, and a lambda function to automatically place a subscription filter on all new log groups created within the account. This can be used to forward application logs, security logs, VPC flow logs, or any other important logs that are required for audit, security, or compliance purposes.

About the author

David Bailey is a Cloud Infrastructure Architect with AWS Professional Services specializing in serverless application architecture, IoT, and artificial intelligence. He has spent decades architecting and developing complex custom software applications, as well as teaching internationally on object-oriented design, expert systems, and neural networks.


Getting started with your Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/getting-started-raspberry-pi/

Here on the Raspberry Pi blog, we often share impressive builds made by community members who have advanced making and coding skills. But what about those of you who are just getting started?

Getting started with Raspberry Pi

For you, we’ve been working hard to update and polish our Getting started resources, including a brand-new video to help you get to grips with your new Pi.

Getting started with Raspberry Pi

Whether you’re new to electronics and the Raspberry Pi, or a seasoned pro looking to share your knowledge and skills with others, sit back and watch us walk you through the basics of setting up our powerful little computer.

How to set up your Raspberry Pi || Getting started with #RaspberryPi

Learn how to set up your Raspberry Pi for the first time, from plugging in peripherals to loading Raspbian.

We’ve tried to make this video as easy to follow as possible, with only the essential explanations and steps.

getting started with raspberry pi

As with everything we produce, we want this video to be accessible to the entire world, so if you can translate its text into another language, please follow this link to submit your translation directly through YouTube. You can also add translations to our other YouTube videos here! As a thank you, we’ll display your username in the video descriptions to acknowledge your contributions.

New setup guides and resources

Alongside our shiny new homepage, we've also updated our Help section to reflect our newest tech and demonstrate the easiest way for beginners to start their Raspberry Pi journey. We're now providing a first-time setup guide, and also a walk-through for using your Raspberry Pi that shows you all sorts of things you can do with it. And with guides to our official add-on devices and a troubleshooting section, our updated Help page is your one-stop shop for getting the most out of your Pi.

getting started with raspberry pi

For parents and teachers, we offer guides on introducing Raspberry Pi and digital making to your children and students. And for those of you who are visual learners, we’ve curated a collection of our videos to help you get making.

As with our videos, we’re looking for people whose first language isn’t English to help us translate our resources. If you’re able to donate some of your time to support this cause, please sign up here.

The forums

We’re very proud of our forum community. Since the birth of the Raspberry Pi, our forums have been the place to go for additional support, conversation, and project bragging.

Raspberry Pi forums

If your question isn’t answered on our Help page, there’s no better place to go than the forums. Nine times out of ten, your question will already have been asked and answered there! And if not, then our friendly forum community will be happy to share their wealth of knowledge and help you out.

Events and clubs

Raspberry Pi and digital making enthusiasts come together across the world at various events and clubs, including Raspberry Jams, Code Club and CoderDojo, and Coolest Projects. These events are perfect for learning more about how people use Raspberry Pi and other technologies for digital making — as a hobby and as a tool for education.

getting started with raspberry pi

Keep up to date

To keep track of all the goings-on of the Raspberry Pi Foundation, be sure to follow us on Twitter, Facebook, and Instagram, and sign up to our Raspberry Pi Weekly newsletter and the monthly Raspberry Pi LEARN education newsletter.

The post Getting started with your Raspberry Pi appeared first on Raspberry Pi.

Make your own custom LEDs using hot glue!

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/homemade-custom-leds-using-hot-glue/

Tired of using the same old plastic LEDs in your projects? It’s time to grab a hot glue gun and some confectionary moulds to create your own custom LEDs!

make your own custom LEDs for Raspberry Pi

Blinky LEDs!

Lighting up an LED is the standard first step into the world of digital making with a Raspberry Pi. For example, at our two-day Picademy training events, budding Raspberry Pi Certified Educators are shown the ropes of classroom digital making by learning how to connect an LED to a Pi and use code to make it blink.
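If you've never tried it, that first blink really is only a few lines of Python. Here's a minimal sketch using the gpiozero library, assuming the LED's long leg is wired through a resistor to GPIO 17 and its short leg to a ground pin:

from gpiozero import LED
from signal import pause

led = LED(17)   # the GPIO pin the LED's long leg is connected to
led.blink()     # blinks on and off once per second by default

pause()         # keep the script running so the blinking continues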

Anastasia Hanneken on Twitter

Blinking LED Light @Raspberry_Pi #picademy! https://t.co/zhTODYsBxp

And while LEDs come in various sizes, they’re all pretty much the same shape: small, coloured domes of plastic with pointy legs that always manage to draw blood when I grab them from the depths of my maker drawer.

So why not do away with the boring and make some new LEDs based on your favourite characters and shapes?

Making custom LEDs with a whole lotta hot glue

The process of creating your own custom LEDs is pretty simple, but it’s not without its risk — namely, burnt fingertips and sizzled LEDs! So be careful when making these, and supervise young children throughout the process.

The moulds

I used flexible ice cube trays, but you could also use chocolate moulds. As long as the mould is flexible, this should work — I haven’t tried hard plastic moulds, so I can’t make any promises for those. Also be sure to test whether your mould will withstand the heat of the hot glue!

Check your LEDs

Before you submerge your LEDs in hot glue, check to make sure they work. The easiest way to do this is to set up a testing station using a Pi, a breadboard, some jumper wires, and a resistor. To save having to write code, I used the 3V3 pin and a ground pin.

make your own custom LEDs for Raspberry Pi

Remember, the shorter of the two legs connects to the ground pin, while the longer goes to 3V3. If you mix this up, you may end up with a fried LED like this poor LEGO man.

make your own custom LEDs for Raspberry Pi

Everything isn’t awesome.

Once you’ve confirmed that your LED works, bend its legs to make it easier to insert it into the glue.

Glue

Next, grab a hot glue gun and fill a mould. The glue will take a while to cool, so you have some time to make sure that all nooks and crannies are filled before you insert an LED.

make your own custom LEDs for Raspberry Pi

Tip: test a corner of your mould with the tip of your glue gun to check how heat-resistant it is. One of my moulds didn’t enjoy heat and began to bubble.

Once your mould is properly filled, push an LED into the glue, holding on to the legs to keep your fingertips safe. Have a wiggle around to find the bottom and sides of your mould and ensure that your LED is in the centre.

make your own custom LEDs for Raspberry Pi

Pick a colour best suited to your mould. You could try using multiple LEDs on larger moulds to introduce more colours!

You may notice that the LED tries to sink a little and the legs begin to drop. Keep an eye out and adjust them if you need to. They’ll stop moving once the glue begins to set.

make your own custom LEDs for Raspberry Pi

These took about ten minutes to cool down.

Be patient

Don’t rush. The hot glue will take time to cool down, especially if you’re using a larger mould like the one for this Stormtrooper helmet.

Custom hot glue LED

Here I used a gumdrop LED, which is larger than your standard maker kit LED.

You'll know that the glue has set when the shape pulls away easily from the mould. It should just pop out when it's ready.

make your own custom LEDs for Raspberry Pi

Pop!

Light it up

Test your new custom LED one more time on your testing rig to ensure you haven’t damaged the connections.

make your own custom LEDs for Raspberry Pi

As with all LEDs, they look better in the dark (and terrible when you try to take a photo of them), so try testing them in a dim room or at night. You could also use a box to create a small testing lab if you’re planning to make a lot of these.

Custom hot glue LED
Custom hot glue LED
Custom hot glue LED

Now it’s your turn

What custom LED would you want to make? How would you use it in your next project? And what other fun hacks have you used to augment tech for your builds?

The post Make your own custom LEDs using hot glue! appeared first on Raspberry Pi.

Announcing Local Build Support for AWS CodeBuild

Post Syndicated from Karthik Thirugnanasambandam original https://aws.amazon.com/blogs/devops/announcing-local-build-support-for-aws-codebuild/

Today, we’re excited to announce local build support in AWS CodeBuild.

AWS CodeBuild is a fully managed build service. There are no servers to provision and scale, or software to install, configure, and operate. You just specify the location of your source code, choose your build settings, and CodeBuild runs build scripts for compiling, testing, and packaging your code.

In this blog post, I’ll show you how to set up CodeBuild locally to build and test a sample Java application.

By building an application on a local machine you can:

  • Test the integrity and contents of a buildspec file locally.
  • Test and build an application locally before committing.
  • Identify and fix errors quickly from your local development environment.

Prerequisites

In this post, I am using AWS Cloud9 IDE as my development environment.

If you would like to use AWS Cloud9 as your IDE, follow the express setup steps in the AWS Cloud9 User Guide.

The AWS Cloud9 IDE comes with Docker and Git already installed. If you are going to use your laptop or desktop machine as your development environment, install Docker and Git before you start.

Steps to build CodeBuild image locally

Run git clone https://github.com/aws/aws-codebuild-docker-images.git to download this repository to your local machine.

$ git clone https://github.com/aws/aws-codebuild-docker-images.git

Let's build a local CodeBuild image for the JDK 8 environment. The Dockerfile for JDK 8 is present in aws-codebuild-docker-images/ubuntu/java/openjdk-8.

Edit the Dockerfile to remove the last line, ENTRYPOINT ["dockerd-entrypoint.sh"], and save the file.

Run cd ubuntu/java/openjdk-8 to change the directory in your local workspace.

Run docker build -t aws/codebuild/java:openjdk-8 . to build the Docker image locally. This command will take a few minutes to complete.

$ cd aws-codebuild-docker-images
$ cd ubuntu/java/openjdk-8
$ docker build -t aws/codebuild/java:openjdk-8 .

Steps to setup CodeBuild local agent

Run the following Docker pull command to download the local CodeBuild agent.

$ docker pull amazon/aws-codebuild-local:latest --disable-content-trust=false

Now you have the local agent image on your machine and can run a local build.

Run the following git command to download a sample Java project.

$ git clone https://github.com/karthiksambandam/sample-web-app.git

Steps to use the local agent to build a sample project

Let’s build the sample Java project using the local agent.

Execute the following Docker command to run the local agent and build the sample web app repository you cloned earlier.

$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=aws/codebuild/java:openjdk-8" -e "ARTIFACTS=/home/ec2-user/environment/artifacts" -e "SOURCE=/home/ec2-user/environment/sample-web-app" amazon/aws-codebuild-local

Note: We need to provide three environment variables, namely IMAGE_NAME, SOURCE, and ARTIFACTS.

IMAGE_NAME: The name of your build environment image.

SOURCE: The absolute path to your source code directory.

ARTIFACTS: The absolute path to your artifact output folder.

When you run the sample project, you get a runtime error that says the YAML file does not exist. This is because a buildspec.yml file is not included in the sample web project. AWS CodeBuild requires a buildspec.yml to run a build. For more information about buildspec.yml, see Build Spec Example in the AWS CodeBuild User Guide.

Let’s add a buildspec.yml file with the following content to the sample-web-app folder and then rebuild the project.

version: 0.2

phases:
  build:
    commands:
      - echo Build started on `date`
      - mvn install

artifacts:
  files:
    - target/javawebdemo.war

$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=aws/codebuild/java:openjdk-8" -e "ARTIFACTS=/home/ec2-user/environment/artifacts" -e "SOURCE=/home/ec2-user/environment/sample-web-app" amazon/aws-codebuild-local

This time your build should be successful. Upon successful execution, look in the /artifacts folder for the final built artifacts.zip file to validate.

Conclusion

In this blog post, I showed you how to quickly set up the CodeBuild local agent to build projects right from your local desktop machine or laptop. As you see, local builds can improve developer productivity by helping you identify and fix errors quickly.

I hope you found this post useful. Feel free to leave your feedback or suggestions in the comments.

CI/CD with Data: Enabling Data Portability in a Software Delivery Pipeline with AWS Developer Tools, Kubernetes, and Portworx

Post Syndicated from Kausalya Rani Krishna Samy original https://aws.amazon.com/blogs/devops/cicd-with-data-enabling-data-portability-in-a-software-delivery-pipeline-with-aws-developer-tools-kubernetes-and-portworx/

This post is written by Eric Han – Vice President of Product Management Portworx and Asif Khan – Solutions Architect

Data is the soul of an application. As containers make it easier to package and deploy applications faster, testing plays an even more important role in the reliable delivery of software. Given that all applications have data, development teams want a way to reliably control, move, and test using real application data or, at times, obfuscated data.

For many teams, moving application data through a CI/CD pipeline, while honoring compliance and maintaining separation of concerns, has been a manual task that doesn’t scale. At best, it is limited to a few applications, and is not portable across environments. The goal should be to make running and testing stateful containers (think databases and message buses where operations are tracked) as easy as with stateless (such as with web front ends where they are often not).

Why is state important in testing scenarios? One reason is that many bugs manifest only when code is tested against real data. For example, we might simply want to test a database schema upgrade but a small synthetic dataset does not exercise the critical, finer corner cases in complex business logic. If we want true end-to-end testing, we need to be able to easily manage our data or state.

In this blog post, we define a CI/CD pipeline reference architecture that can automate data movement between applications. We also provide the steps to follow to configure the CI/CD pipeline.

 

Stateful Pipelines: Need for Portable Volumes

As part of continuous integration, testing, and deployment, a team may need to reproduce a bug found in production against a staging setup. Here, the hosting environment is comprised of a cluster with Kubernetes as the scheduler and Portworx for persistent volumes. The testing workflow is then automated by AWS CodeCommit, AWS CodePipeline, and AWS CodeBuild.

Portworx offers Kubernetes storage that can be used to make persistent volumes portable between AWS environments and pipelines. The addition of Portworx to the AWS Developer Tools continuous deployment for Kubernetes reference architecture adds persistent storage and storage orchestration to a Kubernetes cluster. The example uses MongoDB as the demonstration of a stateful application. In practice, the workflow applies to any containerized application such as Cassandra, MySQL, Kafka, and Elasticsearch.

Using the reference architecture, a developer calls CodePipeline to trigger a snapshot of the running production MongoDB database. Portworx then creates a block-based, writable snapshot of the MongoDB volume. Meanwhile, the production MongoDB database continues serving end users and is uninterrupted.

Without the Portworx integrations, a manual process would require an application-level backup of the database instance that is outside of the CI/CD process. For larger databases, this could take hours and impact production. The use of block-based snapshots follows best practices for resilient and non-disruptive backups.

As part of the workflow, CodePipeline deploys a new MongoDB instance for staging onto the Kubernetes cluster and mounts the second Portworx volume that has the data from production. CodePipeline triggers the snapshot of a Portworx volume through an AWS Lambda function, as shown here


AWS Developer Tools with Kubernetes: Integrated Workflow with Portworx

In the following workflow, a developer is testing changes to a containerized application that calls on MongoDB. The tests are performed against a staging instance of MongoDB. The same workflow applies if changes were on the server side. The original production deployment is scheduled as a Kubernetes deployment object and uses Portworx as the storage for the persistent volume.

The continuous deployment pipeline runs as follows:

  • Developers integrate bug fix changes into a main development branch that gets merged into a CodeCommit master branch.
  • Amazon CloudWatch triggers the pipeline when code is merged into a master branch of an AWS CodeCommit repository.
  • AWS CodePipeline sends the new revision to AWS CodeBuild, which builds a Docker container image with the build ID.
  • AWS CodeBuild pushes the new Docker container image tagged with the build ID to an Amazon ECR registry.
  • Kubernetes downloads the new container (for the database client) from Amazon ECR and deploys the application (as a pod) and staging MongoDB instance (as a deployment object).
  • AWS CodePipeline, through a Lambda function, calls Portworx to snapshot the production MongoDB and deploy a staging instance of MongoDB.
    • Portworx provides a snapshot of the production instance as the persistent storage for the staging MongoDB.
    • The MongoDB instance mounts the snapshot.

At this point, the staging setup mimics a production environment. Teams can run integration and full end-to-end tests, using partner tooling, without impacting production workloads. The full pipeline is shown here.

 

Summary

This reference architecture showcases how development teams can easily move data between production and staging for the purposes of testing. Instead of taking application-specific manual steps, all operations in this CodePipeline architecture are automated and tracked as part of the CI/CD process.

This integrated experience is part of making stateful containers as easy as stateless. With AWS CodePipeline for CI/CD process, developers can easily deploy stateful containers onto a Kubernetes cluster with Portworx storage and automate data movement within their process.

The reference architecture and code are available on GitHub:

● Reference architecture: https://github.com/portworx/aws-kube-codesuite
● Lambda function source code for Portworx additions: https://github.com/portworx/aws-kube-codesuite/blob/master/src/kube-lambda.py

For more information about persistent storage for containers, visit the Portworx website. For more information about AWS CodePipeline, see the AWS CodePipeline User Guide.