Tag Archives: How-to

Create an arcade-style zooming starfield effect | Wireframe issue 13

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/create-an-arcade-style-zooming-starfield-effect-wireframe-issue-13/

Unparalleled depth in a 2D game: Pygame Zero extraordinaire Daniel Pope shows you how to recreate a zooming starfield effect straight out of the eighties arcade classic Gyruss.

The crowded, noisy realm of eighties amusement arcades presented something of a challenge for developers of the time: how can you make your game stand out from all the other ones surrounding it? Gyruss, released by Konami in 1983, came up with one solution. Although it was yet another alien blaster — one of a slew of similar shooters that arrived in the wake of Space Invaders, released in 1978 — it differed in one important respect: its zooming starfield created the illusion that the player’s craft was hurtling through space, and that aliens were emerging from the abyss to attack it.

This made Gyruss an entry in the ‘tube shooter’ genre — one that was first defined by Atari’s classic Tempest in 1981. But where Tempest used a vector display to create a 3D environment where enemies clambered up a series of tunnels, Gyruss used more common hardware and conventional sprites to render its aliens on the screen. Gyruss was designed by Yoshiki Okamoto (who would later go on to produce the hit Street Fighter II, among other games, at Capcom), and was born from his affection for Galaga, a 2D shoot-’em-up created by Namco.

Under the surface, Gyruss is still a 2D game like Galaga, but the cunning use of sprite animation and that zooming star effect created a sense of dynamism that its rivals lacked. The tubular design also meant that the player could move in a circle around the edge of the play area, rather than moving left and right at the bottom of the screen, as in Galaga and other fixed-screen shooters like it. Gyruss was one of the most popular arcade games of its period, probably in part because of its attention-grabbing design.

Here’s Daniel Pope’s example code, which creates a Gyruss-style zooming starfield effect in Python. To get it running on your system, you’ll first need to install Pygame Zero — find installation instructions here, and download the Python code here.

The code sample above, written by Daniel Pope, shows you how a zooming starfield can work in Pygame Zero — and how, thanks to modern hardware, we can heighten the sense of movement in a way that Konami’s engineers couldn’t have hoped to achieve about 30 years ago. The code generates a cluster of stars on the screen, and creates the illusion of depth and movement by redrawing each one a little further along its randomly chosen direction every frame.

At the same time, the stars gradually increase their brightness over time, as if they’re getting closer. As a modern twist, Pope has also added an extra warp factor: holding down the Space bar increases the stars’ velocity, making that zoom into space even more exhilarating.
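If you’d like to experiment before downloading the full listing, here’s a minimal Pygame Zero sketch of the same idea. It isn’t Daniel Pope’s code: the star count, speeds, and warp multiplier are made-up values, but it shows the core trick of moving each star outwards from the centre and brightening it as it travels. Save it as stars.py and run it with pgzrun stars.py.

import math
import random

WIDTH = 800
HEIGHT = 600

class Star:
    def __init__(self):
        self.reset()

    def reset(self):
        # Each star heads outwards from the centre in a direction chosen at random.
        self.angle = random.uniform(0, 2 * math.pi)
        self.distance = random.uniform(1, 20)
        self.speed = random.uniform(40, 120)   # pixels per second (made-up value)

    def pos(self):
        cx, cy = WIDTH / 2, HEIGHT / 2
        return (cx + math.cos(self.angle) * self.distance,
                cy + math.sin(self.angle) * self.distance)

stars = [Star() for _ in range(200)]

def update(dt):
    # Holding Space acts as the "warp factor" mentioned above.
    warp = 4 if keyboard.space else 1
    for star in stars:
        star.distance += star.speed * warp * dt
        x, y = star.pos()
        if not (0 <= x < WIDTH and 0 <= y < HEIGHT):
            star.reset()           # recycle stars that drift off the screen

def draw():
    screen.clear()
    for star in stars:
        # Stars brighten as they travel, as if they're getting closer.
        brightness = min(255, 40 + int(star.distance))
        x, y = star.pos()
        screen.draw.filled_circle((int(x), int(y)), 1, (brightness,) * 3)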

Get your copy of Wireframe issue 13

You can read the rest of the feature in Wireframe issue 13, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press — delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 13 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Create an arcade-style zooming starfield effect | Wireframe issue 13 appeared first on Raspberry Pi.

Recreate iconic 1980s game explosions | Wireframe issue 12

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/recreate-bombermans-iconic-explosions-wireframe-issue-12/

Rik Cross, Senior Learning Manager here at the Raspberry Pi Foundation, shows you how to recreate the deadly explosions in the classic game, Bomberman.

An early incarnation of Bomberman on the NES; the series is still going strong today under Konami’s wing.

Creating Bomberman

Bomberman was first released in the early 1980s as a tech demo for a BASIC compiler, but soon became a popular series that’s still going today. Bomberman sees players use bombs to destroy enemies and uncover doors behind destructible tiles. In this article, I’ll show you how to recreate the bombs that explode in four directions, destroying parts of the level as well as any players in their path!

The game level is a tilemap stored as a two-dimensional array. Each tile in the map is a Tile object, which contains the tile type and its corresponding image. For simplicity, a tile can be set to one of five types: GROUND, WALL, BRICK, BOMB, or EXPLOSION. In this example code, BRICK and GROUND can be exploded with bombs, but WALL cannot (though of course, this behaviour can be changed).

Each Tile object also has a timer, which is decremented each frame of the game. When a tile’s timer reaches 0, an action is carried out, which is dependent on the tile type. BOMB tiles (and surrounding tiles) turn into EXPLOSION tiles after a short delay, and EXPLOSION tiles eventually turn back into GROUND.

At the start of the game, the tilemap for the level is generated, in this case consisting mostly of GROUND, with some WALL and a couple of BRICK tiles. The player starts off in the top-left tile, and moves by using the arrow keys. Pressing the SPACE key places a bomb in the player’s current tile, which is achieved by setting the Tile at the player’s position to BOMB. The tile’s timer is also set to a small number, and once this timer is decremented to 0, the bomb tile and the tiles around it are set to EXPLOSION.
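To make the tile-and-timer idea concrete, here’s a stripped-down sketch. It isn’t Rik’s published code: the delay values and function names are illustrative, but the countdown logic matches the description above. Exploding the surrounding tiles is handled by the explode() sketch a little further down.

GROUND, WALL, BRICK, BOMB, EXPLOSION = range(5)

BOMB_DELAY = 60       # hypothetical frame counts, not the magazine's values
EXPLOSION_DELAY = 30

class Tile:
    def __init__(self, type, timer=-1):
        self.type = type
        self.timer = timer   # -1 means "no pending action"

def place_bomb(tilemap, x, y):
    tilemap[y][x] = Tile(BOMB, BOMB_DELAY)

def update_tile(tilemap, x, y):
    # Called once per frame for each tile in the map.
    tile = tilemap[y][x]
    if tile.timer < 0:
        return
    tile.timer -= 1
    if tile.timer == 0:
        if tile.type == BOMB:
            # The bomb goes off: this tile becomes an explosion, and the
            # explode() sketch below handles the surrounding tiles.
            tile.type = EXPLOSION
            tile.timer = EXPLOSION_DELAY
        elif tile.type == EXPLOSION:
            tile.type = GROUND
            tile.timer = -1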

Here’s Rik’s example code, which recreates Bomberman’s explosions in Python. To get it running on your system, you’ll first need to install Pygame Zero — you can find full instructions here. And you can download the code here.

The bomb explodes outwards in four directions, with a range determined by the RANGE constant, which in our code is 3. As the bomb explodes out to the right, for example, the tile to the right of the bomb is checked. If such a tile exists (i.e. the position isn’t out of the level bounds) and can be exploded, then the tile’s type is set to EXPLOSION and the next tile to the right is checked. If the explosion moves out of the level bounds, or hits a WALL tile, then it stops radiating in that direction. This process is then repeated for the other three directions.

There’s a nice trick for exploding the bomb without repeating the code four times, and it relies on the sine and cosine values for the four direction angles. The angles are 0° (up), 90° (right), 180° (down) and 270° (left). When exploding to the right (at an angle of 90°), sin(90°) is 1 and cos(90°) is 0, which correspond to the offsets along the x- and y-axes respectively. These values can be multiplied by the tile offset to explode the bomb in all four directions.
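Here’s a rough sketch of that trick, building on the Tile sketch above; RANGE comes from the article, while everything else is illustrative. Rounding the sine and cosine results turns the tiny floating-point values at the right angles into clean -1, 0 and 1 offsets.

import math

RANGE = 3   # how far the explosion reaches, as in the article

def explode(tilemap, bx, by):
    # Turn the bomb tile and the tiles in all four directions into explosions.
    tilemap[by][bx].type = EXPLOSION
    tilemap[by][bx].timer = EXPLOSION_DELAY
    for angle in (0, 90, 180, 270):
        # sin/cos of the four right angles give the unit offsets
        # (0, 1), (1, 0), (0, -1) and (-1, 0), one per direction.
        dx = round(math.sin(math.radians(angle)))
        dy = round(math.cos(math.radians(angle)))
        for step in range(1, RANGE + 1):
            x, y = bx + dx * step, by + dy * step
            # Stop at the level edge or at an indestructible WALL tile.
            if not (0 <= y < len(tilemap) and 0 <= x < len(tilemap[0])):
                break
            if tilemap[y][x].type == WALL:
                break
            tilemap[y][x].type = EXPLOSION
            tilemap[y][x].timer = EXPLOSION_DELAY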

Get your copy of Wireframe issue 12

You can read the rest of the feature in Wireframe issue 12, available now at Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from Raspberry Pi Press – delivery is available worldwide. And if you’d like a handy digital version of the magazine, you can also download issue 12 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusives. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate iconic 1980s game explosions | Wireframe issue 12 appeared first on Raspberry Pi.

Coding Breakout’s brick-breaking action | Wireframe #11

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/coding-breakouts-brick-breaking-action-wireframe-11/

Atari’s Breakout was one of the earliest video game blockbusters. Here’s how to recreate it in Python.

The original Breakout, designed by Nolan Bushnell and Steve Bristow, and famously built by a young Steve Wozniak.

Atari Breakout

The games industry owes a lot to the humble bat and ball. Designed by Allan Alcorn in 1972, Pong was a simplified version of table tennis, where the player moved a bat and scored points by ricocheting a ball past their opponent. About four years later, Atari’s Nolan Bushnell and Steve Bristow figured out a way of making Pong into a single-player game. The result was 1976’s Breakout, which rotated Pong’s action 90 degrees and replaced the second player with a wall of bricks.

Points were scored by deflecting the ball off the bat and destroying the bricks; as in Pong, the player would lose the game if the ball left the play area. Breakout was a hit for Atari, and remains one of those game ideas that has never quite faded from view; in the 1980s, Taito’s Arkanoid updated the action with collectible power-ups, multiple stages with different layouts of bricks, and enemies that disrupted the trajectory of the player’s ball.

Breakout had an impact on other genres too: game designer Tomohiro Nishikado came up with the idea for Space Invaders by switching Breakout’s bat with a base that shot bullets, while Breakout’s bricks became aliens that moved and fired back at the player.

Courtesy of Daniel Pope, here’s a simple Breakout game written in Python. To get it running on your system, you’ll first need to install Pygame Zero. And download the code for Breakout here.

Bricks and balls in Python

The code above, written by Daniel Pope, shows you just how easy it is to get a basic version of Breakout up and running in Python, using the Pygame Zero library. Like Atari’s original, this version draws a wall of blocks on the screen, sets a ball bouncing around, and gives the player a paddle, which can be controlled by moving the mouse left and right. The ball physics are simple to grasp too. The ball has a velocity, vel – which is a vector, or a pair of numbers: vx for the x direction and vy for the y direction.

The program loop checks the position of the ball and whether it’s collided with a brick or the edge of the play area. If the ball hits the left side of the play area, the ball’s x velocity vx is set to a positive number, thus sending it bouncing to the right. If the ball hits the right side, vx is set to a negative number, so the ball moves left. Likewise, when the ball hits the top or bottom of a brick, we set the sign of the y velocity vy, and so on for the collisions with the bat, the top of the play area, and the sides of bricks. Collisions set the sign of vx and vy but never change the magnitude, which is what makes each bounce a perfectly elastic collision.
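As an illustration, here’s a hedged sketch of that bounce logic (not the code from the magazine listing); the sizes, speed and brick handling are simplified, and the bricks are assumed to be pygame Rect objects. Forcing the sign of the velocity, rather than simply negating it, means a ball that has drifted slightly past a wall can’t get stuck flipping direction every frame.

from pygame import Rect

WIDTH, HEIGHT = 800, 600
BALL_RADIUS = 8

class Ball:
    def __init__(self):
        self.x, self.y = WIDTH / 2, HEIGHT / 2
        self.vx, self.vy = 3, -3    # hypothetical starting velocity

def update_ball(ball, bricks):
    ball.x += ball.vx
    ball.y += ball.vy

    # Side walls: force the sign of vx. The magnitude never changes,
    # so every bounce is a perfectly elastic collision.
    if ball.x < BALL_RADIUS:
        ball.vx = abs(ball.vx)        # bounce to the right
    elif ball.x > WIDTH - BALL_RADIUS:
        ball.vx = -abs(ball.vx)       # bounce to the left

    # Top of the play area: force vy downwards.
    if ball.y < BALL_RADIUS:
        ball.vy = abs(ball.vy)

    # Bricks: a hit on the top or bottom face flips vy; side hits
    # would flip vx in the same way.
    for brick in list(bricks):
        if brick.collidepoint(ball.x, ball.y):
            bricks.remove(brick)
            ball.vy = -ball.vy
            break

In use, the wall might be built as something like bricks = [Rect(20 + x * 76, 40 + y * 25, 70, 20) for x in range(10) for y in range(4)], with update_ball(ball, bricks) called once per frame from Pygame Zero’s update() function.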

To this basic framework, you could add all kinds of additional features: a 2012 talk by developers Martin Jonasson and Petri Purho, which you can watch on YouTube here, shows how the Breakout concept can be given new life with the addition of a few modern design ideas.

You can read this feature and more besides in Wireframe issue 11, available now in Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from us – worldwide delivery is available. And if you’d like to own a handy digital version of the magazine, you can also download a free PDF.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusives, and for subscriptions, visit the Wireframe website to save 49% compared to newsstand pricing!

The post Coding Breakout’s brick-breaking action | Wireframe #11 appeared first on Raspberry Pi.

Coding Pang’s sprite spawning mechanic | Wireframe #10

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/pang-sprite-spawning-wireframe-10/

Rik Cross, Senior Learning Manager here at Raspberry Pi, shows you how to recreate the spawning of objects found in the balloon-bursting arcade gem Pang.

Pang: bringing balloon-hating to the masses since 1989.

Capcom’s Pang

Programmed by Mitchell and distributed by Capcom, Pang was first released as an arcade game in 1989, but was later ported to a whole host of home computers, including the ZX Spectrum, Amiga, and Commodore 64. The aim in Pang is to destroy balloons as they bounce around the screen, either alone or working together with another player, in increasingly elaborate levels. Destroying a balloon can sometimes also spawn a power-up, freezing all balloons for a short time or giving the player a better weapon with which to destroy balloons.

Initially, the player is faced with the task of destroying a small number of large balloons. However, destroying a large balloon spawns two smaller balloons, each of which in turn spawns two smaller balloons when destroyed, and so on. Each level is only complete once all balloons have been broken up and completely destroyed. To add challenge to the game, different-sized balloons have different attributes – smaller balloons move faster and don’t bounce as high, making them more difficult to destroy.

Rik’s spawning balloons, up and running in Pygame Zero. Hit space to divide them into smaller balloons.

Spawning balloons

There are a few different ways to achieve this game mechanic, but the approach I’ll take in my example is to use various features of object orientation (as usual, my example code has been written in Python, using the Pygame Zero library). It’s also worth mentioning that for brevity, the example code only deals with simple spawning and destroying of objects, and doesn’t handle balloon movement or collision detection.

The base Enemy class is simply a subclass of Pygame Zero’s Actor class, including a static enemies list to keep track of all enemies that exist within a level. The Enemy subclass also includes a destroy() method, which removes an enemy from the enemies list and deletes the object.

There are then three further subclasses of the Enemy class, called LargeEnemy, MediumEnemy, and SmallEnemy. Each of these subclasses is instantiated with a specific image, and also includes a destroy() method. This method simply calls the destroy() method of the parent Enemy class, but additionally creates two more objects nearby — with large enemies spawning two medium enemies, and medium enemies spawning two small enemies.
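A hedged sketch of that class structure is below. Rik’s code subclasses Pygame Zero’s Actor and gives each size its own image; to keep this runnable without image files, the sketch uses a plain base class and made-up positions, but the destroy-and-spawn chain has the same shape.

import random

class Enemy:
    enemies = []    # class-level list of every enemy alive in the level

    def __init__(self, pos):
        self.pos = pos
        Enemy.enemies.append(self)

    def destroy(self):
        Enemy.enemies.remove(self)

class SmallEnemy(Enemy):
    pass    # small enemies spawn nothing when destroyed

class MediumEnemy(Enemy):
    def destroy(self):
        super().destroy()
        spawn_pair(SmallEnemy, self.pos)

class LargeEnemy(Enemy):
    def destroy(self):
        super().destroy()
        spawn_pair(MediumEnemy, self.pos)

def spawn_pair(cls, pos):
    # Create two new enemies just to either side of the destroyed one.
    x, y = pos
    offset = random.randint(10, 30)
    cls((x - offset, y))
    cls((x + offset, y))

# Demo: destroy the first enemy each time, as the article's Space-key handler does.
LargeEnemy((200, 100))
LargeEnemy((400, 100))
while Enemy.enemies:
    Enemy.enemies[0].destroy()
print("all enemies destroyed")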


Here’s Rik’s example code, which recreates Pang’s spawning balloons in Python. To get it running on your system, you’ll first need to install Pygame Zero – you can find full instructions here. And you can download the code here.

In the example code, initially two LargeEnemy objects are created, with the first object in the enemies list having its destroy() method called each time the Space key is pressed. If you run this code, you’ll see that the first large enemy is destroyed and two medium-sized enemies are created. This chain reaction of destroying and creating enemies continues until all SmallEnemy objects are destroyed (small enemies don’t create any other enemies when destroyed).

As I mentioned earlier, this isn’t the only way of achieving this behaviour, and there are advantages and disadvantages to this approach. Using subclasses for each size of enemy allows for a lot of customisation, but could get unwieldy if much more than three enemy sizes are required. One alternative is to simply have a single Enemy class, with a size attribute. The enemy’s image, the entities it creates when destroyed, and even the movement speed and bounce height could all depend on the value of the enemy size.
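For comparison, here’s a rough sketch of that single-class alternative; the attribute values in the dictionary are invented, but they show how the image, speed, and spawning behaviour can all hang off one size field.

SIZE_ATTRIBUTES = {
    'large':  {'image': 'balloon_large',  'speed': 1, 'spawns': 'medium'},
    'medium': {'image': 'balloon_medium', 'speed': 2, 'spawns': 'small'},
    'small':  {'image': 'balloon_small',  'speed': 3, 'spawns': None},
}

class Enemy:
    enemies = []

    def __init__(self, size, pos):
        self.size = size
        self.pos = pos
        self.image = SIZE_ATTRIBUTES[size]['image']
        self.speed = SIZE_ATTRIBUTES[size]['speed']
        Enemy.enemies.append(self)

    def destroy(self):
        Enemy.enemies.remove(self)
        spawns = SIZE_ATTRIBUTES[self.size]['spawns']
        if spawns:
            # Replace this enemy with two of the next size down.
            x, y = self.pos
            Enemy(spawns, (x - 20, y))
            Enemy(spawns, (x + 20, y))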

You can read the rest of the feature in Wireframe issue 10, available now in Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from us – worldwide delivery is available. And if you’d like to own a handy digital version of the magazine, you can also download a free PDF.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusives, and for subscriptions, visit the Wireframe website to save 49% compared to newsstand pricing!

The post Coding Pang’s sprite spawning mechanic | Wireframe #10 appeared first on Raspberry Pi.

Laser-engraved Raspberry Pi hologram

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/laser-engraved-raspberry-pi-hologram/

Inspired by an old episode of Pimoroni’s Bilge Tank, and with easy access to the laser cutter at the Raspberry Pi Foundation office, I thought it would be fun to create a light-up multi-layered hologram using a Raspberry Pi and the Pimoroni Unicorn pHAT.

Raspberry Pi layered light


Break it to make it

First, I broke down the Raspberry Pi logo into three separate images — the black outline, the green leaves, and the red berry.


Fun fact: did you know that Pimoroni’s Paul Beech designed this logo as part of the ‘design us a logo’ contest we ran all the way back in August 2011?

Once I had the three separate files, I laser-engraved them onto 4cm-wide pieces of 3mm-thick clear acrylic. As there are four lines of LEDs on the Unicorn pHAT, I cut the fourth piece to illuminate the background.


To keep the engraved acrylic pieces together, I cut out a pair of acrylic brackets (see above) with four 3mm indentations. Then, after a bit of fiddling with the Unicorn pHAT library, I was able to light the pHAT’s rows of LEDs in white, red, green, and white.
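For anyone wanting to try the lighting step, here’s a minimal sketch using Pimoroni’s unicornhat Python library. The brightness level and row order are assumptions rather than values from my script, but it shows one row of LEDs per engraved layer in white, red, green, and white.

import unicornhat as unicorn

unicorn.set_layout(unicorn.PHAT)   # the Unicorn pHAT is 4 rows of 8 LEDs
unicorn.brightness(0.5)

ROW_COLOURS = [
    (255, 255, 255),   # background layer: white
    (255, 0, 0),       # berry layer: red
    (0, 255, 0),       # leaves layer: green
    (255, 255, 255),   # outline layer: white
]

for y, (r, g, b) in enumerate(ROW_COLOURS):
    for x in range(8):
        unicorn.set_pixel(x, y, r, g, b)

unicorn.show()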


The final result looks pretty spectacular, especially in the dark, and you can build on this basic idea to create fun animations — especially if you use a HAT with more rows of LEDs.

Iterations

This is just a prototype. I plan on building a sturdier frame for the pieces that securely fits a Raspberry Pi Zero W and lets users replace layers easily. As with many projects, I’m sure this will grow and grow as each interaction inspires a new add-on.

How would you build upon this basic principle?

Oh…

…we also laser-engraved this Cadbury’s Creme Egg.

The post Laser-engraved Raspberry Pi hologram appeared first on Raspberry Pi.

Coding Space Invaders’ disintegrating shields | Wireframe #9

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/coding-space-invaders-disintegrating-shields-wireframe-9/

They add strategy to a genre-defining shooter. Andrew Gillett lifts the lid on Space Invaders’ disintegrating shields.


Released in 1978, Space Invaders introduced ideas so fundamental to video games that it’s hard to imagine a time before them. And it did this using custom-made hardware which by today’s standards is unimaginably slow.

Space Invaders ran on an Intel 8080 CPU operating at 2MHz. With such meagre processing power, merely moving sprites around the screen was a struggle. In modern 2D games, at the start of each frame the entire screen is reset, then all objects are displayed.

For Space Invaders’ hardware, this process would have been too slow. Instead, each time a sprite needs to move, the game first erases the sprite from the screen, then redraws it in the new position. The game also updates only one alien per frame — which leads to the effect of the aliens moving faster when there are fewer of them. These techniques cut down the number of pixels which need to be updated each frame, from nearly 60,000 to around a hundred.


One of Space Invaders’ most notable features is its four shields. These provide shelter from enemy fire, but deteriorate after repeated hits. The player can take advantage of the shields’ destructible nature — by repeatedly firing at the same place on a shield’s underside, a narrow gap can be created which can then be used to take out enemies. (Of course, the player can also be shot through the same gap.)

The system of updating only the minimum necessary number of pixels works well as long as there’s no need for objects to overlap. In the case of the shields, though, what happens when objects do overlap is fundamental to how they work. Whenever a shot hits something, it’s replaced by an explosion sprite. A few frames later, the explosion sprite is deleted from the screen. If the explosion sprite overlapped with a shield, that part of the shield is also deleted.


Here’s a code snippet that shows Andrew’s Space Invaders-style disintegrating shields working in Python. To get it running on your system, you’ll need to install Pygame Zero — you can find full instructions here. And download the above code here.

The code above displays four shields, and then bombards them with a series of shots which explode on impact. I’m using sprites which have been scaled up by ten, to make it easier to see what’s going on.

We first create two empty lists. The first, shots, holds details of any shots on screen, as well as explosions; these will be displayed on the screen every frame. Each entry in the shots list will be a dictionary data structure containing three values: a position, the sprite to be displayed, and whether the shot is in ‘exploding’ mode — in which case it’s displayed in the same position for a few frames before being deleted.

The second list, to_delete, is for sprites which need to be deleted from the screen. For simplicity, I’m using separate copies of the shot and explosion sprites where the white pixels have been changed to black (the other pixels in these sprites are set as transparent).

The function create_random_shot is called every half-second. The combination of dividing the maximum X value by ten, choosing a random whole number between zero and that result, and then multiplying the random number by ten ensures that the chosen X coordinate is a multiple of ten.
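Here’s a hedged reconstruction of that idea rather than Andrew’s exact listing; the width and dictionary keys are assumptions, but the divide-then-multiply step is the part that snaps each shot to the 10-pixel grid.

import random

WIDTH = 400          # assumed play-area width

shots = []

def create_random_shot():
    max_x = WIDTH // 10                        # divide the maximum value by ten...
    x = random.randint(0, max_x - 1) * 10      # ...pick a whole number, multiply back up
    shots.append({'pos': [x, 0], 'sprite': 'shot', 'exploding': False})

# In Pygame Zero, the half-second schedule would be set up with:
# clock.schedule_interval(create_random_shot, 0.5)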



Andrew’s Space Invaders shields up and running in Pygame Zero.

In the draw function, we first check to see if it’s the first frame, as we only want to display the shields on that frame. The screen.blit method is used to display sprites, and Pygame Zero’s images object is used to specify which sprite should be displayed. We then display all sprites in the to_delete list, after which we reset it to being an empty list. Finally we display all sprites in the shots list.


In the update function, we go through all sprites in the shots list, in reverse order. Going through the list backwards avoids problems that can occur when deleting items from a list inside a for loop. For each shot, we first check to see if it’s in ‘exploding’ mode. If so, its timer is reduced each frame — when it hits zero we add the shot to the to_delete list, then delete it from shots.

If the item is a normal shot rather than an explosion, we add its current position to to_delete, then update the shot’s position to move the sprite down the screen. We next check to see if the sprite has either gone off the bottom of the screen or collided with something. Pygame’s get_at method gives us the colour of a pixel at a given position. If a collision occurs, we switch the shot into ‘exploding’ mode — the explosion sprite will be displayed for five frames.
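Below is a condensed sketch of that update logic, again not the exact magazine listing; it builds on the shots list from the earlier sketch, the speed and play-area height are assumptions, and screen.surface.get_at() stands in for the Pygame get_at call mentioned above.

BLACK = (0, 0, 0, 255)
EXPLODE_FRAMES = 5
SHOT_SPEED = 10
HEIGHT = 400            # assumed play-area height

to_delete = []          # blanking list, emptied by draw() as described above

def update():
    for i in range(len(shots) - 1, -1, -1):       # reverse order: safe to delete as we go
        shot = shots[i]
        if shot['exploding']:
            shot['timer'] -= 1
            if shot['timer'] == 0:
                to_delete.append(dict(shot))      # blank the explosion on the next draw
                del shots[i]
        else:
            to_delete.append(dict(shot))          # erase the shot's old position
            shot['pos'][1] += SHOT_SPEED          # move the sprite down the screen
            x, y = shot['pos']
            if y >= HEIGHT:
                del shots[i]                      # fell off the bottom of the screen
            elif tuple(screen.surface.get_at((x, y))) != BLACK:
                # Hit something: switch to 'exploding' mode for a few frames.
                shot['exploding'] = True
                shot['sprite'] = 'explosion'
                shot['timer'] = EXPLODE_FRAMES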

You can read the rest of the feature in Wireframe issue 9, available now in Tesco, WHSmith, and all good independent UK newsagents.

Or you can buy Wireframe directly from us – worldwide delivery is available. And if you’d like to own a handy digital version of the magazine, you can also download a free PDF.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusives, and for subscriptions, visit the Wireframe website to save 49% compared to newsstand pricing!

The post Coding Space Invaders’ disintegrating shields | Wireframe #9 appeared first on Raspberry Pi.

Improve Build Performance and Save Time Using Local Caching in AWS CodeBuild

Post Syndicated from Kausalya Rani Krishna Samy original https://aws.amazon.com/blogs/devops/improve-build-performance-and-save-time-using-local-caching-in-aws-codebuild/

AWS CodeBuild now supports local caching, which makes it possible for you to persist intermediate build artifacts locally on the build host so that they are available for reuse in subsequent build runs.

Your build project can use one of two types of caching: Amazon S3 or local. In this blog post, we will discuss how to use the local caching feature.

Local caching stores a cache on a build host. The cache is available to that build host only for a limited time and until another build is complete. For example, when you are dealing with large Java projects, compilation might take a long time. You can speed up subsequent builds by using local caching. This is a good option for large intermediate build artifacts because the cache is immediately available on the build host.

Local caching increases build performance for:

  • Projects with a large, monolithic source code repository.
  • Projects that generate and reuse many intermediate build artifacts.
  • Projects that build large Docker images.
  • Projects with many source dependencies.

To use local caching

1. Open the AWS CodeBuild console at https://console.aws.amazon.com/codesuite/codebuild/home.

2. Choose Create project.

3. In Project configuration, enter a name and description for the build project.

4. In Source, for Source provider, choose the source code provider type. In this example, we use an AWS CodeCommit repository.

5. For Environment image, choose Managed image or Custom image, as appropriate. For environment type, choose Linux or Windows Server. Specify a runtime, runtime version, and service role for your project.

6. Configure the buildspec file for your project.

7. In Artifacts, expand Additional Configuration. For Cache type, choose Local, as shown here.

Local caching supports the following caching modes:

Source cache mode caches Git metadata for primary and secondary sources. After the cache is created, subsequent builds pull only the change between commits. This mode is a good choice for projects with a clean working directory and a source that is a large Git repository. If you choose this option and your project does not use a Git repository (GitHub, GitHub Enterprise, or Bitbucket), the option is ignored. No changes are required in the buildspec file.

Docker layer cache mode caches existing Docker layers. This mode is a good choice for projects that build or pull large Docker images. It can prevent the performance issues caused by pulling large Docker images down from the network.

Note

  • You can use a Docker layer cache in the Linux environment only.
  • The privileged flag must be set so that your project has the required Docker permissions.
  • You should consider the security implications before you use a Docker layer cache.

Custom cache mode caches directories you specify in the buildspec file. This mode is a good choice if your build scenario is not suited to one of the other two local cache modes. If you use a custom cache:

  • Only directories can be specified for caching. You cannot specify individual files.
  • Symlinks are used to reference cached directories.
  • Cached directories are linked to your build before it downloads its project sources. Cached items are overridden if a source item has the same name. Directories are specified using cache paths in the buildspec file.

To use source cache mode

In the build project configuration, under Artifacts, expand Additional Configuration. For Cache type, choose Local. Select Source cache, as shown here.

To use Docker layer cache mode

In the build project configuration, under Artifacts, expand Additional Configuration. For Cache type, choose Local. Select Docker layer cache, as shown here.

Under Privileged, select Enable this flag if you want to build Docker images or want your builds to get elevated privileges. This grants elevated privileges to the Docker process running on the build host.

To use custom cache mode

In your buildspec file, specify the cache path, as shown here.

In the build project configuration, under Artifacts, expand Additional Configuration. For Cache type, choose Local. Select Custom cache, as shown here.


version: 0.2
phases:
  pre_build:
    commands:
      - echo "Enter pre_build commands"
  build:
    commands:
      - echo "Enter build commands"
      
cache:
  paths:
    - '/root/.m2/**/*'
    - '/root/.npm/**/*'
    - 'build/**/*'
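If you prefer to configure caching outside the console, the same settings can be applied programmatically. Here’s a hedged boto3 sketch; the project name is a placeholder, and only the cache argument is shown (an equivalent create-project call would also need source, artifacts, environment, and service role settings).

import boto3

codebuild = boto3.client("codebuild")

codebuild.update_project(
    name="my-build-project",   # placeholder project name
    cache={
        "type": "LOCAL",
        "modes": [
            "LOCAL_SOURCE_CACHE",        # Git metadata cache
            "LOCAL_DOCKER_LAYER_CACHE",  # Docker layer cache (Linux only, needs privileged mode)
            "LOCAL_CUSTOM_CACHE",        # directories listed under cache.paths in the buildspec
        ],
    },
)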

Conclusion

We hope you find the information in this post helpful. If you have feedback, please leave it in the Comments section below. If you have questions, start a new thread on the AWS CodeBuild forum or contact AWS Support.


Using Git with AWS CodeCommit Across Multiple AWS Accounts

Post Syndicated from Steve Engledow original https://aws.amazon.com/blogs/devops/using-git-with-aws-codecommit-across-multiple-aws-accounts/

I use AWS CodeCommit to host all of my private Git repositories. My repositories are split across several AWS accounts for different purposes: personal projects, internal projects at work, and customer projects.

The CodeCommit documentation shows you how to configure and clone a repository from one place, but in this blog post I want to share how I manage my Git configuration across multiple AWS accounts.

Background

First, I have profiles configured for each of my AWS environments. I connect to some of them using IAM user credentials and others by using cross-account roles.

I intentionally do not have any credentials associated with the default profile. That way I must always be sure I have selected a profile before I run any AWS CLI commands.

Here’s an anonymized copy of my ~/.aws/config file:

[profile personal]
region = eu-west-1
aws_access_key_id = ABCDEFGHIJKLMNOPQRST
aws_secret_access_key = uvwxyz0123456789abcdefghijklmnopqrstuvwx

[profile work]
region = us-east-1
aws_access_key_id = ABCDEFGHIJKLMNOPQRST
aws_secret_access_key = uvwxyz0123456789abcdefghijklmnopqrstuvwx

[profile customer]
region = eu-west-2
source_profile = work
role_arn = arn:aws:iam::123456789012:role/CrossAccountPowerUser

If I am doing some work in one of those accounts, I run export AWS_PROFILE=work and use the AWS CLI as normal.

The problem

I use the Git credential helper so that the Git client works seamlessly with CodeCommit. However, because I use different profiles for different repositories, my use case is a little more complex than the average.

In general, to use the credential helper, all you need to do is place the following options into your ~/.gitconfig file, like this:

[credential]
    helper = !aws codecommit credential-helper $@
    UseHttpPath = true

I could make this work across accounts by setting the appropriate value for AWS_PROFILE before I use Git in a repository, but there is a much neater way to deal with this situation using a feature released in Git version 2.13, conditional includes.

A solution

First, I separate my work into different folders. My ~/code/ directory looks like this:

code
    personal
        repo1
        repo2
    work
        repo3
        repo4
    customer
        repo5
        repo6

Using this layout, each folder that is directly underneath the code folder has different requirements in terms of configuration for use with CodeCommit.

Solving this has two parts; first, I create a .gitconfig file in each of the three folder locations. The .gitconfig files contain any customization (specifically, configuration for the credential helper) that I want in place while I work on projects in those folders.

For example:

[user]
    # Use a custom email address
    email = [email protected]

[credential]
    # Note the use of the --profile switch
    helper = !aws --profile work codecommit credential-helper $@
    UseHttpPath = true

I also make sure to specify the AWS CLI profile to use in the .gitconfig file which means that, when I am working in the folder, I don’t need to set AWS_PROFILE before I run git push, etc.

Secondly, to make use of these folder-level .gitconfig files, I need to reference them in my global Git configuration at ~/.gitconfig.

This is done through the includeIf section. For example:

[includeIf "gitdir:~/code/personal/"]
    path = ~/code/personal/.gitconfig

This example specifies that if I am working with a Git repository that is located anywhere under ~/code/personal/, Git should load additional configuration from ~/code/personal/.gitconfig. That additional file specifies the appropriate credential helper invocation with the corresponding AWS CLI profile selected as detailed earlier.

The contents of the new file are treated as if they are inserted into the main .gitconfig file at the location of the includeIf section. This means that the included configuration can only override configuration specified earlier in the file.
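Putting the pieces together, the global ~/.gitconfig ends up with one includeIf section per folder, something like this:

[includeIf "gitdir:~/code/personal/"]
    path = ~/code/personal/.gitconfig

[includeIf "gitdir:~/code/work/"]
    path = ~/code/work/.gitconfig

[includeIf "gitdir:~/code/customer/"]
    path = ~/code/customer/.gitconfig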

I hope you find this approach useful. If you have any questions or feedback, please feel free to leave them in the comments.

Learn about hourly-replication in Server Migration Service and the ability to migrate large data volumes

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/learn-about-hourly-replication-in-server-migration-service-and-the-ability-to-migrate-large-data-volumes/

This post courtesy of Shane Baldacchino, AWS Solutions Architect

AWS Server Migration Service (AWS SMS) is an agentless service that makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations.

In my previous blog posts, we introduced how you can use AWS Server Migration Service (AWS SMS) to migrate a popular piece of off-the-shelf software, WordPress, into AWS.

For details and a walkthrough on how to set up the AWS Server Migration Service, please see the following blog posts for Hyper-V and VMware hypervisors, which will guide you through the high-level process.

In this article, we are going to step it up a few notches and look past the common migration of off-the-shelf software, providing a pattern for using AWS SMS and some of its recently launched features, especially compression and resiliency for replication jobs and support for data volumes greater than 4 TB, to migrate a more complicated environment.

This post covers a migration of a complex, internally developed eCommerce system built on a polyglot architecture. It is made up of a Microsoft IIS presentation tier on Windows, a Tomcat application tier, and a Microsoft SQL Server database tier. All workloads run on-premises as virtual machines in a VMware vCenter 5.5 and ESX 5.5 environment.

This theoretical customer environment has various business and infrastructure requirements.

• Application downtime: During any migration activities, the application cannot be offline for more than 2 hours.
• Licensing: The customer has renewed their Microsoft SQL Server license for an additional 3 years and holds the License Mobility with Software Assurance option for Microsoft SQL Server, and therefore wants to take advantage of AWS BYOL licensing for Microsoft SQL Server and Microsoft Windows Server.
• Large data volumes: The Microsoft SQL Server database engine (.mdf, .ldf and .ndf files) consumes 11 TB of storage.

Walkthrough

Key elements of this migration process are identical to the process outlined in my previous blog posts for Hyper-V and VMware hypervisors, but at a high level you will need to:

• Establish your AWS environment.
• Download the SMS Connector from the AWS Management Console.
• Configure AWS SMS and Hypervisor permissions.
• Install and configure the SMS Connector appliance.
• Import your virtual machine inventory and create replication jobs.
• Launch your Amazon EC2 instances and associated NACLs, security groups, and Elastic Load Balancers.
• Change your DNS records to resolve the custom application to an AWS Elastic Load Balancer.

Before you start, ensure that your source systems’ OS and vCenter version are supported by AWS. For more information, see the Server Migration Service FAQ.

Planning the Migration

Once you have downloaded and configured the AWS SMS Connector with your given hypervisor, you can get started creating replication jobs.

The artifacts derived from our replication jobs with AWS SMS will be AMIs (Amazon Machine Images), so we do not need to replicate each server individually. Because this is a three-tier architecture with commonality between servers (multiple application and web servers performing the same function), we can leverage a common AMI for each tier and create just three replication jobs:

1. Microsoft SQL Server – Database Tier
2. Ubuntu Server – Application Tier
3. IIS Web server – Webserver Tier

Performing the Replication

After validating that the SMS Connector is in a “HEALTHY” state, import your server catalog from your Hypervisor to AWS SMS. This process can take up to a minute.

Select the three servers (Microsoft SQL Server, Ubuntu Server, IIS Web server) to migrate and choose Create replication job. AWS SMS now supports creating replication jobs with frequencies as short as 1 hour, so to ensure our business RTO (Recovery Time Objective) of 2 hours is met, we will create our replication jobs with a frequency of 1 hour. This will minimize the risk of any delta updates not completing during the cutover window.

Given the business’s existing licensing investment in Microsoft SQL Server, they will leverage the BYOL (Bring Your Own License) offering when creating the Microsoft SQL Server replication job.

The AWS SMS console guides you through the process. The time that the initial replication task takes to complete is dependent on available bandwidth and the size of your virtual machines.

After the initial seed replication, network bandwidth requirement is minimized as AWS SMS replicates only incremental changes occurring on the VM.

The progress updates from AWS SMS are automatically sent to AWS Migration Hub so that you can track tasks in progress.

AWS Migration Hub provides a single location to track the progress of application migrations across multiple AWS and partner solutions. In this post, we are using AWS SMS as a mechanism to migrate the virtual machines (VMs) and track them via AWS Migration Hub.

Migration Hub and AWS SMS are both free. You pay only for the cost of the individual migration tools that you use, and any resources being consumed on AWS.

The dashboard reflects any status changes that occur in the linked services. You can see from the following image that two servers are complete whilst another is in progress.

Using Migration Hub, you can view the migration progress of all applications. This allows you to quickly get progress updates across all of your migrations, easily identify and troubleshoot any issues, and reduce the overall time and effort spent on your migration projects.


Testing Your Replicated Instances

Thirty hours after creating the replication jobs, notification was received via AWS SNS (Simple Notification Service) that all three replication jobs had completed. During the 30-hour replication window, the customer’s ISP experienced downtime and sporadic flapping of the link, but this was negated by the network auto-recovery feature of SMS. It recovered and resumed replication without any intervention.

With the replication tasks complete, the artifact created by AWS SMS is a custom AMI that you can use to deploy an EC2 instance. Follow the usual process to launch your EC2 instance, noting that you may need to replace any host-based firewalls with security groups and NACLs, and any hardware-based load balancers with Elastic Load Balancing, to achieve fault tolerance, scalability, performance and security.

As this environment is a three-tier architecture with commonality between tiers (application and presentation tiers), we can create an ASG (Auto Scaling Group) during the EC2 launch process to ensure that deployed capacity matches user demand. The ASG will be based on the custom AMIs generated by the replication jobs.

When you create an EC2 instance, ensure that you pick the most suitable EC2 instance type and size to match your performance and cost requirements.

While your new EC2 instances are a replica of your on-premises VM, you should always validate that applications are functioning. How you do this differs on an application-by-application basis. You can use a combination of approaches, such as editing a local host file and testing your application, SSH, RDP and Telnet.

For our Windows presentation and database tiers, I can RDP into my systems and validate that IIS 8.0 and other services are functioning correctly.

For our Ubuntu Application tier, we can SSH in to perform validation.

Post validation of each individual server we can now continue to test the application end to end. This is because our systems have been instantiated inside a VPC with no route back to our on-premises environment which allows us to test functionality without the risk of communication back to our production application.

After validation of systems, it is time to cut over; plan your runbook accordingly to ensure you either eliminate or minimize application disruption.

Cutting Over

As the replication window specified in the AWS SMS replication jobs was 1 hour, hourly AMIs were created that provided delta updates since the initial seed replication was performed. The customer verified the stack by executing the previously created runbook using the latest AMIs, and verified the application behaved as expected.

After another round of testing, the customer decided to plan the cutover for the coming Saturday at midnight, announcing a two-hour scheduled maintenance window. During the cutover window, the customer took the application offline, shut down the Microsoft SQL Server instance, and performed an on-demand sync of all systems.

This generated a new versioned AMI that contained all on-premises data. The customer then executed the runbook using the new AMIs. For the application and presentation tiers, these AMIs were used in the ASG configuration. After application validation, Amazon Route 53 was updated to resolve the application CNAME to the Application Load Balancer CNAME used to load balance traffic to the fleet of IIS servers.

Based on the TTL (Time To Live) of your Amazon Route 53 DNS zone file, end users gradually resolve the application to AWS, in this case within 300 seconds. Once this TTL period had elapsed, the customer brought their application back online and exited their maintenance window, with time to spare.

After modifying the Amazon Route 53 Zone Apex, the physical topology now looks as follows with traffic being routed to AWS.

After validation of a successful migration the customer deleted their AWS Server Migration Service replication jobs and began planning to decommission their on-premises resources.

Summary

This is an example pattern for migrating a complex, custom polyglot environment into AWS using AWS migration services, specifically leveraging many of the new features of AWS SMS.

Many architectures can be extended to use many of the inherent benefits of AWS with little effort. For example, this article illustrated how AWS migration services can be used to migrate complex environments into AWS, and how native AWS services such as Amazon CloudWatch metrics can then drive Auto Scaling policies to ensure deployed capacity matches user demand, while technologies such as Application Load Balancers can be used to achieve fault tolerance and scalability.

Think big and get building!


Tinkernut’s Beginners’ Guide to SSH

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/tinkernuts-beginners-guide-to-ssh/

We often mention SSH (Secure Shell) when we talk about headless Raspberry Pi projects — projects that involve accessing a Pi remotely. If you’re a coding creative who doesn’t know what SSH involves, we’ve got you covered with our comprehensive online guide to using SSH with your Raspberry Pi.

SSH in terminal

You know who’s also got you covered? YouTube favourite Tinkernut, with his great beginners’ guide to SSH, what it is, why we use it, and how you can use it with your device:

Beginners Guide To SSH

Me: “I have a question about controlling another computer over the internet” You: “SSH” Me: “Don’t tell me to ‘shhh’, I’m asking you a question”. Ok, enough with the play on words. If you’ve ever wanted to securely control another computer over the internet, then you’ve probably heard of SSH.

SSHhhhhhhhhh

Between our guide and Tinkernut’s video, I don’t think I need to add anything else on the subject.

So here, have this GIF, and have yourself a lovely weekend!

The post Tinkernut’s Beginners’ Guide to SSH appeared first on Raspberry Pi.

Stream Amazon CloudWatch Logs to a Centralized Account for Audit and Analysis

Post Syndicated from David Bailey original https://aws.amazon.com/blogs/architecture/stream-amazon-cloudwatch-logs-to-a-centralized-account-for-audit-and-analysis/

A key component of enterprise multi-account environments is logging. Centralized logging provides a single point of access to all salient logs generated across accounts and regions, and is critical for auditing, security and compliance. While some customers use the built-in ability to push Amazon CloudWatch Logs directly into Amazon Elasticsearch Service for analysis, others would prefer to move all logs into a centralized Amazon Simple Storage Service (Amazon S3) bucket location for access by several custom and third-party tools. In this blog post, I will show you how to forward existing and any new CloudWatch Logs log groups created in the future to a cross-account centralized logging Amazon S3 bucket.

The streaming architecture I use in the destination logging account is a streamlined version of the architecture and AWS CloudFormation templates from the Central logging in Multi-Account Environments blog post by Mahmoud Matouk. This blog post assumes some knowledge of CloudFormation, Python3 and the boto3 AWS SDK. You will need to have or configure an AWS working account and logging account, an IAM access and secret key for those accounts, and a working environment containing Python and the boto3 SDK. (For assistance, see the Getting Started Resource Center and Start Building with SDKs and Tools.) All CloudFormation templates and Python code used in this article can be found in this GitHub Repository.

Setting Up the Solution

You need to create or use an existing S3 bucket for storing CloudFormation templates and Python code for an AWS Lambda function. This S3 bucket is referred to throughout the blog post as the <S3 infrastructure-bucket>. Ensure that the bucket does not block new bucket policies or cross-account access by checking the bucket’s Permissions tab and the Public access settings button.

You also need a bucket policy that allows each account that needs to stream logs to access it when we create the AWS Lambda function below. To do so, update your bucket policy to include each new account you create, using the <S3 infrastructure-bucket> ARN from the top of the Bucket policy editor page to modify this template:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                  "03XXXXXXXX85",
                  "29XXXXXXXX02",
                  "13XXXXXXXX96",
                  "37XXXXXXXX30",
                  "86XXXXXXXX95"
                ]
            },
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::<S3 infrastructure-bucket>",
                "arn:aws:s3:::<S3 infrastructure-bucket>/*"
            ]
        }
    ]
}

Clone a local copy of the CloudFormation templates and Python code from the GitHub repository. Compress the CentralLogging.py and lambda.py into a .zip file for the lambda function we create below and name it AddSubscriptionFilter.zip. Load these local files into the <S3 infrastructure-bucket>. I recommend using folders called /python for the .py files, /lambdas for the AddSubscriptionFilter.zip file and /cfn for the CloudFormation templates.

Multi-Account Configuration and the Central Logging Account

One form of multi-account configuration is the Landing Zone offering, which provides a core logging account for storing all logs for auditing. I use this account configuration as an example in this blog post. Initially, the Landing Zone setup creates several stack sets and resources, including roles, security groups, alarms, lambda functions, a cloud trail stream and an S3 bucket.

If you are not using a Landing Zone, create an appropriately named S3 bucket in the account you have chosen as a logging account. This S3 bucket will be referred to later as the <LoggingS3Bucket>. To mimic what the Landing Zone calls its logging bucket, you can use the format aws-landing-zone-logs-<Account Number><Region>, or simply pick an appropriate name for the centralized logging location. In a production environment, remember that it is critical to lock down the access to logging resources and the permissions allowed within the account to prevent deletion or tampering with the logs.

Figure 1 – Initial Landing Zone logging account resources

The S3 bucket – aws-landing-zone-logs-<Account Number><Region> is the most important resource created by the stack-sets for logging purposes. It contains all of the logs streamed to it from all of the accounts. Initially, the Landing Zone only sends the AWS CloudTrail and AWS Config logs to this S3 bucket.

In order to send all of the other CloudWatch Logs that are necessary for auditing, we need to add a destination and streaming mechanism to the logging account.

Logging Account Infrastructure

The additional infrastructure required in the central logging account provides a destination for the log group subscription filters and a stream for log events that are sent from all accounts and appropriate regions to load them into the <LoggingS3Bucket> repository. The selection of these particular AWS resources is important, because Kinesis Data Streams is the only resource currently supported as a destination for cross-account CloudWatch Logs subscription filters.

The centralLogging.yml CloudFormation template automates the creation of the entire required infrastructure in the core logging account. Make sure to run it in each of the regions in which you need to centralize logs. The log group subscription filter and destination regions must match in order to successfully stream the logs.

Installation Instructions:

  1. Modify the centralLogging.yml template to add your account numbers for all of the accounts you want to stream logs from into the DestinationPolicy where you see the <AccountNumberHere> placeholders. Remove any unused placeholders.
  2. In the same DestinationPolicy, modify the final arn statement, replacing <region> with the region it will be run in (e.g., us-east-1), and the <logging account number> with the account number of the logging account where this template is to be run.
  3. Log in to the core logging account and access the AWS management console using administrator credentials.
  4. Navigate to CloudFormation and click the Create Stack button.
  5. Select Specify an Amazon S3 template URL and enter the Link for the centralLogging.yml template found in the <S3 infrastructure-bucket>.
  6. Enter a stack name, such as CentralizedLogging, and the one parameter called LoggingS3Bucket. Enter in the ARN of the logging bucket: arn:aws:s3::: <LoggingS3Bucket>. This can be obtained by opening the S3 console, clicking on the bucket icon next to this bucket, and then clicking the Copy Bucket ARN button.
  7. Skip the next page, acknowledge the creation of IAM resources, and Create the stack.
  8. When the stack completes, select the stack name to go to stack details and open the Outputs. Copy the value of the DestinationArnExport, which will be needed as a parameter for the script in the next section.

Upon successful creation of this CloudFormation stack, the following new resources will be created:

  • Amazon CloudWatch Logs Destination
  • Amazon Kinesis Stream
  • Amazon Kinesis Firehose Stream
  • Two AWS Identity and Access Management (IAM) Roles
Figure 2 – New infrastructure required in the centralized logging account

Because the Landing Zone is a multi-account offering, the Log Destination is required to be the destination for all subscription filters. The key feature of the destination is its DestinationPolicy. Whenever a new account is added to the environment, its account number needs to be added to this DestinationPolicy in order for logs to be sent to it from the new account. Add the new account number in the centralLogging.yml CloudFormation template, and run an update in CloudFormation to complete the addition. A sample Destination Policy looks like this:

{
  "Version" : "2012-10-17",
  "Statement" : [
    {
      "Effect" : "Allow",
      "Principal" : {
        "AWS" : [
          "03XXXXXXXX85",
          "29XXXXXXXX02",
          "13XXXXXXXX96",
          "37XXXXXXXX30",
          "86XXXXXXXX95"
        ]
      },
      "Action" : "logs:PutSubscriptionFilter",
      "Resource" : "arn:aws:logs:<Region>:<LoggingAccountNumber>:destination:CentralLogDestination"
    }
  ]
}

The Kinesis Stream gets records from the Logs Destination and holds them for 48 hours. Kinesis Streams scale by adding shards. The CloudFormation template starts the stream with two shards. You need to monitor this as instances and applications are deployed into the accounts, because all CloudWatch log objects will flow through this stream, and it will need to be scaled up at some point. To scale, change the number of shards (ShardCount) in the Kinesis Stream resource (KinesisLoggingStream) to the required number. See the Amazon Kinesis Data Streams FAQ documentation to confirm the capacity and throughput of each shard.

Kinesis Firehose provides a simple and efficient mechanism to retrieve the records from the Kinesis Stream and load them into the <LoggingS3Bucket> repository. It uses the CloudFormation template parameter to know where to load the logs. All of the CloudWatch logs loaded by Firehose will be under the prefix /CentralizedAccountsLog. The buffering hints for Firehose suggest that the logs be loaded every 5 minutes or 50 MB. Leave the CompressionFormat UNCOMPRESSED, since the logs are already compressed.

There are two AWS Identity and Access Management (IAM) roles created for this infrastructure. The first, CWLtoKinesisRole is used by the destination to allow CloudWatch Logs from all regions to use the destination to put the log object records into the Kinesis Stream, as well as to pass the role. The second, FirehoseDeliveryRole, allows Firehose to get the log object records from the Kinesis Stream, and then to load them into S3 logging bucket.

Once you have successfully created this infrastructure, the next step is to add the subscription filters to existing log groups.

Adding Subscription Filters to Existing Log Groups

The next step in the process is to add subscription filters for the Log Destination in the core logging account to all existing log groups. Several log groups are created by the Landing Zone, or you may have created them by using various AWS services or by logging application events. For every new AWS account, you will need to run the init_account_central_logging.py Python script to add the subscription filters to all the existing log groups.

The init_account_central_logging.py script takes one parameter, which is the Log Destination ARN. Use the Destination ARN you copied from the stack details output in the previous section as the parameter to the script.

The init_account_central_logging.py script first adds this Destination ARN to the AWS Systems Manager Parameter Store so that the core logic that creates the subscription filter can use it. The script then gets a list of all existing log groups, iterates over them, deletes any existing subscription filters (because there can only be one subscription filter per log group and attempting to create another would cause an error), and then adds the new subscription filter to the centralized logging account to the Log Destination.
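For readers who want to see the moving parts, here’s a simplified boto3 sketch of those steps. It is not the repository’s code: the Parameter Store name is a placeholder, though the filter name matches the one you should see in the console after running the script.

import boto3

logs = boto3.client("logs")
ssm = boto3.client("ssm")

def init_central_logging(destination_arn):
    # Store the destination ARN so the shared logic (and the Lambda later) can find it.
    ssm.put_parameter(Name="/central-logging/destination-arn",   # placeholder name
                      Value=destination_arn, Type="String", Overwrite=True)

    # Walk every existing log group in the account.
    paginator = logs.get_paginator("describe_log_groups")
    for page in paginator.paginate():
        for group in page["logGroups"]:
            name = group["logGroupName"]
            # Only one subscription filter is allowed per log group, so remove
            # any existing filter before adding ours.
            existing = logs.describe_subscription_filters(logGroupName=name)
            for f in existing["subscriptionFilters"]:
                logs.delete_subscription_filter(logGroupName=name,
                                                filterName=f["filterName"])
            logs.put_subscription_filter(logGroupName=name,
                                         filterName="Logs (CentralLogDestination)",
                                         filterPattern="",   # empty pattern = every event
                                         destinationArn=destination_arn)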

Figure 3 – Run script to add subscription filters to existing log groups

Installation Instructions:

  1. Make sure that Python and boto3 are installed and accessible in the client computer – consider loading into a virtual environment to keep dependencies separate.
  2. Set the AWS_PROFILE environment variable to the appropriate AWS account profile.
  3. Log in to the appropriate account, obtain administrator (or other sufficiently privileged) credentials, and add the access key and secret key to the AWS credentials file.
  4. Set the region and output in the AWS config file.
  5. Download and place two python files into a working directory: init_account_central_logging.py and CentralLogging.py.
  6. Run the script using the command python3 ./init_account_central_logging.py -d <LogDestinationArn>.

Use the AWS Management Console to validate the results. Navigate to CloudWatch Logs and view all of the log groups. Each one should now have a subscription filter named “Logs (CentralLogDestination).”

Automatically Adding Subscription Filters to New Log Groups

The final step to set up the centralized log streaming capability is to run a CloudFormation script to create resources that automatically add subscription filters to new log groups. New log groups are created in accounts by resources (e.g., Lambda functions) and by applications. A subscription filter must be added to every new log group in order to deliver its log events to the logging account.

The AddSubscriptionFilter.yml CloudFormation template contains resources to automatically add subscription filters.

First, it creates a role that allows it to access the lambda code that is stored in a centralized location – the <S3 infrastructure-bucket>. (Remember that its S3 bucket policy must contain this account number in order to access the lambda code.)

Second, the template creates the AddSubscriptionLambda, which reuses the core logic shared by the script in the last section. It retrieves the proper destination from the Parameter Store, deletes any existing subscription filter from the log group, and adds the new subscription filter to the newly created log group. This lambda function is triggered by a CloudWatch event rule.
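A handler for that job could look roughly like the sketch below. This is not the shipped AddSubscriptionLambda; the event path follows the standard CloudTrail-via-CloudWatch-Events format, and the Parameter Store name and filter name are the same assumptions as before.

# Illustrative handler: read the new log group's name from the CloudTrail event and
# attach the subscription filter to it.
import boto3

logs = boto3.client("logs")
ssm = boto3.client("ssm")

def handler(event, context):
    log_group = event["detail"]["requestParameters"]["logGroupName"]
    destination_arn = ssm.get_parameter(Name="LogDestination")["Parameter"]["Value"]

    for f in logs.describe_subscription_filters(
            logGroupName=log_group)["subscriptionFilters"]:
        logs.delete_subscription_filter(logGroupName=log_group,
                                        filterName=f["filterName"])

    logs.put_subscription_filter(
        logGroupName=log_group,
        filterName="Logs (CentralLogDestination)",
        filterPattern="",
        destinationArn=destination_arn,
    )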

Third, the CloudFormation creates a Lambda Permission, which allows the event trigger to invoke this particular lambda.

Finally, the CloudFormation template creates an Amazon CloudWatch Events Rule that acts as a trigger for the lambda. This rule looks for an event coming from CloudTrail that signals the creation of a new log group. For each create log group event found, it invokes the AddSubscriptionLambda.
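Expressed imperatively for illustration (the template defines the same rule declaratively), the event pattern and target wiring look like this. The rule name and Lambda ARN are placeholders.

# Illustrative rule creation; the Lambda Permission described above is still required
# so that CloudWatch Events is allowed to invoke the function.
import json
import boto3

events = boto3.client("events")

pattern = {
    "source": ["aws.logs"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["logs.amazonaws.com"],
        "eventName": ["CreateLogGroup"],
    },
}

events.put_rule(Name="NewLogGroupRule", EventPattern=json.dumps(pattern), State="ENABLED")
events.put_targets(
    Rule="NewLogGroupRule",
    Targets=[{"Id": "AddSubscriptionLambda",
              "Arn": "arn:aws:lambda:<Region>:<AccountNumber>:function:AddSubscriptionLambda"}],
)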

Figure 4 – Infrastructure to automatically add a subscription filter to a new log group and the log flow to the centralized account

Installation Instructions:

(Important note: This functionality requires that the LogDestination parameter be properly set to the LogDestinationArn in the Parameter Store before the Lambda will run successfully. The script in the previous step sets this parameter, or it can be done manually. Make certain that the destination specified is in this same region.)

  1. Ensure that the <S3 infrastructure-bucket> has the AddSubscriptionFilter.zip file containing the Python code files lambda.py and CentralLogging.py.
  2. Log in to the appropriate account, and access using administrator credentials. Make sure that the region is set properly.
  3. Navigate to CloudFormation and click the Create Stack button.
  4. Select Specify an Amazon S3 template URL and enter the link for the AddSubscriptionFilter.yml template found in <S3 infrastructure-bucket>.
  5. Enter a stack name, such as AddSubscription.
  6. Enter the two parameters: the <S3 infrastructure-bucket> name (not the ARN) and the folder and file name (e.g., lambdas/AddSubscriptionFilter.zip).
  7. Skip the next page, acknowledge the creation of IAM resources, and Create the stack.

In order to test that the automated addition of subscription filters is working properly, use the AWS Management Console to navigate to CloudWatch Logs and click the Actions button. Select Create New Log Group and enter a random log group name, such as “testLogGroup.” When first created, the log group will not have a subscription filter. After a few minutes, refresh the display and you should see the new subscription filter on the log group. At this point, you can delete the test log group.
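The same check can be scripted. The boto3 sketch below uses an arbitrary log group name and a rough two-minute wait; adjust both as needed.

# Scripted version of the console test: create a throwaway log group, give the event
# rule and Lambda a couple of minutes, then confirm a subscription filter appeared.
import time
import boto3

logs = boto3.client("logs")

logs.create_log_group(logGroupName="testLogGroup")
time.sleep(120)

filters = logs.describe_subscription_filters(
    logGroupName="testLogGroup")["subscriptionFilters"]
print("Subscription filters:", [f["filterName"] for f in filters])

logs.delete_log_group(logGroupName="testLogGroup")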

New Account Setup

As a reminder, when you add new accounts that you want to have stream log events to the central logging account, you will need to configure the new accounts in two places in order for this functionality to work properly.

First, add the account number to the LoggingDestination property DestinationPolicy in the centralLogging.yml template. Then, update the CloudFormation stack.

Second, modify the bucket policy for the <S3 infrastructure-bucket>. Select the Permissions tab, then the Bucket Policy button. Add the new account to allow cross-account access to the lambda code by adding the line “arn:aws:iam::<new account number>:root” to the Principal.AWS list.

Conclusion

Centralized logging is a key component in enterprise multi-account architectures. In this blog post, I have built on the central logging in multi-account environments streaming architecture to automatically subscribe all CloudWatch Logs log groups to send all log events to an S3 bucket in a designated logging account. The solution uses a script to add subscription filters to existing log groups, and a lambda function to automatically place a subscription filter on all new log groups created within the account. This can be used to forward application logs, security logs, VPC flow logs, or any other important logs that are required for audit, security, or compliance purposes.

About the author

David Bailey is a Cloud Infrastructure Architect with AWS Professional Services specializing in serverless application architecture, IoT, and artificial intelligence. He has spent decades architecting and developing complex custom software applications, as well as teaching internationally on object-oriented design, expert systems, and neural networks.

 

 

Getting started with your Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/getting-started-raspberry-pi/

Here on the Raspberry Pi blog, we often share impressive builds made by community members who have advanced making and coding skills. But what about those of you who are just getting started?

For you, we’ve been working hard to update and polish our Getting started resources, including a brand-new video to help you get to grips with your new Pi.

Whether you’re new to electronics and the Raspberry Pi, or a seasoned pro looking to share your knowledge and skills with others, sit back and watch us walk you through the basics of setting up our powerful little computer.

How to set up your Raspberry Pi || Getting started with #RaspberryPi

Learn how to set up your Raspberry Pi for the first time, from plugging in peripherals to loading Raspbian.

We’ve tried to make this video as easy to follow as possible, with only the essential explanations and steps.

As with everything we produce, we want this video to be accessible to the entire world, so if you can translate its text into another language, please follow this link to submit your translation directly through YouTube. You can also add translations to our other YouTube videos here! As a thank you, we’ll display your username in the video descriptions to acknowledge your contributions.

New setup guides and resources

Alongside our shiny new homepage, we’ve also updated our Help section to reflect our newest tech and demonstrate the easiest way for beginners to start their Raspberry Pi journey. We’re now providing a first-time setup guide, and also a walk-through for using your Raspberry Pi that shows you all sorts of things you can do with it. And with guides to our official add-on devices and a troubleshooting section, our updated Help page is your one-stop shop for getting the most out of your Pi.

For parents and teachers, we offer guides on introducing Raspberry Pi and digital making to your children and students. And for those of you who are visual learners, we’ve curated a collection of our videos to help you get making.

As with our videos, we’re looking for people whose first language isn’t English to help us translate our resources. If you’re able to donate some of your time to support this cause, please sign up here.

The forums

We’re very proud of our forum community. Since the birth of the Raspberry Pi, our forums have been the place to go for additional support, conversation, and project bragging.

Raspberry Pi forums

If your question isn’t answered on our Help page, there’s no better place to go than the forums. Nine times out of ten, your question will already have been asked and answered there! And if not, then our friendly forum community will be happy to share their wealth of knowledge and help you out.

Events and clubs

Raspberry Pi and digital making enthusiasts come together across the world at various events and clubs, including Raspberry Jams, Code Club and CoderDojo, and Coolest Projects. These events are perfect for learning more about how people use Raspberry Pi and other technologies for digital making — as a hobby and as a tool for education.

Keep up to date

To keep track of all the goings-on of the Raspberry Pi Foundation, be sure to follow us on Twitter, Facebook, and Instagram, and sign up to our Raspberry Pi Weekly newsletter and the monthly Raspberry Pi LEARN education newsletter.

The post Getting started with your Raspberry Pi appeared first on Raspberry Pi.

Make your own custom LEDs using hot glue!

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/homemade-custom-leds-using-hot-glue/

Tired of using the same old plastic LEDs in your projects? It’s time to grab a hot glue gun and some confectionary moulds to create your own custom LEDs!

Blinky LEDs!

Lighting up an LED is the standard first step into the world of digital making with a Raspberry Pi. For example, at our two-day Picademy training events, budding Raspberry Pi Certified Educators are shown the ropes of classroom digital making by learning how to connect an LED to a Pi and use code to make it blink.
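For anyone who hasn’t written that first blink script yet, a minimal gpiozero version looks something like this. The pin number is only an assumption; use whichever GPIO pin you wire the LED (and its resistor) to.

# Minimal blink sketch with gpiozero; GPIO 17 is assumed.
from gpiozero import LED
from signal import pause

led = LED(17)
led.blink(on_time=1, off_time=1)   # blinking runs on a background thread
pause()                            # keep the script alive so the LED keeps blinking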

Anastasia Hanneken on Twitter

Blinking LED Light @Raspberry_Pi #picademy! https://t.co/zhTODYsBxp

And while LEDs come in various sizes, they’re all pretty much the same shape: small, coloured domes of plastic with pointy legs that always manage to draw blood when I grab them from the depths of my maker drawer.

So why not do away with the boring and make some new LEDs based on your favourite characters and shapes?

Making custom LEDs with a whole lotta hot glue

The process of creating your own custom LEDs is pretty simple, but it’s not without its risk — namely, burnt fingertips and sizzled LEDs! So be careful when making these, and supervise young children throughout the process.

The moulds

I used flexible ice cube trays, but you could also use chocolate moulds. As long as the mould is flexible, this should work — I haven’t tried hard plastic moulds, so I can’t make any promises for those. Also be sure to test whether your mould will withstand the heat of the hot glue!

Check your LEDs

Before you submerge your LEDs in hot glue, check to make sure they work. The easiest way to do this is to set up a testing station using a Pi, a breadboard, some jumper wires, and a resistor. To save having to write code, I used the 3V3 pin and a ground pin.

Remember, the shorter of the two legs connects to the ground pin, while the longer goes to 3V3. If you mix this up, you may end up with a fried LED like this poor LEGO man.

Everything isn’t awesome.

Once you’ve confirmed that your LED works, bend its legs to make it easier to insert it into the glue.

Glue

Next, grab a hot glue gun and fill a mould. The glue will take a while to cool, so you have some time to make sure that all nooks and crannies are filled before you insert an LED.

Tip: test a corner of your mould with the tip of your glue gun to check how heat-resistant it is. One of my moulds didn’t enjoy heat and began to bubble.

Once your mould is properly filled, push an LED into the glue, holding on to the legs to keep your fingertips safe. Have a wiggle around to find the bottom and sides of your mould and ensure that your LED is in the centre.

Pick a colour best suited to your mould. You could try using multiple LEDs on larger moulds to introduce more colours!

You may notice that the LED tries to sink a little and the legs begin to drop. Keep an eye out and adjust them if you need to. They’ll stop moving once the glue begins to set.

These took about ten minutes to cool down.

Be patient

Don’t rush. The hot glue will take time to cool down, especially if you’re using a larger mould like the one for this Stormtrooper helmet.

Here I used a gumdrop LED, which is larger than your standard maker kit LED.

You’ll know that the glue has set when the shape pulls away easily from the mould. It should just pop out when it’s ready.

Pop!

Light it up

Test your new custom LED one more time on your testing rig to ensure you haven’t damaged the connections.

As with all LEDs, they look better in the dark (and terrible when you try to take a photo of them), so try testing them in a dim room or at night. You could also use a box to create a small testing lab if you’re planning to make a lot of these.


Now it’s your turn

What custom LED would you want to make? How would you use it in your next project? And what other fun hacks have you used to augment tech for your builds?

The post Make your own custom LEDs using hot glue! appeared first on Raspberry Pi.

Announcing Local Build Support for AWS CodeBuild

Post Syndicated from Karthik Thirugnanasambandam original https://aws.amazon.com/blogs/devops/announcing-local-build-support-for-aws-codebuild/

Today, we’re excited to announce local build support in AWS CodeBuild.

AWS CodeBuild is a fully managed build service. There are no servers to provision and scale, or software to install, configure, and operate. You just specify the location of your source code, choose your build settings, and CodeBuild runs build scripts for compiling, testing, and packaging your code.

In this blog post, I’ll show you how to set up CodeBuild locally to build and test a sample Java application.

By building an application on a local machine you can:

  • Test the integrity and contents of a buildspec file locally.
  • Test and build an application locally before committing.
  • Identify and fix errors quickly from your local development environment.

Prerequisites

In this post, I am using AWS Cloud9 IDE as my development environment.

If you would like to use AWS Cloud9 as your IDE, follow the express setup steps in the AWS Cloud9 User Guide.

The AWS Cloud9 IDE comes with Docker and Git already installed. If you are going to use your laptop or desktop machine as your development environment, install Docker and Git before you start.

Steps to build CodeBuild image locally

Run git clone https://github.com/aws/aws-codebuild-docker-images.git to download this repository to your local machine.

$ git clone https://github.com/aws/aws-codebuild-docker-images.git

Let’s build a local CodeBuild image for the JDK 8 environment. The Dockerfile for JDK 8 is present in /aws-codebuild-docker-images/ubuntu/java/openjdk-8.

Edit the Dockerfile to remove the last line, ENTRYPOINT ["dockerd-entrypoint.sh"], and save the file.

Run cd ubuntu/java/openjdk-8 to change the directory in your local workspace.

Run docker build -t aws/codebuild/java:openjdk-8 . to build the Docker image locally. This command will take a few minutes to complete.

$ cd aws-codebuild-docker-images
$ cd ubuntu/java/openjdk-8
$ docker build -t aws/codebuild/java:openjdk-8 .

Steps to setup CodeBuild local agent

Run the following Docker pull command to download the local CodeBuild agent.

$ docker pull amazon/aws-codebuild-local:latest --disable-content-trust=false

Now you have the local agent image on your machine and can run a local build.

Run the following git command to download a sample Java project.

$ git clone https://github.com/karthiksambandam/sample-web-app.git

Steps to use the local agent to build a sample project

Let’s build the sample Java project using the local agent.

Execute the following Docker command to run the local agent and build the sample web app repository you cloned earlier.

$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=aws/codebuild/java:openjdk-8" -e "ARTIFACTS=/home/ec2-user/environment/artifacts" -e "SOURCE=/home/ec2-user/environment/sample-web-app" amazon/aws-codebuild-local

Note: You need to provide three environment variables, namely IMAGE_NAME, SOURCE, and ARTIFACTS.

IMAGE_NAME: The name of your build environment image.

SOURCE: The absolute path to your source code directory.

ARTIFACTS: The absolute path to your artifact output folder.

When you run the sample project, you get a runtime error that says the YAML file does not exist. This is because a buildspec.yml file is not included in the sample web project. AWS CodeBuild requires a buildspec.yml to run a build. For more information about buildspec.yml, see Build Spec Example in the AWS CodeBuild User Guide.

Let’s add a buildspec.yml file with the following content to the sample-web-app folder and then rebuild the project.

version: 0.2

phases:
  build:
    commands:
      - echo Build started on `date`
      - mvn install

artifacts:
  files:
    - target/javawebdemo.war

$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=aws/codebuild/java:openjdk-8" -e "ARTIFACTS=/home/ec2-user/environment/artifacts" -e "SOURCE=/home/ec2-user/environment/sample-web-app" amazon/aws-codebuild-local

This time your build should be successful. Upon successful execution, look in the /artifacts folder for the final built artifacts.zip file to validate.

Conclusion:

In this blog post, I showed you how to quickly set up the CodeBuild local agent to build projects right from your local desktop machine or laptop. As you see, local builds can improve developer productivity by helping you identify and fix errors quickly.

I hope you found this post useful. Feel free to leave your feedback or suggestions in the comments.

CI/CD with Data: Enabling Data Portability in a Software Delivery Pipeline with AWS Developer Tools, Kubernetes, and Portworx

Post Syndicated from Kausalya Rani Krishna Samy original https://aws.amazon.com/blogs/devops/cicd-with-data-enabling-data-portability-in-a-software-delivery-pipeline-with-aws-developer-tools-kubernetes-and-portworx/

This post is written by Eric Han, Vice President of Product Management at Portworx, and Asif Khan, Solutions Architect.

Data is the soul of an application. As containers make it easier to package and deploy applications faster, testing plays an even more important role in the reliable delivery of software. Given that all applications have data, development teams want a way to reliably control, move, and test using real application data or, at times, obfuscated data.

For many teams, moving application data through a CI/CD pipeline, while honoring compliance and maintaining separation of concerns, has been a manual task that doesn’t scale. At best, it is limited to a few applications, and it is not portable across environments. The goal should be to make running and testing stateful containers (think databases and message buses, where operations are tracked) as easy as stateless ones (such as web front ends, where they often are not).

Why is state important in testing scenarios? One reason is that many bugs manifest only when code is tested against real data. For example, we might simply want to test a database schema upgrade but a small synthetic dataset does not exercise the critical, finer corner cases in complex business logic. If we want true end-to-end testing, we need to be able to easily manage our data or state.

In this blog post, we define a CI/CD pipeline reference architecture that can automate data movement between applications. We also provide the steps to follow to configure the CI/CD pipeline.

 

Stateful Pipelines: Need for Portable Volumes

As part of continuous integration, testing, and deployment, a team may need to reproduce a bug found in production against a staging setup. Here, the hosting environment is comprised of a cluster with Kubernetes as the scheduler and Portworx for persistent volumes. The testing workflow is then automated by AWS CodeCommit, AWS CodePipeline, and AWS CodeBuild.

Portworx offers Kubernetes storage that can be used to make persistent volumes portable between AWS environments and pipelines. The addition of Portworx to the AWS Developer Tools continuous deployment for Kubernetes reference architecture adds persistent storage and storage orchestration to a Kubernetes cluster. The example uses MongoDB as the demonstration of a stateful application. In practice, the workflow applies to any containerized application such as Cassandra, MySQL, Kafka, and Elasticsearch.

Using the reference architecture, a developer calls CodePipeline to trigger a snapshot of the running production MongoDB database. Portworx then creates a block-based, writable snapshot of the MongoDB volume. Meanwhile, the production MongoDB database continues serving end users and is uninterrupted.

Without the Portworx integrations, a manual process would require an application-level backup of the database instance that is outside of the CI/CD process. For larger databases, this could take hours and impact production. The use of block-based snapshots follows best practices for resilient and non-disruptive backups.

As part of the workflow, CodePipeline deploys a new MongoDB instance for staging onto the Kubernetes cluster and mounts the second Portworx volume that has the data from production. CodePipeline triggers the snapshot of a Portworx volume through an AWS Lambda function, as shown here.

AWS Developer Tools with Kubernetes: Integrated Workflow with Portworx

In the following workflow, a developer is testing changes to a containerized application that calls on MongoDB. The tests are performed against a staging instance of MongoDB. The same workflow applies if changes were on the server side. The original production deployment is scheduled as a Kubernetes deployment object and uses Portworx as the storage for the persistent volume.

The continuous deployment pipeline runs as follows:

  • Developers integrate bug fix changes into a main development branch that gets merged into a CodeCommit master branch.
  • Amazon CloudWatch triggers the pipeline when code is merged into a master branch of an AWS CodeCommit repository.
  • AWS CodePipeline sends the new revision to AWS CodeBuild, which builds a Docker container image with the build ID.
  • AWS CodeBuild pushes the new Docker container image tagged with the build ID to an Amazon ECR registry.
  • Kubernetes downloads the new container (for the database client) from Amazon ECR and deploys the application (as a pod) and staging MongoDB instance (as a deployment object).
  • AWS CodePipeline, through a Lambda function, calls Portworx to snapshot the production MongoDB and deploy a staging instance of MongoDB.
  • Portworx provides a snapshot of the production instance as the persistent storage for the staging MongoDB.
  • The MongoDB instance mounts the snapshot.

At this point, the staging setup mimics a production environment. Teams can run integration and full end-to-end tests, using partner tooling, without impacting production workloads. The full pipeline is shown here.
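Purely as an illustration of the snapshot step in that workflow, a snapshot request against a Portworx-backed volume can be expressed as a Kubernetes custom resource, roughly as in the Python sketch below. The CRD group and version, PVC name, and namespace are assumptions rather than values from this reference architecture; the Lambda source linked at the end of this post is the authoritative implementation.

# Illustration only: request a snapshot of the MongoDB PVC by creating a
# VolumeSnapshot custom resource. Verify the CRD details against your cluster.
from kubernetes import client, config

config.load_kube_config()          # or load_incluster_config() when running in-cluster
api = client.CustomObjectsApi()

snapshot = {
    "apiVersion": "volumesnapshot.external-storage.k8s.io/v1",   # assumed CRD
    "kind": "VolumeSnapshot",
    "metadata": {"name": "mongo-snapshot", "namespace": "default"},
    "spec": {"persistentVolumeClaimName": "px-mongo-pvc"},        # assumed PVC name
}

api.create_namespaced_custom_object(
    group="volumesnapshot.external-storage.k8s.io",
    version="v1",
    namespace="default",
    plural="volumesnapshots",
    body=snapshot,
)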

 

Summary

This reference architecture showcases how development teams can easily move data between production and staging for the purposes of testing. Instead of taking application-specific manual steps, all operations in this CodePipeline architecture are automated and tracked as part of the CI/CD process.

This integrated experience is part of making stateful containers as easy as stateless. With AWS CodePipeline for CI/CD process, developers can easily deploy stateful containers onto a Kubernetes cluster with Portworx storage and automate data movement within their process.

The reference architecture and code are available on GitHub:

● Reference architecture: https://github.com/portworx/aws-kube-codesuite
● Lambda function source code for Portworx additions: https://github.com/portworx/aws-kube-codesuite/blob/master/src/kube-lambda.py

For more information about persistent storage for containers, visit the Portworx website. For more information about Code Pipeline, see the AWS CodePipeline User Guide.

Tinkernut’s hidden Coke bottle spy cam

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/tinkernuts-spy-cam/

Go undercover and keep an eye on your stuff with this brilliant secret Coke bottle spy cam from Tinkernut!

Secret Coke Bottle SPY CAM! – Weekend Hacker #1803

SPECIAL NOTE*** THE FULL TUTORIAL WILL BE AVAILABLE NEXT WEEK April Fools! What a terrible day. So many pranks. You can’t believe anything you read. People invading your space. The mental and physical anguish of enduring the day. It’s time to fight back! Let’s catch the perps in action by making a device that always watches.

Keeping tabs

A Raspberry Pi Zero W, a small camera, and a rechargeable Lithium Polymer (LiPo) battery constitute the bulk of this project’s tech. A pair of 3D-printed parts, and gelatine-solidified Coke Zero make up the fake fizzy body.

“So let’s make this video as short as possible and just buy a cheap pre-made spy cam off of Amazon. Just kidding,” Tinkernut jokes in the tutorial video for the project, before going through the step-by-step process of using the Raspberry Pi to “DIY this the right way”.

After accessing the Zero W from his laptop via SSH, Tinkernut opted for using the rpi_camera_surveillance_system Python script written by GitHub user RuiSantosdotme to control the spy cam. Luckily, this meant no additional library setup, and basically no lag on the video feed.

What we want to do is create a script that activates the camera and serves it to a web page so that we can access it from any web browser. There are plenty of different ways to do this (Motion, Raspivid, etc), but I found a simple Python script that does everything I need it to do and doesn’t require any extra software or libraries to install. The best thing about it is that the lag time is practically unnoticeable.
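For reference, the heart of such a script (a sketch in the spirit of the one Tinkernut used, not a copy of it) looks roughly like this, using picamera and Python’s built-in HTTP server. The resolution, frame rate, and port are assumptions.

# Minimal MJPEG streaming sketch: the camera writes JPEG frames into StreamingOutput,
# and the HTTP handler relays them to the browser as a multipart stream.
import io
import picamera
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Condition

class StreamingOutput:
    def __init__(self):
        self.frame = b''
        self.buffer = io.BytesIO()
        self.condition = Condition()

    def write(self, buf):
        if buf.startswith(b'\xff\xd8'):        # JPEG magic bytes mark a new frame
            self.buffer.truncate()
            with self.condition:
                self.frame = self.buffer.getvalue()
                self.condition.notify_all()
            self.buffer.seek(0)
        return self.buffer.write(buf)

output = StreamingOutput()

class StreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'multipart/x-mixed-replace; boundary=FRAME')
        self.end_headers()
        while True:
            with output.condition:
                output.condition.wait()
                frame = output.frame
            self.wfile.write(b'--FRAME\r\n')
            self.send_header('Content-Type', 'image/jpeg')
            self.send_header('Content-Length', len(frame))
            self.end_headers()
            self.wfile.write(frame)
            self.wfile.write(b'\r\n')

with picamera.PiCamera(resolution='640x480', framerate=24) as camera:
    camera.start_recording(output, format='mjpeg')
    try:
        HTTPServer(('', 8000), StreamHandler).serve_forever()
    finally:
        camera.stop_recording()

Point a browser at http://<your-pi-address>:8000 to see the live feed.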

With the code in place, every boot-up of the Raspberry Pi automatically launches both the script and a web page of the live video, allowing for constant monitoring of potential sneaks and thieves.

The project is powered by a 1500mAh LiPo battery and the Adafruit LiPo charger. It also includes a simple on/off switch, which Tinkernut wired to the charger and the Pi’s PP1 and PP6 connector pads.

Tinkernut decided to use a Coke Zero bottle for the build, incorporating 3D-printed parts to house the Pi, and a mix of Coke and gelatine to create a realistic-looking filling for the bottle. However, the setup can be transferred to pretty much any hollow item in your home, say, a cookie jar or a cracker box. So get creative and get spying!

A complete spy cam how-to

If you’d like to make your own secret spy cam, you can find a tutorial for Tinkernut’s build at hackster.io, or follow along with his video below. Also make sure to subscribe to his YouTube channel to be updated on all his newest builds — they’re rather splendid.

BUILD: Coke Bottle SPY CAM! – Tinkernut Workbench

Learn how to take a regular Coke Zero bottle, cram a Raspberry Pi and webcam inside of it, and have it still look like a regular Coke Zero bottle. Why would you want to do this? To spy on those irritating April Fooligans!!!

And if you’re interested in more spy-themed digital making projects, check out our complete 007 how-to guide for links to tutorials such as our Sense HAT puzzle box, Parent detector, and Laser tripwire.

The post Tinkernut’s hidden Coke bottle spy cam appeared first on Raspberry Pi.

How to migrate a Hue database from an existing Amazon EMR cluster

Post Syndicated from Anvesh Ragi original https://aws.amazon.com/blogs/big-data/how-to-migrate-a-hue-database-from-an-existing-amazon-emr-cluster/

Hadoop User Experience (Hue) is an open-source, web-based, graphical user interface for use with Amazon EMR and Apache Hadoop. The Hue database stores things like users, groups, authorization permissions, Apache Hive queries, Apache Oozie workflows, and so on.

There might come a time when you want to migrate your Hue database to a new EMR cluster. For example, you might want to upgrade from an older version of the Amazon EMR AMI (Amazon Machine Image), but your Hue application and its database have had a lot of customization. You can avoid re-creating these user entities and retain query/workflow histories in Hue by migrating the existing Hue database, or the remote database in Amazon RDS, to a new cluster.

By default, Hue user information and query histories are stored in a local MySQL database on the EMR cluster’s master node. However, you can create one or more Hue-enabled clusters using a configuration stored in Amazon S3 and a remote MySQL database in Amazon RDS. This allows you to preserve user information and query history that Hue creates without keeping your Amazon EMR cluster running.

This post describes the step-by-step process for migrating the Hue database from an existing EMR cluster.

Note: Amazon EMR supports different Hue versions across different AMI releases. Keep in mind the compatibility of Hue versions between the old and new clusters in this migration activity. Currently, Hue 3.x.x versions are not compatible with Hue 4.x.x versions, and therefore a migration between these two Hue versions might create issues. In addition, Hue 3.10.0 is not backward compatible with its previous 3.x.x versions.

Before you begin

First, let’s create a new testUser in Hue on an existing EMR cluster, as shown following:

You will use these credentials later to log in to Hue on the new EMR cluster and validate whether you have successfully migrated the Hue database.

Let’s get started!

Migration how-to

Follow these steps to migrate your database to a new EMR cluster and then validate the migration process.

1.) Make a backup of the existing Hue database.

Use SSH to connect to the master node of the old cluster, as shown following (if you are using Linux/Unix/macOS), and dump the Hue database to a JSON file.

$ ssh -i ~/key.pem hadoop@<old-cluster-master-public-DNS>
$ /usr/lib/hue/build/env/bin/hue dumpdata > ./hue-mysql.json

Edit the hue-mysql.json output file by removing all JSON objects that have useradmin.userprofile in the model field, and save the file. For example, remove the objects as shown following:

{
  "pk": 1,
  "model": "useradmin.userprofile",
  "fields": {
    "last_activity": "2018-01-10T11:41:04",
    "creation_method": "HUE",
    "first_login": false,
    "user": 1,
    "home_directory": "/user/hue_admin"
  }
},
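If the dump contains many such objects, hand-editing gets tedious. The same cleanup can be done with a few lines of Python; the file name is assumed to match the dump command above.

# Strip every useradmin.userprofile object from the dump in place.
import json

with open("hue-mysql.json") as f:
    records = json.load(f)

records = [r for r in records if r.get("model") != "useradmin.userprofile"]

with open("hue-mysql.json", "w") as f:
    json.dump(records, f, indent=2)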

2.) Store the hue-mysql.json file on persistent storage like Amazon S3.

You can copy the file from the old EMR cluster to Amazon S3 using the AWS CLI or Secure Copy (SCP) client. For example, the following uses the AWS CLI:

$ aws s3 cp ./hue-mysql.json s3://YourBucketName/folder/

3.) Recover/reload the backed-up Hue database into the new EMR cluster.

a.) Use SSH to connect to the master node of the new EMR cluster, and stop the Hue service that is already running.

$ ssh -i ~/key.pem hadoop@<new-cluster-master-public-DNS>
$ sudo stop hue
hue stop/waiting

b.) Connect to the Hue database—either the local MySQL database or the remote database in Amazon RDS for your cluster as shown following, using the mysql client.

$ mysql -h HOST -u USER -pPASSWORD

For a local MySQL database, you can find the hostname, user name, and password for connecting to the database in the /etc/hue/conf/hue.ini file on the master node.

[[database]]
    engine = mysql
    name = huedb
    case_insensitive_collation = utf8_unicode_ci
    test_charset = utf8
    test_collation = utf8_bin
    host = ip-172-31-37-133.us-west-2.compute.internal
    user = hue
    test_name = test_huedb
    password = QdWbL3Ai6GcBqk26
    port = 3306

Based on the preceding example configuration, the sample command is as follows. (Replace the host, user, and password details based on your EMR cluster settings.)

$ mysql -h ip-172-31-37-133.us-west-2.compute.internal -u hue -pQdWbL3Ai6GcBqk26

c.) Drop the existing Hue database with the name huedb from the MySQL server.

mysql> DROP DATABASE IF EXISTS huedb;

d.) Create a new empty database with the same name huedb.

mysql> CREATE DATABASE huedb DEFAULT CHARACTER SET utf8 DEFAULT COLLATE=utf8_bin;

e.) Now, synchronize Hue with its database huedb.

$ sudo /usr/lib/hue/build/env/bin/hue syncdb --noinput
$ sudo /usr/lib/hue/build/env/bin/hue migrate

(This populates the new huedb with all Hue tables that are required.)

f.) Log in to MySQL again, and drop the foreign key to clean tables.

mysql> SHOW CREATE TABLE huedb.auth_permission;

In the following example, replace <id value> with the actual value from the preceding output.

mysql> ALTER TABLE huedb.auth_permission DROP FOREIGN KEY
content_type_id_refs_id_<id value>;

g.) Delete the contents of the django_content_type table.

mysql> DELETE FROM huedb.django_content_type;

h.) Download the backed-up Hue database dump from Amazon S3 to the new EMR cluster, and load it into Hue.

$ aws s3 cp s3://YourBucketName/folder/hue-mysql.json ./
$ sudo /usr/lib/hue/build/env/bin/hue loaddata ./hue-mysql.json

i.) In MySQL, add the foreign key content_type_id back to the auth_permission table.

mysql> use huedb;
mysql> ALTER TABLE huedb.auth_permission ADD FOREIGN KEY (`content_type_id`) REFERENCES `django_content_type` (`id`);

j.) Start the Hue service again.

$ sudo start hue
hue start/running, process XXXX

That’s it! Now, verify whether you can successfully access the Hue UI, and sign in using your existing testUser credentials.

After a successful sign in to Hue on the new EMR cluster, you should see a similar Hue homepage as shown following with testUser as the user signed in:

Conclusion

You have now learned how to migrate an existing Hue database to a new Amazon EMR cluster and validate the migration process. If you have any similar Amazon EMR administration topics that you want to see covered in a future post, please let us know in the comments below.


Additional Reading

If you found this post useful, be sure to check out Anomaly Detection Using PySpark, Hive, and Hue on Amazon EMR and Dynamically Create Friendly URLs for Your Amazon EMR Web Interfaces.


About the Author


Anvesh Ragi is a Big Data Support Engineer with Amazon Web Services. He works closely with AWS customers to provide them architectural and engineering assistance for their data processing workflows. In his free time, he enjoys traveling and going for hikes.

Your Hard Drive Crashed — Get Working Again Fast with Backblaze

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/how-to-recover-your-files-with-backblaze/

The worst thing for a computer user has happened. The hard drive on your computer crashed, or your computer is lost or completely unusable.

Fortunately, you’re a Backblaze customer with a current backup in the cloud. That’s great. The challenge is that you’ve got a presentation to make in just 48 hours and the document and materials you need for the presentation were on the hard drive that crashed.

Relax. Backblaze has your data (and your back). The question is, how do you get what you need to make that presentation deadline?

Here are some strategies you could use.

One — The first approach is to get back the presentation file and materials you need to meet your presentation deadline as quickly as possible. You can use another computer (maybe even your smartphone) to make that presentation.

Two — The second approach is to get your computer (or a new computer, if necessary) working again and restore all the files from your Backblaze backup.

Let’s start with Option One, which gets you back to work with just the files you need now as quickly as possible.

Option One — You’ve Got a Deadline and Just Need Your Files

Getting Back to Work Immediately

You want to get your computer working again as soon as possible, but perhaps your top priority is getting access to the files you need for your presentation. The computer can wait.

Find a Computer to Use

First of all. You’re going to need a computer to use. If you have another computer handy, you’re all set. If you don’t, you’re going to need one. Here are some ideas on where to find one:

  • Family and Friends
  • Work
  • Neighbors
  • Local library
  • Local school
  • Community or religious organization
  • Local computer shop
  • Online store

If you have a smartphone that you can use to give your presentation or to print materials, that’s great. With the Backblaze app for iOS and Android, you can download files directly from your Backblaze account to your smartphone. You also have the option with your smartphone to email or share files from your Backblaze backup so you can use them elsewhere.

Download The File(s) You Need

Once you have the computer, you need to connect to your Backblaze backup through a web browser or the Backblaze smartphone app.

Backblaze Web Admin

Sign into your Backblaze account. You can download the files directly or use the share link to share files with yourself or someone else.

If you need step-by-step instructions on retrieving your files, see Restore the Files to the Drive section below. You also can find help at https://help.backblaze.com/hc/en-us/articles/217665888-How-to-Create-a-Restore-from-Your-Backblaze-Backup.

Smartphone App

If you have an iOS or Android smartphone, you can use the Backblaze app and retrieve the files you need. You then could view the file on your phone, use a smartphone app with the file, or email it to yourself or someone else.

Backblaze Smartphone app (iOS)

Using one of the approaches above, you got your files back in time for your presentation. Way to go!

Now, the next step is to get the computer with the bad drive running again and restore all your files, or, if that computer is no longer usable, restore your Backblaze backup to a new computer.

Option Two — You Need a Working Computer Again

Getting the Computer with the Failed Drive Running Again (or a New Computer)

If the computer with the failed drive can’t be saved, then you’re going to need a new computer. A new computer likely will come with the operating system installed and ready to boot. If you’ve got a running computer and are ready to restore your files from Backblaze, you can skip forward to Restore the Files to the Drive.

If you need to replace the hard drive in your computer before you restore your files, you can continue reading.

Buy a New Hard Drive to Replace the Failed Drive

The hard drive is gone, so you’re going to need a new drive. If you have a computer or electronics store nearby, you could get one there. Another choice is to order a drive online and pay for one or two-day delivery. You have a few choices:

  1. Buy a hard drive of the same type and size you had
  2. Upgrade to a drive with more capacity
  3. Upgrade to an SSD. SSDs cost more but they are faster, more reliable, and less susceptible to jolts, magnetic fields, and other hazards that can affect a drive. Otherwise, they work the same as a hard disk drive (HDD) and most likely will work with the same connector.


Hard Disk Drive (HDD)

Solid State Drive (SSD)


Be sure that the drive dimensions are compatible with where you’re going to install the drive in your computer, and that the drive connector is compatible with your computer system (SATA, PCIe, etc.). Here’s some help.

Install the Drive

If you’re handy with computers, you can install the drive yourself. It’s not hard, and there are numerous videos on YouTube and elsewhere on how to do this. Just be sure to note how everything was connected so you can get everything connected and put back together correctly. Also, be sure that you discharge any static electricity from your body by touching something metallic before you handle anything inside the computer. If all this sounds like too much to handle, find a friend or a local computer store to help you.

Note:  If the drive that failed is a boot drive for your operating system (either Macintosh or Windows), you need to make sure that the drive is bootable and has the operating system files on it. You may need to reinstall from an operating system source disk or install files.

Restore the Files to the Drive

To start, you will need to sign in to the Backblaze website with your registered email address and password. Visit https://secure.backblaze.com/user_signin.htm to login.

Sign In to Your Backblaze Account

Selecting the Backup

Once logged in, you will be brought to the account Overview page. On this page, all of the computers registered for backup under your account are shown with some basic information about each. Select the backup from which you wish to restore data by using the appropriate “Restore” button.

Screenshot of Admin for Selecting the Type of Restore

Selecting the Type of Restore

Backblaze offers three different ways in which you can receive your restore data: downloadable ZIP file, USB flash drive, or USB hard drive. The downloadable ZIP restore option will create a ZIP file of the files you request that is made available for download for 7 days. ZIP restores do not have any additional cost and are a great option for individual files or small sets of data.

Depending on the speed of your internet connection to the Backblaze data center, downloadable restores may not always be the best option for restoring very large amounts of data. ZIP restores are limited to 500 GB per request and a maximum of 5 active requests can be submitted under a single account at any given time.

USB flash and hard drive restores are built with the data you request and then shipped to an address of your choosing via FedEx Overnight or FedEx Priority International. USB flash restores cost $99 and can contain up to 128 GB (110,000 MB of data) and USB hard drive restores cost $189 and can contain up to 4TB max (3,500,000 MB of data). Both include the cost of shipping.

You can return the USB drive within 30 days for a full refund with our Restore Return Refund Program, effectively making the process of restoring free, even with a shipped USB drive.

Screenshot of Admin for Selecting the Backup

Selecting Files for Restore

Using the left hand file viewer, navigate to the location of the files you wish to restore. You can use the disclosure triangles to see subfolders. Clicking on a folder name will display the folder’s files in the right hand file viewer. If you are attempting to restore files that have been deleted or are otherwise missing or files from a failed or disconnected secondary or external hard drive, you may need to change the time frame parameters.

Put checkmarks next to disks, files or folders you’d like to recover. Once you have selected the files and folders you wish to restore, select the “Continue with Restore” button above or below the file viewer. Backblaze will then build the restore via the option you select (ZIP or USB drive). You’ll receive an automated email notifying you when the ZIP restore has been built and is ready for download or when the USB restore drive ships.

If you are using the downloadable ZIP option, and the restore is over 2 GB, we highly recommend using the Backblaze Downloader for better speed and reliability. We have a guide on using the Backblaze Downloader for Mac OS X or for Windows.

For additional assistance, visit our help files at https://help.backblaze.com/hc/en-us/articles/217665888-How-to-Create-a-Restore-from-Your-Backblaze-Backup

Screenshot of Admin for Selecting Files for Restore

Extracting the ZIP

Recent versions of both macOS and Windows have built-in capability to extract files from a ZIP archive. If the built-in capabilities aren’t working for you, you can find additional utilities for Macintosh and Windows.

Reactivating your Backblaze Account

Now that you’ve got a working computer again, you’re going to need to reinstall Backblaze Backup (if it’s not on the system already) and connect with your existing account. Start by downloading and reinstalling Backblaze.

If you’ve restored the files from your Backblaze Backup to your new computer or drive, you don’t want to have to reupload the same files again to your Backblaze backup. To let Backblaze know that this computer is on the same account and has the same files, you need to use “Inherit Backup State.” See https://help.backblaze.com/hc/en-us/articles/217666358-Inherit-Backup-State

Screenshot of Admin for Inherit Backup State

That’s It

You should be all set, either with the files you needed for your presentation, or with a restored computer that is again ready to do productive work.

We hope your presentation wowed ’em.

If you have any additional questions on restoring from a Backblaze backup, please ask away in the comments. Also, be sure to check out our help resources at https://www.backblaze.com/help.html.

The post Your Hard Drive Crashed — Get Working Again Fast with Backblaze appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.