Tag Archives: Uncategorized

Optimizing Disk Usage on Amazon ECS

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/optimizing-disk-usage-on-amazon-ecs/

My colleague Jay McConnell sent a nice guest post that describes how to track and optimize the disk space used in your Amazon ECS cluster.

Failure to monitor disk space utilization can cause problems that prevent Docker containers from working as expected. Amazon EC2 instance disks are used for multiple purposes, such as Docker daemon logs, containers, and images. This post covers techniques to monitor and reclaim disk space on the cluster of EC2 instances used to run your containers.

Amazon ECS is a highly scalable, high performance container management service that supports Docker containers and allows you to run applications easily on a managed cluster of Amazon EC2 instances. You can use ECS to schedule the placement of containers across a cluster of EC2 instances based on your resource needs, isolation policies, and availability requirements.

The ECS-optimized AMI stores images and containers in an EBS volume that uses the devicemapper storage driver in a direct-lvm configuration. As devicemapper stores every image and container in a thin-provisioned virtual device, free space for container storage is not visible through standard Linux utilities such as df. This poses an administrative challenge when it comes to monitoring free space and can also result in increased time troubleshooting task failures, as the cause may not be immediately obvious.
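Because df cannot see into the thin pool, the free-space figures have to be read from docker info output instead. The sketch below runs the same grep/awk extraction that the monitoring script in this post uses, but against canned example output (the numbers are invented) so it can be tried anywhere; on a live ECS instance you would pipe docker info itself.

```shell
# Stand-in for real `docker info` output on a devicemapper host (values invented).
sample_info=$(cat <<'EOF'
Storage Driver: devicemapper
 Pool Name: docker-docker--pool
 Data Space Used: 2.483 GB
 Data Space Available: 9.367 GB
 Metadata Space Used: 2.875 MB
 Metadata Space Available: 1.069 GB
EOF
)

# Pull out the "<value> <unit>" pairs; $4 is the number, $5 the unit.
# Note the grep is case-sensitive, so "Data" does not also match "Metadata".
data_free=$(echo "$sample_info" | grep "Data Space Available" | awk '{print $4, $5}')
meta_free=$(echo "$sample_info" | grep "Metadata Space Available" | awk '{print $4, $5}')
echo "data: $data_free"   # prints: data: 9.367 GB
echo "meta: $meta_free"   # prints: meta: 1.069 GB
```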

Disk space errors can result in new tasks failing to launch with the following error message:

 Error running deviceCreate (createSnapDevice) dm_task_run failed

NOTE: The scripts and techniques described in this post were tested against the ECS 2016.03.a AMI. You may need to modify these techniques depending on your operating system and environment.


You can use Amazon CloudWatch custom metrics to track EC2 instance disk usage. After a CloudWatch metric is created, you can add a CloudWatch alarm to alert you proactively, before low disk space causes a problem on your cluster.
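The post doesn't show the alarm itself, so here is a hedged sketch built on the FreeDataStorage metric published in Step 2 below; the instance ID, SNS topic ARN, region and 2 GB threshold are all placeholder assumptions. The command is echoed rather than executed, so the sketch is safe to run as-is; drop the echo to create the alarm for real.

```shell
# Placeholder values - substitute your own instance ID, SNS topic and region.
INSTANCE_ID="i-0123456789abcdef0"
TOPIC_ARN="arn:aws:sns:us-east-1:123456789012:disk-alerts"
THRESHOLD=$((2 * 1000 * 1000 * 1000))   # alert when free data space drops below ~2 GB

# Build the put-metric-alarm call; echoed here as a dry run.
CMD="aws cloudwatch put-metric-alarm \
  --alarm-name ecs-${INSTANCE_ID}-low-disk \
  --namespace ECS/${INSTANCE_ID} \
  --metric-name FreeDataStorage \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --comparison-operator LessThanThreshold \
  --threshold ${THRESHOLD} \
  --unit Bytes \
  --alarm-actions ${TOPIC_ARN} \
  --region us-east-1"
echo "$CMD"
```

LessThanThreshold is the key choice here: the script publishes free bytes, so you alarm when the value falls below the threshold, not when it rises above it.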

Step 1: Create an IAM role

The first step is to ensure that the EC2 instance profile for the EC2 instances in the ECS cluster uses the “cloudwatch:PutMetricData” policy, as this is required to publish to CloudWatch.
In the IAM console, choose Policies, Create Policy. Choose Create Your Own Policy, name it “CloudwatchPutMetricData”, and paste in the following policy in JSON:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudwatchPutMetricData",
            "Effect": "Allow",
            "Action": [
                "cloudwatch:PutMetricData"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
After you have saved the policy, navigate to Roles and select the role attached to the EC2 instances in your ECS cluster. Choose Attach Policy, select the “CloudwatchPutMetricData” policy, and choose Attach Policy.

Step 2: Push metrics to CloudWatch

Open a shell to each EC2 instance in the ECS cluster. Open a text editor and create the following bash script:


#!/bin/bash
### Get docker free data and metadata space and push to CloudWatch metrics
### requirements:
###  * must be run from inside an EC2 instance
###  * docker with devicemapper backing storage
###  * aws-cli configured with instance-profile/user with the put-metric-data permissions
###  * local user with rights to run docker cli commands
### Created by Jay McConnell

# install aws-cli, bc and jq if required
if [ ! -f /usr/bin/aws ]; then
  yum -qy -d 0 -e 0 install aws-cli
fi
if [ ! -f /usr/bin/bc ]; then
  yum -qy -d 0 -e 0 install bc
fi
if [ ! -f /usr/bin/jq ]; then
  yum -qy -d 0 -e 0 install jq
fi

# Collect region and instance id from the EC2 instance metadata service
AWSREGION=`curl -ss http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region`
AWSINSTANCEID=`curl -ss http://169.254.169.254/latest/meta-data/instance-id`

function convertUnits {
  # convert units back to bytes as both docker api and cli only provide friendly units
  if [ "$1" == "b" ] ; then
    echo $2
  elif [ "$1" == "kb" ] ; then
    echo "$2*1000" | bc | awk '{print $1}' FS="."
  elif [ "$1" == "mb" ] ; then
    echo "$2*1000*1000" | bc | awk '{print $1}' FS="."
  elif [ "$1" == "gb" ] ; then
    echo "$2*1000*1000*1000" | bc | awk '{print $1}' FS="."
  elif [ "$1" == "tb" ] ; then
    echo "$2*1000*1000*1000*1000" | bc | awk '{print $1}' FS="."
  else
    echo "Unknown unit $1"
    exit 1
  fi
}

function getMetric {
  # Get free space for the given pool (Data or Metadata) and emit "unit value"
  if [ "$1" == "Data" ] || [ "$1" == "Metadata" ] ; then
    echo $(docker info | grep "$1 Space Available" | awk '{print tolower($5), $4}')
  else
    echo "Metric must be either 'Data' or 'Metadata'"
    exit 1
  fi
}

data=$(convertUnits `getMetric Data`)
aws cloudwatch put-metric-data --value $data --namespace ECS/$AWSINSTANCEID --unit Bytes --metric-name FreeDataStorage --region $AWSREGION
data=$(convertUnits `getMetric Metadata`)
aws cloudwatch put-metric-data --value $data --namespace ECS/$AWSINSTANCEID --unit Bytes --metric-name FreeMetadataStorage --region $AWSREGION

Next, set the script to be executable:

chmod +x /path/to/metricscript.sh

Now, schedule the script to run every 5 minutes via cron. To do this, create the file /etc/cron.d/ecsmetrics with the following contents:

*/5 * * * * root /path/to/metricscript.sh

This pulls both free data and metadata space every 5 minutes and pushes them to CloudWatch under the namespace ECS/&lt;instance ID&gt;.

Disk cleanup

The next step is to clean up the disk, either automatically on a schedule or manually. This post covers cleanup of tasks and images; there is a great blog post, Send ECS Container Logs to CloudWatch Logs for Centralized Monitoring, that covers pushing log files to CloudWatch. Using CloudWatch Logs instead of local log files reduces disk utilization and provides a resilient and centralized place from which to manage logs.

Take a look at what you can do to remove unneeded containers and images from your instances.

Delete containers

Stopped containers should be deleted if they are no longer needed. The ECS agent, by default, deletes all containers that have exited every 3 hours. This behavior can be customized by adding the following to /etc/ecs/ecs.config:


ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION=10m

This sets the frequency of the cleanup task to 10 minutes.
For this change to take effect, the ECS agent needs to be restarted, which can be done via ssh:

stop ecs; start ecs

To set this up for new instances, attach the following EC2 user data:

#!/bin/bash
cat /etc/ecs/ecs.config | grep -v 'ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION' > /tmp/ecs.config
echo "ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION=5m" >> /tmp/ecs.config
mv -f /tmp/ecs.config /etc/ecs/
stop ecs
start ecs

Delete images

By default, Docker caches images indefinitely. Cached images reduce the time needed to launch new tasks: if the image is already cached, the container can be started without pulling the image again. If you have a lot of images that are rarely used, as is common in CI or development environments, then cleaning them out is a good idea. Use the following commands to remove unused images:

List images:

docker images

Delete an image:

docker rmi IMAGE

This could be condensed and saved to a bash script:

docker images -q | xargs --no-run-if-empty docker rmi
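The --no-run-if-empty flag (a GNU xargs extension) matters here: without it, xargs runs docker rmi once even when docker images -q produces no output, and the bare invocation errors out. A quick local illustration, with echo standing in for docker rmi:

```shell
# With no input, --no-run-if-empty suppresses the command entirely...
printf '' | xargs --no-run-if-empty echo "would remove:"

# ...while with input, the IDs are batched onto a single invocation.
printf 'img1\nimg2\n' | xargs echo "would remove:"   # prints: would remove: img1 img2
```

Note also that docker rmi refuses to delete images used by running containers, so images still in service survive this cleanup.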

Set the script to be executable:

chmod +x /path/to/cleanupscript.sh

Execute the script daily via cron by creating a file called /etc/cron.d/dockerImageCleanup with the following contents:

00 00 * * * root /path/to/cleanupscript.sh


The techniques described in this post provide visibility into a critical component of running Docker – the disk space used on the cluster’s EC2 instances – along with ways to clean up unnecessary storage. If you have any questions or suggestions for other best practices, please comment below.

A media player for Scott

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/media-player-scott/

Projects don’t have to be hugely complicated to make a huge difference. In Luxembourg, Alain Wall has used a Raspberry Pi to make a very simple media player for his autistic son, Scott. It’s very easy to use, very robust, and easy to clean; and it offers Scott a limited (so not overwhelming) but meaningful degree of choice. Here’s Scott using his player. Watch to the end for the best smile in the world.

Dem Scott sain neien TV. Scott’s new TV

Here is Scott testing his new media player; he can start and stop his films by himself. A nearly indestructible media player, controllable with six buttons to choose a movie. Deutsch: http://awallelectronic.blogspot.lu/2016/04/scott-tv.html English: http://www.instructables.com/id/ScottTV-a-Simple-Media-Player-for-My-Austic-Son/ or https://hackaday.io/project/11000-scotttv-a-simple-mediaplayer-for-my-autistic-son

Alain hooked up six big piezo buttons and some speakers to a 20-in monitor and a Raspberry Pi – this isn’t the most complicated build you’ll see around these parts. (You can see a how-to guide over at Instructables.) But it is one of the most effective: as Alain says, “Scott loves it.”

Here’s another video from Alain demonstrating the setup.

Scott TV Simple MediaPlayer For My Autistic Son Scott

This is a simple media player for my autistic son. It had to be easy to use, nearly indestructible and easy to clean http://www.instructables.com/id/ScottTV-a-Simple-Media-Player-for-My-Austic-Son/ Deutsch: http://awallelectronic.blogspot.lu/2016/04/scott-tv.html

Thanks very much for sharing the project, Alain; all the very best from us at Pi Towers to you and the rest of the family, especially Scott!


The post A media player for Scott appeared first on Raspberry Pi.

Raspberry Pi telehealth kit piloted in NHS

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/raspberry-pi-telehealth-kit-piloted-nhs/

I had to spend a couple of nights in hospital last year – the first time I’d been on a hospital ward in about fifteen years. Things have moved on since my last visit: being me, the difference I really noticed was the huge number of computers, often on wheely trolley devices so they could be pushed around the ward, and often only used for one task. There was one at A&E when I came in, used to check NHS numbers and notes; another for paramedics to do a temperature check (this was at the height of the Ebola scare). When my blood was taken for some tests, another mobile computer was hooked up to the vials of blood and the testing hardware right next to my bed, feeding back results to a database; one controlled my drip, another monitored my oxygen levels, breathing, heart rate and so on, on the ward. PCs for logging and checking were everywhere. I’m sure the operating room was full of the things too, but I was a bit unconscious at that point, so had stopped counting. (I’m fine now, by the way. Thanks for worrying.)


The huge variety of specialised and generic compute in the hospital gave me something to think about other than myself (which was very, very welcome under the circumstances). Namely, how much all this was costing; and how you could use Raspberry Pis to take some of that cost out. Here’s a study from 2009 about some of the devices used on a ward. That’s a heck of a lot of machines. We know from long experience at Raspberry Pi that specialised embedded hardware is often very, very expensive; manufacturers can put a premium on devices used in specialised environments, and increasingly, people using those devices are swapping them out for something based on Raspberry Pi (about a third of our sales go into embedded compute in industry, for factory automation and similar purposes). And we know that the NHS is financially pressed.

This is a long-winded way of saying that we’re really, really pleased to see a Raspberry Pi being trialled in the NHS.

This is the MediPi. It’s a device for heart patients to use at home to measure health statistics, which means they don’t need daily visits from a medical professional. Telehealth devices like this are usually built on iPads using 3G and Bluetooth with specially commissioned custom software and custom peripherals, which is a really expensive way to do a few simple things.


MediPi is being trialled this year with heart failure patients in an NHS trust in the south of England. Richard Robinson, the developer, is a technical integration specialist at the Health and Social Care Information Centre (HSCIC) who has a particular interest in Raspberry Pi. He was shocked to find studies suggesting that devices like this were costing the NHS at least £2,000 a year per patient, making telehealth devices just too expensive for many NHS trusts to be able to use in any numbers. MediPi is much cheaper. The whole kit – that is, the Pi, the touchscreen, a blood pressure cuff, a finger oximeter and some diagnostic scales – comes in at £250 (the hope is that building devices like this in bulk will bring prices even lower). And it’s all built on open-source software.

MediPi issues on-screen instructions showing patients how to take and record their measurements. When they hit the “transmit” button MediPi compresses and encrypts the data, and sends it to their clinician. Doctors have asked to be able to send messages to patients using the device, and patients can reply to them. MediPi also includes a heart questionnaire which patients respond to daily using the touch screen.

Richard Robinson says:

We created a secure platform which can message using Spine messaging and also message using any securely enabled network. We have designed it to be patient-friendly, so it has a simple touch-tiled dashboard interface and various help screens, and it’s low cost.

Clinicians don’t want to be overwhelmed with enormous amounts of data so we have developed a concentrator that will take the data and allow clinicians certain views, such as alerts for ‘out of threshold’ values.

My aim for this is that we demonstrate that telehealth is affordable at scale.

We’re really excited about this trial, and we’ll be keeping an eye on how things pan out. We’d love to see more of this sort of cost-reducing innovation in the heath sector; the Raspberry Pi is stable enough and cheap enough to provide it.

The post Raspberry Pi telehealth kit piloted in NHS appeared first on Raspberry Pi.

New 8-megapixel camera board on sale at $25

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/new-8-megapixel-camera-board-sale-25/

The 5-megapixel visible-light camera board was our first official accessory back in 2013, and it remains one of your favourite add-ons. They’ve found their way into a bunch of fun projects, including telescopes, kites, science lessons and of course the Naturebytes camera trap. It was soon joined by the Pi NoIR infrared-sensitive version, which not only let you see in the dark, but also opened the door to hyperspectral imaging hacks.

As many of you know, the OmniVision OV5647 sensor used in both boards was end-of-lifed at the end of 2014. Our partners both bought up large stockpiles, but these are now almost completely depleted, so we needed to do something new. Fortunately, we’d already struck up conversation with Sony’s image sensor division, and so in the nick of time we’re able to announce the immediate availability of both visible-light and infrared cameras based on the Sony IMX219 8-megapixel sensor, at the same low price of $25. They’re available today from our partners RS Components and element14, and should make their way to your favourite reseller soon.

Visible light camera v2

The visible light camera…

...and its infrared cousin


In our testing, IMX219 has proven to be a fantastic choice. You can read all the gory details about IMX219 and the Exmor R back-illuminated sensor architecture on Sony’s website, but suffice to say this is more than just a resolution upgrade: it’s a leap forward in image quality, colour fidelity and low-light performance.

VideoCore IV includes a sophisticated image sensor pipeline (ISP). This converts “raw” Bayer-format RGB input images from the sensor into YUV-format output images, while correcting for sensor and module artefacts such as thermal and shot noise, defective pixels, lens shading and image distortion. Tuning the ISP to work with a particular sensor is a time-consuming, specialist activity: there are only a handful of people with the necessary skills, and we’re very lucky that Naush Patuck, formerly of Broadcom’s imaging team, volunteered to take this on for IMX219.

Naush says:

Regarding the tuning process, I guess you could say the bulk of the effort went into the lens shading and AWB tuning. Apart from the fixed shading correction, our auto lens shading algorithm takes care of module to module manufacturing variations. AWB is tricky because we must ensure correct results over a large section of the colour temperature curve; in the case of the IMX219, we used images illuminated by light sources from 1800K [very “cool” reddish light] all the way up to 16000K [very “hot” bluish light].

The goal of auto white balance (AWB) is to recover the “true” colours in a scene regardless of the colour temperature of the light illuminating it: filming a white object should result in white pixels in sunlight, or under LED, fluorescent or incandescent lights. You can see from these pairs of before and after images that Naush’s tune does a great job under very challenging conditions.

AWB with high colour temperature


AWB at lower colour temperature


As always, we’re indebted to a host of people for their help getting these products out of the door. Dave Stevenson and James Hughes (hope you and Elaine are having a great honeymoon, James!) wrote most of our camera platform code. Mike Stimson designed the board (his second Raspberry Pi product after Zero). Phil Holden, Shinichi Goseki, Qiang Li and many others at Sony went out of their way to help us get access to the information Naush needed to tune the ISP.

We’re really happy with the way the new camera board has turned out, and we can’t wait to see what you do with it. Head over to RS Components or element14 to pick one up today.

The post New 8-megapixel camera board on sale at $25 appeared first on Raspberry Pi.

Scratch performance – feel the speed!

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/scratch-performance-raspberry-pi/

The Scratch programming language, developed at MIT, has become the cornerstone of computing education at the primary level. Running the Scratch environment well was an early goal for Raspberry Pi. Since early 2013 we’ve been working with Tim Rowledge, Smalltalk hacker extraordinaire. Tim has been beavering away, improving the Scratch codebase and porting it to newer versions of the Squeak virtual machine. Ben Avison chipped in with ARM-optimised versions of Squeak’s graphics operations, and of course we did our bit by releasing two new generations of the Raspberry Pi hardware.

We thought you’d enjoy these two videos. The first shows Andrew Oliver’s Scratch implementation of Pacman running on an Intel Core i5 laptop with “standard” Scratch 1.4. (Yes, that Andrew Oliver. Thanks Andrew!) The second shows the same code running on a Raspberry Pi 3 with Tim’s optimised Scratch. The Raspberry Pi version is roughly twice as fast.

Pacman running on a Macbook i5 under MIT Scratch

A demonstration of how much slower standard Scratch can be than the optimised NuScratch that’s available for Raspberry Pi

PacMan running on Pi 3 under NuScratch


This is a great example of the sort of attention-to-detail work that we like to focus on, and that can make the difference between a mediocre user experience and the desktop-equivalent experience that we aspire to for Raspberry Pi 3. We think it’s as important to work as hard on improving and incrementing software as it is to do the same with the hardware it runs on. We’ve done similar work with Kodi and Epiphany, and you can expect a lot more of this from us over the next couple of years.

The post Scratch performance – feel the speed! appeared first on Raspberry Pi.

Raspberry Pi, Preserving Digital Heritage

Post Syndicated from Oliver Quinlan original https://www.raspberrypi.org/blog/pis-preserving-digital-heritage/

The Raspberry Pi computer was inspired by the machines of the 80s, which were used interchangeably for programming and gaming. In fact, many of you will remember typing in the pages of code from a magazine to make a game. Some people used them as a basis on which to build their own games, taking the early steps into what has become an important industry.


In the 1980s, Micro User magazine was an important part of the early computing education of a lot of people who now work at Raspberry Pi. Mike Cook, who now writes for our official magazine, The MagPi, was author of the monthly Body Building hardware feature.

Nowadays, computer games are a crucial part of our cultural history. We see this in the enthusiasm for retro games projects that people create with our computers.

A trip down 8-bit memory lane is a lot of fun, but there’s a serious side to the preservation of games too. The games and machines that inspired a generation of digital creatives are old and obsolete. There will soon come a time when they no longer work; a lot of work is done by organisations like the Centre for Computing History in Cambridge to preserve old hardware, but it’s an uphill battle against the moulds that find the medium inside floppy discs so attractive, the leakage of electrolytic capacitors, tin whiskers developing in solder, and a million and one other sorts of entropy. In the future, there could be no way to revisit this part of our culture in the same way we can with books and objects without the work of archivists and historians.

A tiny part of the Centre for Computing History's collection on display

A tiny part of the Centre for Computing History’s collection on display

The cultural side of games is clear in the way they represent real places. The Museum of London are exploring this with an exhibit of representations of London in games. The earliest example is the 1982 text-based adventure game Streets of London for the ZX Spectrum; more recent ones include Tomb Raider III and Broken Sword.

Streets of London

You can’t understand a game by looking at it in a museum case: it has to be experienced. The museum collection includes ZX Spectrum and Commodore 64 machines, but the curators found that these old computers were not robust enough for ‘hands-on’ exhibits. Long load times from cassettes, 30-year-old worn keyboards and obsolete monitor connections all hampered their efforts.

Step up the Raspberry Pi, and the resources for retro gaming provided by RetroPie and the many emulators it supports. This seems appropriate, given that the Pi is the inheritor of the DIY ethos of these early games machines. All the interactive exhibits are powered by Raspberry Pis, emulating Spectrums, Commodore 64s, and even a Windows 95 PC.

Commodore 64 emulator

What’s on-screen is only part of the experience, so the exhibits also have authentic input devices. Adventure game commands are typed (and mis-typed) into the squashy rubber-membrane keys of an adapted Spectrum keyboard. Platform antics are controlled with a C64-like joystick (instinctive flailing of the controller to make characters jump higher is optional). Even the original manuals are included, as referring to them was so often an important part of the experience.

Spectrum keyboard

As custodians of cultural history, it’s also important that the museum uses the right processes to preserve the games. They have acquired copies of games on the original cassettes and disks, and carefully transferred them to modern media. This is important for copyright, to ensure the authenticity of the code, and for the completeness of the collection.

It’s easy to forget that games are important historical artefacts. They tell us about past experiences, and the way they represent places and events is a part of our cultural history. Although digital artefacts are quickly obsolete, people are going to great lengths to develop ways of preserving them for generations to come.

Seeing representations of London in video games alongside the art, objects and literature in the collection at the Museum of London shows just how much a part of life digital objects are now. It also shows how the history of the early video games era is being passed on through the Raspberry Pi. It’s not just inspiring a new generation of digital creatives. It’s also helping us all to remember and understand our digital heritage.

London in Video Games is on display at The Museum of London until the end of April, and the museum plans to continue to explore digital preservation and games emulation. We know there are lots of people in our community with expertise in emulation and archiving of retro games: let us know in the comments if you might be able to lend your expertise to projects like this.

The post Raspberry Pi, Preserving Digital Heritage appeared first on Raspberry Pi.

Astro Pi: the animated adventures of Izzy and Ed

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/astro-pi-animated-adventures-izzy-ed/

Right now, two Raspberry Pi computers are orbiting Earth on board the International Space Station.

Our intrepid Astro Pi units Izzy and Ed launched in December and were deployed by British ESA astronaut Tim Peake in February. We’ve seen the first part of their animated adventures; now we bring you the second part of their story, featuring some very special guests.


We’re especially excited that our Astro Pis have met Robonaut, NASA’s humanoid robot, as well as human crew members from ESA, NASA and Roscosmos.

After Ed and Izzy finished running apps and experiments coded by UK school students, they entered a flight recorder mode where they saved sensor readings to a database every ten seconds. They each recorded their orientation and acceleration, as well as temperature, humidity and pressure, over a period of about two weeks. We’ve now made the data they recorded on the ISS available for everyone to download, so you can analyse it any way you like, and we’ve also prepared a Flight Data Analysis resource to help you interpret and handle the data. We’re really looking forward to seeing how you use these data to analyse and interpret the movement of the space station and the environment on board.

Both Astro Pi units have been tweeting about some of their activities, including some great Earth observation images from Izzy, and they’re also talking about opportunities to get involved with their mission. Follow Ed and Izzy on Twitter to see what they’re up to!

The post Astro Pi: the animated adventures of Izzy and Ed appeared first on Raspberry Pi.

New – Managed Platform Updates for AWS Elastic Beanstalk

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-managed-platform-updates-for-aws-elastic-beanstalk/

AWS Elastic Beanstalk simplifies the process of deploying and running web applications and web services. You simply upload your code and Elastic Beanstalk will take care of the details. This includes provisioning capacity, setting up load balancing and auto scaling, and arranging for application health monitoring. You can build Elastic Beanstalk applications using a variety of platforms and languages including Java, PHP, Ruby, Node.js, Python, .NET, Go, and Docker.

Elastic Beanstalk regularly releases new versions of supported platforms with operating system, web & app server, and language & framework updates. Until now, you needed to initiate a manual update (via the Elastic Beanstalk Console, command line interface, or API) to update your Elastic Beanstalk environments to the new version of the platform or language.  This gave you full control over the timing of updates, but left you with one more thing to remember and to manage.

Managed Platform Updates
Today we are making Elastic Beanstalk even more powerful by adding support for managed platform updates. You simply select a weekly maintenance window and Elastic Beanstalk will update your environment to the latest platform version automatically.

The updates are installed using an immutable  deployment model to ensure that no changes are made to the existing environment until the updated replacement instances are available and deemed healthy (according to the health check that you have configured for the application). If issues are detected during the update, traffic will continue to be routed to the existing instances. The immutable deployment model also ensures that your application will remain available during updates in order to minimize disruption to your users.

You can choose to install minor updates and patches automatically, and you can also trigger updates outside of the maintenance window. Because major updates typically require careful testing before being deployed, they will not take place automatically and must be triggered manually.

You can configure managed updates from the Elastic Beanstalk Console. First, enable them in the Configuration tab:

And then manage them in the Managed Updates tab:
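The same settings can also be applied without the console, through Elastic Beanstalk's option-settings interface. The sketch below is an assumption-laden illustration, not an official recipe: the environment name and maintenance window are placeholders, and the managed-actions option namespaces shown reflect my understanding of the feature, so verify them against the Elastic Beanstalk documentation before relying on them. The command is echoed rather than executed; drop the echo to apply it.

```shell
# Placeholder environment name and weekly maintenance window (day:hour, UTC).
ENV_NAME="my-env"
WINDOW="Tue:09:00"

# Build the update-environment call; echoed here as a dry run.
CMD="aws elasticbeanstalk update-environment \
  --environment-name ${ENV_NAME} \
  --option-settings \
  Namespace=aws:elasticbeanstalk:managedactions,OptionName=ManagedActionsEnabled,Value=true \
  Namespace=aws:elasticbeanstalk:managedactions,OptionName=PreferredStartTime,Value=${WINDOW} \
  Namespace=aws:elasticbeanstalk:managedactions:platformupdate,OptionName=UpdateLevel,Value=minor"
echo "$CMD"
```

Setting UpdateLevel to patch instead of minor restricts automatic updates to patch releases only, matching the minor-vs-patch distinction described above.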

Available Now
This new feature is available now and you can start using it today. There’s no charge for the feature, but you will pay for any additional EC2 instances that are used to ensure a seamless update.




European Maker Week

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/european-maker-week/

A large part of the Raspberry Pi community identify as makers. We all love to make things – from robots to yarn to pottery to art – and share our creations with others. European Maker Week is a celebration of this rapidly growing community, and it takes place between 30 May and 5 June in 28 countries.

European Maker Week banner: "a celebration of makers and innovators all over Europe"

EMW is an initiative promoted by the European Commission and implemented by Maker Faire Rome in collaboration with Startup Europe. Over 80 events are scheduled for the week, so there’s plenty to get involved with. And if you’re running a Raspberry Jam that week, you can submit it to the EMW website to be included on the map.

Map showing European Maker Week events in countries across Europe

European Maker Week events

This weekend, Maker Faire UK takes place in Newcastle. Maker Faire Rome, the largest in Europe, takes place in October, and their call for makers opens on 26 April – it’s a great opportunity to show off your latest Raspberry Pi project, or to attend and observe the great hacks on display in the city of Rome. This year a prize of €100,000 is available for the best maker project with the highest social impact.

Banners at the entrance to Maker Faire Rome: "16-18 Ottobre 2015" and "Scopri. Inventa. Crea."


Maker Faire Rome

There are many ways of connecting with the wider maker community. We strongly encourage you to check out a Maker Faire if you get the chance, and if you’re near a hackspace, a maker space, a fab lab or a repair café, you’ll find people there who are happy to share skills and tools. And, of course, there are Raspberry Jams around the world for you to get involved with too, such as Raspberry Jam Berlin, Pi and More in Trier, and Rhône Raspberry Jam. A jam doesn’t have to be a huge event; it can be a small gathering – why not think about setting one up? Head over to our Jam page to find out how to get started!

The post European Maker Week appeared first on Raspberry Pi.

Weather, security and temperature cam

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/weather-security-temperature-cam/

We see a lot of Raspberry Pis being used as security cameras – check out this fine example that we blogged back in 2013 – they’re a cheap and effective solution for people who want to deter burglars and vandals.


This very serious-looking fake camera housing is only £5.49 on Amazon – click the image to buy, and then stick a camera board inside.

The good folks at Adafruit had one of those ideas that makes you slap yourself in the forehead for not coming up with it yourself. They’ve made a camera system which can upload images to the cloud, so you can check on it from wherever you are – but it also uploads other sensor data of your choosing (in this example, temperature) and graphs it using matplotlib. A sort of proto-Nest, if you will.

camera_monitor_picam_and_temp_on_pitft v1

We’re using Adafruit’s adafruit.io here: it’s their new Internet of Things API. It’s still in Beta, but pretty solid; we’d be interested to hear how you get on with it.

You can find an exhaustive how-to here. Jeremy Blythe from Adafruit says:

This project uses two Raspberry Pis – a sender and a receiver. The sender has a Raspberry Pi Camera and an MCP9808 temperature sensor to publish data to adafruit.io. The receiver, a dashboard somewhere else in the world, subscribes to this data feed and displays it.

This dashboard Raspberry Pi has a PiTFT and displays the image whenever it’s sent to the feed (every 5 minutes); the current temperature is overlaid on the image using pygame. The final cherry on the cake here is that if you tap the screen, you flip to the graph view. This takes the data from the feed using the io-client-python data method, pulls out the last 24 hours, and uses matplotlib to draw a graph of temp/time. Of course, you can see the feeds in the adafruit.io online dashboard too!

There’s a lot you can do in terms of feature-creep here; we’re thinking about what other sensors you could usefully add, and what else you might be able to do with a big dataset of images. Go wild – and tell us if you make one yourselves!


The post Weather, security and temperature cam appeared first on Raspberry Pi.

Edinburgh Mini Maker Faire

Post Syndicated from Laura Clay original https://www.raspberrypi.org/blog/edinburgh-mini-maker-faire/

Not all the tech fun in the UK happens down near Pi Towers in Cambridge. Here in Scotland, the Mini Maker Faire has been the Edinburgh International Science Festival’s grand finale for four years now. This year’s was the biggest yet, so I headed over to see what was going on. There were plenty of projects using Raspberry Pis, loads of new maker spaces and Jams, and even a mildly terrifying giant robot stalking around the courtyard. I’m sure someone did a headcount of the children at the end, don’t worry.


The first person to spot my neon Pi T-shirt was Tony from Newcastle MakerSpace, promoting the Maker Faire coming up on 23 April, which is expected to attract over 10,000 attendees. His mini Pi-powered Pacman arcade cabinet drew a sizeable queue, and his dinky Pi Zero game controllers looked like the ultimate in portable gaming: just plug into a TV and play!

MakLabs are also springing up across Scotland, with the largest meeting in Glasgow. Their showpiece was a Bigtrak-style toy tank with a webcam, controlled via a REST interface with a Pi acting as a server. While the internet was somewhat patchy in a hundred-year-old former veterinary school, it was still an impressive build.

Aberdeen boasts the 57North hacklab. It was hard to miss their amateur radio station tracker, with a PDP-8 minicomputer for added flashing light goodness. The hulking unit consisted of a Pi, two screens and the open-source XASTIR tracking software, showing the various stations.


The newest Makerspace on the block is in Dundee, Scotland’s gaming capital, so it seemed fitting that a tiny minimalist Pi Zero platform game, using a Pimoroni pHAT, took pride of place. They’re running weekly meetups and hope to set up a Jam in the near future.

Finally, we spotted Robotical, a PhD project now seeking crowdfunding for its adorable walking robots. We watched a tense football match between two bots, each controlled by a Model B Pi in its back, with micro:bit remote controls to move them. (The red robot won, incidentally. My gaming reflexes aren’t what they used to be.)


It was great to see what the community up here is doing with their Pis, and I’m looking forward to the Edinburgh Raspberry Jam on 30 April, where there will no doubt be even more brilliant projects being demonstrated.

The post Edinburgh Mini Maker Faire appeared first on Raspberry Pi.

Game Boy Zero

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/game-boy-zero/

We see a lot of Pi Zero retro gaming mods, but I think this one might just take the biscuit.


This rather beautiful mod from Wermy (leave your real name in the comments if you’d like us to use it, Wermy!) has a few details that really make it stand out. Pi Zero in a controller or hand-held device isn’t new: we’ve seen it before. But this one’s got a couple of special features. First up, there’s this glorious cartridge hack:


What you’re seeing here is a customised Game Boy cartridge which has been re-soldered and gently Dremeled to house a micro SD adapter, which will accept any micro SD card you pop in there, and enable the Pi Zero inside the Game Boy itself to read from it. (Wermy’s running Emulation Station on the Game Boy Zero.)

People with sharp eyes will have noticed that the Game Boy Zero has one big cosmetic difference (aside from that display) from the original Game Boy. It has two extra buttons, so you can play SNES, NES, and later Game Boy model games on there. There are also a couple of shoulder triggers. (The buttons Wermy has used are from a SNES, and he says they’re very similar in look and feel to what you’ll find on the original Game Boy.)

The screen’s a small composite display from Adafruit, which was a little larger than the original display and required some careful removal of struts inside the case. Wermy’s added three buttons inside the case to control brightness, colour and contrast, along with a USB Bluetooth adaptor – it’s a tight fit to get everything inside the case, but he’s done a stand-up job.

final layout

Here it is in action.

Game Boy Zero with custom SD card reader game cartridge

I made a RetroPie handheld using a Raspberry Pi Zero and an original DMG-01 Game Boy. UPDATE: I’ve set up a blog at http://www.sudomod.com where I’ll be posting how-to guides for this project. You can also enter there for a chance to win the one I’ll be building!

Wermy’s documenting the build here (and running a giveaway so you can win one of these gorgeous little things): head over to read more!

Oh – and to preempt Pi Zero stock woe in the comments, we’ve got some news from Eben:

Raspberry Pi Zero production is restarting in Wales next Monday after a hiatus to allow us to focus on Raspberry Pi 3 (a million units built and counting :D). We have placed 250ku of new orders, and are aiming to produce at least 50ku/month for the rest of this year. Distribution will continue to be via Pimoroni, Pi Hut, Adafruit and Micro Center for now.

To thank you for your patience, we’ve taken advantage of the hiatus to add a (much requested) new feature. I’ll leave you all to guess what it is (it’s not WiFi).

We expect the new Raspberry Pi Zero units (with the new feature) to be available in two to three weeks’ time. They’ll be stocked exclusively in the usual Pi Zero stores: The Pi Hut, Adafruit, Pimoroni and Micro Center.

The post Game Boy Zero appeared first on Raspberry Pi.

Building Enterprise Level Web Applications on AWS Lambda with the DEEP Framework

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/building-enterprise-level-web-applications-on-aws-lambda-with-deep/

This is a guest post by Eugene Istrati, the co-creator of the DEEP Framework, a full-stack web framework that enables developers to build cloud-native applications using microservices architecture.


From the beginning, Mitoc Group has been building web applications for enterprise customers. We are a small group of developers who are helping customers with their entire web development process, from conception through execution and down to maintenance. Being in the business of doing everything is very hard, and it would be impossible without using AWS foundational services, but we incrementally needed more. That is why we became early adopters of the serverless computing approach and developed an ecosystem called Digital Enterprise End-to-end Platform (DEEP) with AWS Lambda at the core.

In this post, we dive deeper into how DEEP is using AWS Lambda to empower developers to build cloud-native applications or platforms using microservices architecture. We will walk you through the process of identifying the front-end, back-end and data tiers required to build web applications with AWS Lambda at the core. We will focus on the structure of the AWS Lambda functions we use, as well as security, performance and benchmarking steps that we take to build enterprise-level web applications.

Enterprise-level web applications

Our approach to web development is full-stack and user-driven, focused on UI (the user interface) and UX (the user experience). Before going into the details, we’d like to emphasize the strategic (biased and opinionated) decisions we made early on:

  • We don’t say “no” to customers; every problem is seriously evaluated and sometimes we offer options that involve our direct competitors.
  • We are developers and we focus only on the application level; everything else (platform level and infrastructure level) must be managed by AWS.
  • We focus 20% of our effort on solving 80% of the workload; everything must be automated and pushed to the service side rather than the client side.

To be honest and fair, it doesn’t work all the time as expected, but it does help us to learn fast and move quickly, sustainably and incrementally solving business problems through technical solutions that really matter. However, the definition of “really matter” differs from customer to customer, quite uniquely in some cases.

Nevertheless, what we have learned from our customers is that enterprise-level web applications must provide the following common expectations:


This post describes how we transformed a self-managed task management application (aka todo app) in minutes. The original version can be seen on www.todomvc.com and the original code can be downloaded from https://github.com/tastejs/todomvc/tree/master/examples/angularjs.

The architecture of every web application we build or transform, including the one described above, is similar to the reference architecture of the realtime voting application published recently by AWS on GitHub.

The todo app is written in AngularJS and deployed on Amazon S3, behind Amazon CloudFront (front-end). Task management is processed by AWS Lambda, optionally behind Amazon API Gateway (back-end). Task metadata is stored in Amazon DynamoDB (data tier). The transformed todo app, along with instructions on how to install and deploy this web application, is described in the Building Scalable Web Apps with AWS Lambda and Home-Grown Serverless blog post and the todo code is available on GitHub.

Let’s look at AWS Lambda functions and the value proposition they offer to us and our customers.

AWS Lambda functions

The goal of the todo app is to manage tasks in a self-service mode. End users can view tasks, create new tasks, mark or unmark a task as done, and clear completed tasks. From the UI point of view, that leads to four user interactions that require different back-end calls:

  • web service that retrieves tasks
  • web service that creates tasks
  • web service that deletes tasks
  • web service that updates tasks

A simple reordering of the back-end service calls identified above leads to basic CRUD (create, retrieve, update, delete) operations on the Task data object. These are the simple logical steps that we take to identify the front-end, back-end, and data tiers of (drums beating, trumpets playing) our approach to microservices, which we prefer to call microapplications.
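As a rough sketch of that mapping (illustrative only – these are not DEEP Framework APIs, and an in-memory Map stands in for DynamoDB), the four back-end services reduce to CRUD operations on a Task object:

```javascript
// Hypothetical sketch: the four todo interactions as CRUD on Task.
const taskStore = new Map();
let nextId = 1;

const taskService = {
  // web service that creates tasks
  create(data) {
    const id = String(nextId++);
    taskStore.set(id, Object.assign({ Id: id }, data));
    return taskStore.get(id);
  },
  // web service that retrieves tasks: one by id, or all when no id given
  retrieve(id) {
    if (id) return taskStore.get(id) || null;
    return Array.from(taskStore.values());
  },
  // web service that updates tasks
  update(id, data) {
    const task = taskStore.get(id);
    if (!task) return null;
    Object.assign(task, data);
    return task;
  },
  // web service that deletes tasks
  remove(id) {
    taskStore.delete(id);
    return {};
  }
};
```

The point of the exercise is that each operation is small enough to live in its own Lambda function, which is exactly how the handlers below are structured.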

Therefore, coming back to AWS Lambda, we have written four small Node.js functions that are context-bounded and self-sustained (each microservice corresponds to the above identified back-end web service):

Microservice that retrieves tasks
'use strict';

import DeepFramework from 'deep-framework';

export default class Handler extends DeepFramework.Core.AWS.Lambda.Runtime {
  /**
   * @param {Array} args
   */
  constructor(...args) {
    super(...args);
  }

  /**
   * @param request
   */
  handle(request) {
    let taskId = request.getParam('Id');

    if (taskId) {
      this.retrieveTask(taskId, (task) => {
        return this.createResponse(task).send();
      });
    } else {
      this.retrieveAllTasks((result) => {
        return this.createResponse(result).send();
      });
    }
  }

  /**
   * @param {Function} callback
   */
  retrieveAllTasks(callback) {
    let TaskModel = this.kernel.get('db').get('Task');

    TaskModel.findAll((err, task) => {
      if (err) {
        throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
      }

      return callback(task.Items);
    });
  }

  /**
   * @param {String} taskId
   * @param {Function} callback
   */
  retrieveTask(taskId, callback) {
    let TaskModel = this.kernel.get('db').get('Task');

    TaskModel.findOneById(taskId, (err, task) => {
      if (err) {
        throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
      }

      return callback(task ? task.get() : null);
    });
  }
}
Microservice that creates a task
'use strict';

import DeepFramework from 'deep-framework';

export default class extends DeepFramework.Core.AWS.Lambda.Runtime {
  /**
   * @param {Array} args
   */
  constructor(...args) {
    super(...args);
  }

  /**
   * @param request
   */
  handle(request) {
    let TaskModel = this.kernel.get('db').get('Task');

    TaskModel.createItem(request.data, (err, task) => {
      if (err) {
        throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
      }

      return this.createResponse(task.get()).send();
    });
  }
}
Microservice that updates a task
'use strict';

import DeepFramework from 'deep-framework';

export default class Handler extends DeepFramework.Core.AWS.Lambda.Runtime {
  /**
   * @param {Array} args
   */
  constructor(...args) {
    super(...args);
  }

  /**
   * @param request
   */
  handle(request) {
    let taskId = request.getParam('Id');

    if (typeof taskId !== 'string') {
      throw new DeepFramework.Core.Exception.InvalidArgumentException(taskId, 'string');
    }

    let TaskModel = this.kernel.get('db').get('Task');

    TaskModel.updateItem(taskId, request.data, (err, task) => {
      if (err) {
        throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
      }

      return this.createResponse(task.get()).send();
    });
  }
}
Microservice that deletes a task
'use strict';

import DeepFramework from 'deep-framework';

export default class extends DeepFramework.Core.AWS.Lambda.Runtime {
  /**
   * @param {Array} args
   */
  constructor(...args) {
    super(...args);
  }

  /**
   * @param request
   */
  handle(request) {
    let taskId = request.getParam('Id');

    if (typeof taskId !== 'string') {
      throw new DeepFramework.Core.Exception.InvalidArgumentException(taskId, 'string');
    }

    let TaskModel = this.kernel.get('db').get('Task');

    TaskModel.deleteById(taskId, (err) => {
      if (err) {
        throw new DeepFramework.Core.Exception.DatabaseOperationException(err);
      }

      return this.createResponse({}).send();
    });
  }
}

Each of the above files, together with its related dependencies, is compressed into a .zip file and uploaded to AWS Lambda. If you’re new to this process, we strongly recommend following the How to Create, Upload and Invoke an AWS Lambda Function tutorial.

Back to the four small Node.js functions, you can see that we have adopted ES6 (aka ES2015) as our coding standard. And we are importing deep-framework in every function. What is this framework anyway and why are we using it everywhere?

Full-stack web framework

Step back for a minute. Building and uploading AWS Lambda functions to the service is very simple and straightforward, but now imagine that you need to manage 100–150 web services to access a web page, multiplied by hundreds or thousands of web pages.

We believe that the only way to achieve this kind of flexibility and scale is automation and code reuse. These principles led us to build and open source DEEP Framework — a full-stack web framework that abstracts web services and web applications from specific cloud services — and DEEP CLI (aka deepify) — a development tool-chain that abstracts package management and associated development operations.

Therefore, to make sure that the process of managing AWS Lambda functions is streamlined and automated, we consistently include two more files in each uploaded .zip:

DEEP microservice bootstrap
'use strict';

import DeepFramework from 'deep-framework';
import Handler from './Handler';

export default DeepFramework.LambdaHandler(Handler);
DEEP microservice package metadata (for npm) 
{
  "name": "deep-todo-task-create",
  "version": "0.0.1",
  "description": "Create a new todo task",
  "scripts": {
    "postinstall": "npm run compile",
    "compile": "deepify compile-es6 `pwd`"
  },
  "dependencies": {
    "deep-framework": "^1.8.x"
  },
  "preferGlobal": false,
  "private": true,
  "analyze": true
}

Having these three files (Handler.es6, bootstrap.es6, and package.json) in each Lambda function doesn’t mean that your final .zip file will be that small. Actually, a lot of additional operations happen before the .zip file is created. To name a few:

  • AWS Lambda performs better when the uploaded codebase is smaller. Because we provide both local development capabilities and one-step push to production, our process optimizes resources before deploying to AWS.
  • ES6 is not supported by the Node.js v0.10.x runtime that we use in AWS Lambda (it is, however, available in the Node.js 4.3 runtime), so we compile .es6 files into ES5-compliant .js files using Babel.
  • Dependencies that are defined in package.json are automatically pulled and fine-tuned for Node.js v0.10.x to provide the best performance possible.
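As a rough illustration of that compile step (hand-translated for clarity – real Babel output differs in detail and depends on the version and options), the ES6 constructs used above end up as ES5 that node v0.10.x can execute:

```javascript
// ES6 input:   let double = (x) => x * 2;
// ES5 output:  a plain function expression (arrow functions desugared).
var double = function (x) { return x * 2; };

// ES6 input:   class Handler { handle(request) { return request.Id; } }
// ES5 output:  a constructor function with prototype methods
//              (class syntax desugared).
function Handler() {}
Handler.prototype.handle = function (request) {
  return request.Id;
};
```

This is why the compiled bundle, not the .es6 sources, is what actually lands in the uploaded .zip.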

Putting everything together

First, you need the following pre-requisites:

  1. AWS account (Create an Amazon Web Services Account)
  2. AWS CLI (Configure AWS Command Line Interface)
  3. Git v2+ (Get Started — Installing Git)
  4. Java / JRE v6+ (JDK 8 and JRE 8 Installation Start Here)
  5. Node.js v4+ (Install nvm and use latest Node v4)

Note: Don’t use sudo to install nvm. Otherwise, you’ll have to fix npm permissions.

Second, install the DEEP CLI with the following command:

npm install deepify -g

Next, deploy the todo app using deepify:

deepify install github://MitocGroup/deep-microservices-todo-app ~/deep-todo-app

deepify server ~/deep-todo-app

deepify deploy ~/deep-todo-app

Note: When the deepify server command is finished, you can open http://localhost:8000 in your browser and enjoy the todo app running locally.

Cleaning up

There are at least half a dozen services and several dozen resources created during deepify deploy. If only there were a simple command that could clean everything up when we’re done. We thought of that and created deepify undeploy to address this need. When you are done using the todo app and want to remove its related resources, execute the following:

deepify undeploy ~/deep-todo-app

As you can see, we empower developers to build hassle-free, cloud-native applications or platforms using microservices architecture and serverless computing.

And what about security?


One of the biggest value propositions on AWS is out-of-the-box security and compliance. The beauty of the cloud-native approach is that security comes by design (in other words, it won’t work otherwise). We take full advantage of that shared responsibility model and enforce security in every layer.

End users benefit from IAM best practices through streamlined implementations of least privilege access, delegated roles instead of credentials, and integration with logging and monitoring services (e.g., AWS CloudTrail, Amazon CloudWatch, and Amazon Elasticsearch Service + Kibana). For example, developers and end users of the todo app didn’t need to explicitly define any security roles (it was done by deepify deploy), but they can rest assured that only their instance of todo app will be using their infrastructure, platform, and application resources.

The following are two security roles (back-end and front-end) that have been seamlessly generated and enforced in each layer:

IAM role that allows back-end invocation of AWS Lambda function (e.g. DeepProdTodoCreate1234abcd) in web application AWS account (e.g. 123456789000)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["lambda:InvokeFunction"],
            "Resource": ["arn:aws:lambda:us-east-1:123456789000:function:DeepProdTodoCreate1234abcd*"]
        }
    ]
}
DEEP role that allows front-end resource (e.g deep.todo:task) to execute action (e.g. deep.todo:task:create)
{
  "Version": "2015-10-07",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["deep.todo:task:create"],
      "Resource": ["deep.todo:task"]
    }
  ]
}


We have been continuously benchmarking AWS Lambda for various use cases in our microapplications. After repeating similar analyses a few times, we decided to build the benchmarking itself as another microapplication and reuse the ecosystem to include it automatically wherever we needed it. You can find the open-source code for the benchmarking microapplication on GitHub:

In particular, for the todo app, we performed various benchmarking analyses on AWS Lambda by tweaking different components of a specific function (e.g. function size, memory size, and billable cost). We would like to share the results with you:

Benchmarking for todo app

Table columns: Req No | Function Size (MB) | Memory Size (MB) | Max Memory Used (MB) | Start time | Stop time | Front-end Call (ms) | Back-end Call (ms) | Billed Time (ms) | Billed Time ($)


Speaking of performance, we find AWS Lambda mature enough to power large-scale web applications. The key is to keep each function as small as possible, following a simple rule: one function achieves only one task. Over time, these functions might grow in size; therefore, we always keep an eye on them and refactor or split them back down to the smallest logical task.

Using the benchmarking tool, we ran multiple scenarios on the same function from the todo app:

Table columns: Function Size (MB) | Memory Size (MB) | Max Memory Used (MB) | Avg Front-end (ms) | Avg Back-end (ms) | Total Calls (#) | Total Billed (ms) | Total Billed ($/1B)*

Based on performance data, we have learned some pretty cool stuff:

  • The smaller the function is, the better it performs; on the other hand, if more memory is allocated, the size of the function matters less and less.
  • Memory size is not directly proportional to billable costs; developers can decide the memory size based on performance requirements combined with associated costs.
  • The key to better performance is continuous load, thanks to container reuse in AWS Lambda.


In this post, we presented a small web application that is built with AWS Lambda at the core. We walked you through the process of identifying the front-end, back-end, and data tiers required to build the todo app. You can fork the example code repository as a starting point for your own web applications.

If you have questions or suggestions, please leave a comment below.


Aquaponics

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/aquaponics/

So then. Aquaponics. I’d assumed it was something to do with growing underwater plants. Dead wrong.

My educative moment occurred at Disneyworld’s Epcot a couple of years ago. There’s a ride called The Land where, after enduring a selection of creaking dioramas illustrating different US habitats, you’re taken on a little motorised punt thing on a watery track through greenhouses groaning under the weight of four-kilogramme mega-lemons, arboreal tomatoes and Mickey-shaped pumpkins.

Giant lemon, from Arild Finne Nybø on Flickr.

Giant lemon, from Arild Finne Nybø on Flickr.

At the end of the…river thing…, you’ll find a section on aquaponics. An aquaponics system creates an incredibly efficient symbiotic environment for raising food. Aquatic food (prawns, fish and the like) is raised in water. Waste products from those creatures, which in an aquatic-only environment would degrade the quality of the water, are diverted into a hydroponic system, where nitrogen-fixing bacteria turn them into nitrates and nitrites, which are used to feed edible plants. The water can then be recirculated into the fish tank.

Finesse is required. You need to be able to monitor and control temperature, drainage and pumping. Professional systems are expensive, so the enterprising aquaponics practitioner will want to build their own. Enter the Raspberry Pi. And a shipping container, a shed and some valves.

Raspberry Pi Controlled IBC-based Aquaponics

Raspberry Pi Controlled IBC-based Aquaponics. Details and scripts available at http://www.instructables.com/id/Raspberry-Pi-Controlled-Aquaponics/

MatthewH415, the maker, has documented the whole build at Instructables. He says:

This build uses the IBC method of aquaponics, with modifications to include a Raspberry Pi for controlling a pump, solenoid drain, and temperature probes for water and air temperatures. The relays and timing are controlled with Python scripting. Temperature and control data are collected every minute and sent to plot.ly for graphing; future expansion will include sensors for water level and pH values for additional control.

All of my scripts are available at github.com, feel free to use them for your aquaponics setup. Thanks to Chris @ plot.ly for the help with streaming data to their service, and to the amazingly detailed build instructions provided at IBCofAquaponics.com.

We love it. Thanks Matthew; come the apocalypse, we at Pi Towers are happy in the safe and secure knowledge that we’ll at least have tilapia and cucumbers.

The post Aquaponics appeared first on Raspberry Pi.

Astro Pi: Coding Challenges Results!

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/astro-pi-coding-challenges-results/


Back in early February, we announced a new opportunity for young programmers to send their code up to the International Space Station to be used by British ESA astronaut Tim Peake.

Two challenges were on offer. The first required you to write Python Sense HAT code to turn Ed and Izzy (the Astro Pi computers) into an MP3 player, so that Tim can plug in his headphones and listen to music. The second required you to code Sonic Pi music for Tim to listen to via the MP3 player.

The competition closed on March 31st and the judging took place at Pi Towers in Cambridge last week. With the assistance of Flat Tim!

The judges were selected from companies who have contributed to the Astro Pi mission so far. These were:


Orchestral Manoeuvres In the Dark (Andy McCluskey and Paul Humphreys)

We also wanted to have some judges to provide musical talent to balance the science and technology expertise from the aerospace people. Thanks to Carl Walker at ESA we were able to connect with synthpop giants OMD (Enola Gay, Electricity, Maid of Orleans) and British/French film composer Ilan Eshkeri (Stardust, Layer Cake, Shaun the Sheep).


Ilan Eshkeri working on the Stardust soundtrack

We also secured Sam Aaron, the author of Sonic Pi and Overtone, a live coder who regularly performs in clubs across the UK.


Sam Aaron at TEDx Newcastle

Entries were received from all over the UK and were judged across four age categories: 11 and under, 11 to 13, 14 to 16, and 17 to 18. So the outcome is that four MP3 players and four songs will be going up to the ISS for Tim to use. Note that the Sonic Pi tunes will be converted to MP3 so that the MP3 player programs can load and play the audio for Tim.

The judging took two days to complete: one full day for the MP3 players and one day for the Sonic Pi tunes. So without further ado, let’s see who the winners are!

MP3 Player Winners

11 and under

11 to 13

14 to 16

  • Winner: Joe Speers
  • School: n/a (Independent entry)
  • Teacher/Adult: Craig Speers
  • Code on Github

17 to 18

Sonic Pi Winners

11 and under

11 to 13

  • Winner: Isaac Ingram
  • School: Knox Academy
  • Teacher/Adult: Karl Ingram

14 to 16

17 to 18

Congratulations to you all. The judges had a lot of fun with your entries and they will very soon be uploaded to the International Space Station for Tim Peake. The Astro Pi Twitter account will post a tweet to indicate when Tim is listening to the music.

The Raspberry Pi Foundation would like to thank all the judges who contributed to this competition, and especially our special judges: Andy McCluskey and Paul Humphreys from OMD, Ilan Eshkeri and Sam Aaron.

The post Astro Pi: Coding Challenges Results! appeared first on Raspberry Pi.

Cat exercise wheel

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/cat-exercise-wheel/

This is not a hamster.

(I could stare at that all day.)

Cat owners among you with hard floor coverings will recognise the eldritch skittering of tiny paws at the witching hour, when all cats believe they have become rally cars. The owner of Jasper and Ruben (who, when researching this post, I thought was called Jasper Ruben; he remains anonymous for now — please leave a comment with your name if you’d like to!) has mechanised the problem. With a Raspberry Pi, natch.


This is the web interface for Jasper and Ruben’s wheel. Cat-propelled, and Raspberry Pi-monitored, it logs distance travelled, average speed, duration of feline whirring, and all that good stuff, and displays the statistics in real time.

Here’s the back, where the clever happens. (And the top of Ruben’s head.)


The Pi’s GPIO is hooked up to a coil sensor behind the wheel, which is housed in an old DSL splitter box, held as close as possible to the wheel without actually touching it. A coil sensor detects magnetic fields, so the wheel itself has some modifications to make it detectable and measurable: six small ferrous nails hidden in the lining.


The Pi drives a camera board and interprets the feedback from the sensor, so it can display live statistics as the cat runs. It also enables the user to record any particularly nifty bits of cat-sprinting.
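As a back-of-the-envelope sketch of the maths involved (the six nails per revolution come from the build; the wheel diameter here is an assumed figure, not from the build notes), pulse counts from the coil sensor translate into distance and average speed like this:

```javascript
// Sketch: converting coil-sensor pulses into wheel statistics.
// PULSES_PER_REV matches the six nails in the wheel lining; the
// diameter is an assumption for illustration only.
const PULSES_PER_REV = 6;
const WHEEL_DIAMETER_M = 1.2;
const CIRCUMFERENCE_M = Math.PI * WHEEL_DIAMETER_M;

function wheelStats(pulseCount, elapsedSeconds) {
  const revolutions = pulseCount / PULSES_PER_REV;
  const distanceM = revolutions * CIRCUMFERENCE_M;
  const avgSpeedMs = elapsedSeconds > 0 ? distanceM / elapsedSeconds : 0;
  return { revolutions, distanceM, avgSpeedMs };
}
```

Twelve pulses in ten seconds, for instance, is two revolutions of the wheel, and the running totals behind the live dashboard fall straight out of sums like these.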

Being human, you want to see more video of the setup in action. Here’s Jasper, being taunted by a laser dot, with real-time stats at the top of the video.

And here’s proof that the cats will use the wheel spontaneously:

You can see a comprehensive photo how-to on Imgur; Jasper and Ruben’s owner is also answering questions about the build over on Reddit.

We want to see someone modify this to use the wheel’s rotation to charge a battery. What would you use it to power? (I’m thinking kibble dispenser…)

The post Cat exercise wheel appeared first on Raspberry Pi.

Picademy: New dates announced!

Post Syndicated from Dan Fisher original https://www.raspberrypi.org/blog/picademy-new-dates-announced/

Mad Fer It in Manchester

It’s been a while since we blogged on all things Picademy, so here’s a quick update…

For the uninitiated, Picademy is our free, two-day CPD event series for educators who want to use the Raspberry Pi for projects in the classroom. Over the past three months, we’ve been busy delivering four events in Manchester, creating over a hundred new Raspberry Pi Certified Educators in the process. The whole team was blown away by the passion of the people who attended. In fact, such was the rabid enthusiasm for Raspberry Pi in the area that we added two extra dates in April to cope with the demand – good job, Manchester!

A recent Picademy Manchester cohort

Picademy uses project-based learning to underpin its workshops, so that delegates can immediately see how the projects can be used in a classroom setting. This way of learning can be a little daunting for those who haven't been in the classroom as a student for a while, so we love it when people who might initially lack confidence using the Pi undergo a transformation and embrace the role reversal of teacher becoming student.

A willingness to embrace new ideas, being open to failure, and allowing yourself to make mistakes on the road to success are important messages to take away and think about from each event. One recent Picademy Manchester graduate has written a great blog post reflecting on her experiences at Picademy, and another praised the support she received:

“Thank you so much for a brilliant two days in Manchester. It’s one of the most supportive and inspiring events I have ever attended.”
Carol Macintosh, Picademy delegate

The Pi on the Tyne is all mine

With Picademy Manchester finishing in April, we can announce that our next location will be in Newcastle at Newcastle City Library, where we will be holding events in the spring and early summer.

To find out more and make an application, visit our Picademy Newcastle page.

Our condensed version of the Newcastle skyline in all its glory

Code Club Teacher Training

If you want something more compact to fit into your busy schedule, Code Club Teacher Training will also be running in Newcastle alongside Picademy events. The training is only two hours long and provides teachers with practical activities and engaging resources to develop young people’s understanding. The sessions are delivered in school, as INSET or twilight sessions, and are mapped against the new computing curriculum. We offer three modules: Computational Thinking, Programming, and Internet and the Web.

Request a Teacher Training session in your school: www.codeclubpro.org/request_training

The post Picademy: New dates announced! appeared first on Raspberry Pi.

A Raspberry Pi cosmic ray detector from folk at CERN

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/raspberry-pi-cosmic-ray-detector-from-cern/

A group of people from CERN is using their spare time to build Cosmic Pi, a cosmic ray detector based on a Raspberry Pi. Their goal is to crowdsource the world’s largest cosmic ray telescope by getting the devices into the hands of people and organisations around the globe, collecting data that will help astrophysicists understand more about these rays, several of which have passed through your body in the time it has taken you to read this paragraph. A video the team made last year explains the idea nicely:

Cosmic Pi

Uploaded by Cosmic Pi on 2015-05-07.

You can take a look at details of the team’s current Cosmic Pi prototype hardware and software, all available online. The cosmic-ray-detecting part consists of a scintillator, made of a material that absorbs energy from cosmic rays passing through it and then emits some of that energy in the form of photons; an optic fibre to trap these photons and carry them to the edges of the scintillator material; and a silicon photomultiplier at each end of the fibre to convert this light into an electrical signal that can be analysed by the computer. A blog post from the end of last year has more detail about the prototyping process and the current design.

From a hackathon to another : a year of CosmicPi evolution

On the first weekend of October, we were at CERN's Ideasquare participating in The Port 2015 hackathon. We gave an overview of the project in our final presentation, available to watch here and below. Our presentation at ThePort15 hackathon.

Because atmospheric conditions influence the flux of cosmic rays at the Earth’s surface, the team decided that it would be worthwhile including temperature, pressure and humidity sensors to monitor the weather. They also added a GPS module to allow devices to log their location (allowing altitude, another factor influencing flux, to be recorded too), and an accelerometer and magnetometer to provide additional information about the device’s orientation and position. Currently, an Arduino Due microcontroller reads the sensor data and passes them to the Raspberry Pi, which pre-processes and stores them; the Cosmic Pi team is prototyping a HAT to combine as many components as possible in a single PCB.
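On the Pi side, that handoff could look something like the sketch below — it assumes (this is an illustration, not taken from the Cosmic Pi sources) that the Due sends one JSON object per line over USB serial:

```python
import json

def parse_reading(line):
    """Parse one JSON-encoded sensor reading sent by the microcontroller.

    Returns None for lines that aren't valid JSON (e.g. boot noise).
    """
    try:
        reading = json.loads(line)
    except ValueError:
        return None
    # Keep only the fields we expect; ignore anything else.
    fields = ("timestamp", "temperature_c", "pressure_hpa",
              "humidity_pct", "lat", "lon", "alt_m")
    return {k: reading[k] for k in fields if k in reading}

sample = '{"timestamp": 1460000000, "temperature_c": 21.5, "pressure_hpa": 1013.2}'
parsed = parse_reading(sample)
```

In a real loop, each line would be read from the serial port (e.g. with pyserial) before being parsed, timestamped, and stored.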

Cosmic Pi HAT prototype and Raspberry Pi, with banana for scale. Photo by James Devine.

You can sign up to get notified when Cosmic Pi launches, which the team hope will happen with a Kickstarter campaign later in 2016, and they also intend to publish the design under an open source licence. They’re aiming to keep the cost of the whole package under $500, or about £350. While this is likely to be a bit steep for some individuals, we’d love to see organisations and groups like hackspaces using devices like this to contribute to what could be an amazingly valuable citizen science project. Keep an eye on the Cosmic Pi blog for updates!

The post A Raspberry Pi cosmic ray detector from folk at CERN appeared first on Raspberry Pi.

Indexing Amazon DynamoDB Content with Amazon Elasticsearch Service Using AWS Lambda

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/indexing-amazon-dynamodb-content-with-amazon-elasticsearch-service-using-aws-lambda/

Stephan Hadinger, Sr. Manager, Solutions Architecture

Mathieu Cadet, Account Representative

A lot of AWS customers have adopted Amazon DynamoDB for its predictable performance and seamless scalability. The main querying capabilities of DynamoDB are centered around lookups using a primary key. However, there are certain times where richer querying capabilities are required. Indexing the content of your DynamoDB tables with a search engine such as Elasticsearch would allow for full-text search.

In this post, we show how you can send changes to the content of your DynamoDB tables to an Amazon Elasticsearch Service (Amazon ES) cluster for indexing, using the DynamoDB Streams feature combined with AWS Lambda.


Architectural overview

Here’s a high-level overview of the architecture:

DynamoDB Streams to Elasticsearch bridge

We’ll cover the main steps required to put this bridge in place:

  1. Choosing the DynamoDB tables to index and enabling DynamoDB Streams on them.
  2. Creating an IAM role for accessing the Amazon ES cluster.
  3. Configuring and enabling the Lambda blueprint.


Choosing the DynamoDB table to index

In this post, you look at indexing the content of a product catalog in order to provide full-text search capabilities. You’ll index the content of a DynamoDB table called all_products, which is acting as the catalog of all products.

Here’s an example of an item stored in that table:

  "product_id": "B016JOMAEE",
  "name": "Serverless Single Page Apps: Fast, Scalable, and Available",
  "category": "ebook",
  "description": "AWS Lambda - A Guide to Serverless Microservices
                  takes a comprehensive look at developing 
                  serverless workloads using the new
                  Amazon Web Services Lambda service.",
  "author": "Matthew Fuller",
  "price": 15.0,
  "rating": 4.8

Enabling DynamoDB Streams

In the DynamoDB console, enable the DynamoDB Streams functionality on the all_products table by selecting the table and choosing Manage Stream.

Enabling DynamoDB Streams

Multiple options are available for the stream. For this use case, you need new items to appear in the stream; choose either New image or New and old images. For more information, see Capturing Table Activity with DynamoDB Streams.

DynamoDB Streams Options

After the stream is set up, make a note of the stream ARN. You'll need it later, when configuring the access permissions.
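If you prefer scripting to the console, the same stream can be enabled programmatically. The sketch below only builds the request parameters; the actual boto3 call is left commented out so you can review it before running it against your own account:

```python
# Parameters for enabling a stream that carries new and old item images.
# boto3's DynamoDB client accepts this dict verbatim via update_table().
params = {
    "TableName": "all_products",
    "StreamSpecification": {
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
}

# Uncomment to apply; the response's "LatestStreamArn" is the stream ARN
# you note down for the IAM policy below.
# import boto3
# client = boto3.client("dynamodb", region_name="us-east-1")
# resp = client.update_table(**params)
# stream_arn = resp["TableDescription"]["LatestStreamArn"]
```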

Finding a DynamoDB Stream ARN

Creating a new IAM role

The Lambda function needs read access to the DynamoDB stream just created. In addition, the function also requires access to the Amazon ES cluster to submit new records for indexing.

In the AWS Identity and Access Management (IAM) console, create a new role for the Lambda function and call it ddb-elasticsearch-bridge.

Creating new IAM role

As this role will be used by the Lambda function, choose AWS Lambda from the AWS Service Roles list.

Attaching policy to the role

On the following screens, choose the AWSLambdaBasicExecutionRole managed policy, which allows the Lambda function to send logs to Amazon CloudWatch Logs.

Configuring access to the Amazon ES cluster

First, you need a running Amazon ES cluster. In this example, create a search domain called inventory. After the domain has been created, note its ARN:

Attaching policy to the role

In the IAM console, select the ddb-elasticsearch-bridge role created earlier and add two inline policies to that role:

Attaching policy to the role

Here’s the policy to add to allow the Lambda code to push new documents to Amazon ES (replace the resource ARN with the ARN of your Amazon ES cluster):

    "Version": "2012-10-17",
    "Statement": [
            "Action": [
            "Effect": "Allow",
            "Resource": "arn:aws:es:us-east-1:0123456789:domain/inventory/*"

Important: you need to add /* to the resource ARN as depicted above.

Next, add a second policy for read access to the DynamoDB stream (replace the resource ARN with the ARN of your DynamoDB stream):

    "Version": "2012-10-17",
    "Statement": [
            "Action": [
            "Effect": "Allow",
            "Resource": [

Enabling the Lambda blueprint

When you log into the Lambda console and choose Create a Lambda Function, you are presented with a list of blueprints to use. Select the blueprint called dynamodb-to-elasticsearch.

dynamodb-to-elasticsearch blueprint

Next, select the DynamoDB table all_products as the event source:

Lambda event source

Then, customize the Lambda code to specify the Elasticsearch endpoint:

Customizing the blueprint

Finally, select the ddb-elasticsearch-bridge role created earlier to give the Lambda function the permissions required to interact with DynamoDB and the Amazon ES cluster:

Choosing a role

Testing the result

You’re all set!

After a few records have been added to your DynamoDB table, you can go back to the Amazon ES console and validate that a new index for your items has been automatically created:

Amazon ES indices

Playing with Kibana (Optional)

Elasticsearch is commonly used with Kibana for visual exploration of data.

To start querying the indexed data, create an index pattern in Kibana. Use the name of the DynamoDB table as an index pattern:

Kibana Index pattern

Kibana automatically determines the best type for each field:

Kibana Index pattern

Use a simple query to search the product catalog for all items in the category book containing the word aws in any field:

Kibana Index pattern

Other considerations

Indexing pre-existing content

The solution presented earlier is ideal to ensure that new data is indexed as soon it is added to the DynamoDB table. But what about pre-existing data stored in the table?

Luckily, the Lambda function used earlier can also be used to process data from an Amazon Kinesis stream, as long as the format of the data is similar to the DynamoDB Streams records.

Provided that you have an Amazon Kinesis stream set up as an additional input source for the Lambda code above, you can use the (very naive) sample Python3 code below to read the entire content of a DynamoDB table and push it to an Amazon Kinesis stream called ddb-all-products for indexing in Amazon ES.

import json
import boto3
import boto3.dynamodb.types

# Load the service resources in the desired region.
# Note: AWS credentials should be passed as environment variables
# or through IAM roles.
dynamodb = boto3.resource('dynamodb', region_name="us-east-1")
kinesis = boto3.client('kinesis', region_name="us-east-1")

# Load the DynamoDB table.
ddb_table_name = "all_products"
ks_stream_name = "ddb-all-products"
table = dynamodb.Table(ddb_table_name)

# Get the primary keys.
ddb_keys_name = [a['AttributeName'] for a in table.attribute_definitions]

# Scan operations are limited to 1 MB at a time.
# Iterate until all records have been scanned.
response = None
while True:
    if not response:
        # Scan from the start.
        response = table.scan()
    else:
        # Scan from where you stopped previously.
        response = table.scan(ExclusiveStartKey=response['LastEvaluatedKey'])

    for i in response["Items"]:
        # Get a dict of primary key(s).
        ddb_keys = {k: i[k] for k in i if k in ddb_keys_name}
        # Serialize Python dictionaries into DynamoDB notation.
        ddb_data = boto3.dynamodb.types.TypeSerializer().serialize(i)["M"]
        ddb_keys = boto3.dynamodb.types.TypeSerializer().serialize(ddb_keys)["M"]
        # The record must contain "Keys" and "NewImage" attributes to be similar
        # to a DynamoDB Streams record. Additionally, you inject the name of
        # the source DynamoDB table in the record so you can use it as an index
        # for Amazon ES.
        record = {"Keys": ddb_keys, "NewImage": ddb_data, "SourceTable": ddb_table_name}
        # Convert the record to JSON.
        record = json.dumps(record)
        # Push the record to Amazon Kinesis.
        res = kinesis.put_record(
            StreamName=ks_stream_name,
            Data=record,
            PartitionKey=str(i[ddb_keys_name[0]]))

    # Stop the loop if no additional records are
    # available.
    if 'LastEvaluatedKey' not in response:
        break
Note: In the code example above, you are passing the name of the source DynamoDB table as an extra record attribute SourceTable. The Lambda function uses that attribute to build the Amazon ES index name. Another approach for passing that information is tagging the Amazon Kinesis stream.

Now, create the Amazon Kinesis stream ddb-all-products and then add permissions to the ddb-elasticsearch-bridge role in IAM to allow the Lambda function to read from the stream:

    "Version": "2012-10-17",
    "Statement": [
            "Effect": "Allow",
            "Action": [
            "Resource": [

Finally, set the Amazon Kinesis stream as an additional input source to the Lambda function:

Amazon Kinesis input source

Neat tip: Doing a full re-index of the content this way will not create duplicate entries in Amazon ES.
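The reason a full re-index is idempotent is that the document ID sent to Amazon ES is derived from the item's primary key, so pushing the same item again overwrites the existing document rather than creating a new one. A sketch of that idea (the blueprint's actual ID scheme may differ in detail):

```python
def es_doc_id(ddb_keys):
    """Build a deterministic document ID from DynamoDB-notation keys.

    Same item in -> same ID out, so a re-push becomes an overwrite
    in Elasticsearch, not a duplicate document.
    """
    parts = []
    # Sort attribute names for a stable order across runs.
    for name in sorted(ddb_keys):
        # Each value looks like {"S": "..."} or {"N": "..."}.
        type_tag, value = next(iter(ddb_keys[name].items()))
        parts.append(str(value))
    return "|".join(parts)

doc_id = es_doc_id({"product_id": {"S": "B016JOMAEE"}})
```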

Paying attention to attribute types

With DynamoDB, you can use different types for the same attribute on different records, but Amazon ES expects a given attribute to be of only one type. Similarly, changing the type of an existing attribute after it has been indexed in Amazon ES causes problems and some searches won’t work as expected.
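Before indexing, a quick sanity check along these lines (an illustration, not part of the blueprint) can flag attributes whose type varies across records:

```python
def find_type_conflicts(items):
    """Return attribute names that appear with more than one Python type."""
    seen = {}
    for item in items:
        for key, value in item.items():
            seen.setdefault(key, set()).add(type(value).__name__)
    # Only report attributes observed with two or more distinct types.
    return {k: sorted(v) for k, v in seen.items() if len(v) > 1}

conflicts = find_type_conflicts([
    {"price": 15.0, "rating": 4.8},
    {"price": "15 USD", "rating": 4.5},   # price switched to a string
])
```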

In these cases, you must rebuild the Amazon ES index. For more information, see Reindexing Your Data in the Elasticsearch documentation.


In this post, you have seen how you can use AWS Lambda with DynamoDB to index your table content in Amazon ES as changes happen.

Because you are relying entirely on Lambda for the business logic, you don’t have to deal with servers at any point: everything is managed by the AWS platform in a highly available and scalable fashion. To learn more about Lambda and serverless infrastructures, see the Microservices without the Servers blog post.

Now that you have added full-text search to your DynamoDB table, you might be interested in exposing its content through a small REST API. For more information, see Using Amazon API Gateway as a proxy for DynamoDB.

Build a smart doorbell with Windows 10

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/build-smart-doorbell-windows-10/

When someone rings my doorbell at home, I walk to the door to find out who’s there. For those of you with larger homes, I know that it can be challenging to get there in time to release the hounds.
Architecture diagram
With you in mind, Kishore Gaddam has put together a tutorial showing how you can use Windows 10 and Visual Studio to build a doorbell that takes your visitor’s picture, uploads it to Azure, and sends a notification to your cellphone. Integration with your smart kennel door is left as an exercise for the reader.
Head over to hackster.io for all the gory details.
The post Build a smart doorbell with Windows 10 appeared first on Raspberry Pi.