Michael Portera’s trading card scanner uses LEGO, servo motors, and a Raspberry Pi with a Camera Module to scan Magic: The Gathering cards and look up their prices online. This is a neat and easy-to-recreate project that you can adapt for whatever your, or your younger self’s, favourite trading cards are.
For those of you who aren’t this nerdy [Janina is 100% this nerdy – Ed.], Magic: The Gathering (or MTG for short) is a trading card game first launched in 1993. It’s based on a sprawling fantasy multiverse storyline, and is very heavy on mechanics — the current comprehensive rules fill 228 pages! You can imagine it as being a bit like Dungeons and Dragons, with less role-playing and more of a chess vibe. Unlike in chess, however, you can beat your MTG opponent in one turn with just the right combination of cards. If that’s your style of play, that is.
Scanning trading cards
So far, there are around 20,000 official MTG cards, and, as with other types of trading cards, some of them are worth a lot of money.
Michael is one of the many people who were keen MTG players in their youth. Here’s how he came up with his project idea:
I was really into trading cards as a kid. I recently came across a lot of Magic: The Gathering cards in a box and thought to myself — I wonder how many cards I have and how much they’re worth?! Logging and looking these up manually would take a while, so I decided to see if I could automate some of the process. Somehow, the process led to building a platform out of Lego and leveraging AWS S3 and Rekognition.
LEGO, servos and camera
To build the housing of the scanner, Michael used LEGO, stating, “I’m not good at woodworking, and I thought that it might be rough on the cards.” While he doesn’t provide a build plan for the housing, Michael used only bricks from the LEGO Medium Creative Brick Box he bought for the project, and his tutorial includes plenty of pictures to guide you.
Servo motors spin plastic wheels to feed single cards from the stack loaded into the scanner. Michael positioned a Raspberry Pi Camera Module so that it can take a picture of the title of each card as it is set before the lens. The length of the camera’s ribbon cable gave Michael a little difficulty, so he recommends getting an extension for it if you’re planning to recreate the build.
Optical character recognition and MTG card price API
On the software side, Michael wrote three scripts. One is a Python script to control the servos and take pictures. This, he says, “[records] about 20–25 cards a minute.”
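That script lives in Michael’s write-up; as a flavour of the idea, here’s a minimal sketch of such a capture loop using the gpiozero and picamera libraries. The pin number, timings, and file names are assumptions for illustration, not Michael’s actual values.

# A minimal sketch of the scan loop, not Michael's exact code.
# Assumes a continuous-rotation servo on GPIO 17 feeding the cards.
from time import sleep
from gpiozero import Servo
from picamera import PiCamera

servo = Servo(17)
camera = PiCamera()

for i in range(100):                 # scan up to 100 cards
    servo.max()                      # spin the feed wheel
    sleep(0.5)
    servo.mid()                      # stop the wheel with a card in place
    sleep(0.5)
    camera.capture('card_{0:03d}.jpg'.format(i))   # photograph the card title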
Another script identifies the cards and looks up their prices automatically. Michael tried out OpenCV and Tesseract for optical character recognition (OCR) first, before settling on AWS S3 and Rekognition for storing and processing images, respectively. You’ll need an AWS account to do this — Michael used the free tier, which he says allows him to process 5,000 pictures per month.
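If you want to try the same pipeline, the Rekognition side boils down to one API call per image. Here’s a hedged sketch using boto3; the file name is a placeholder, and your AWS credentials and region must be configured separately.

# Sketch: extract text from a card photo with Amazon Rekognition.
import boto3

client = boto3.client('rekognition')

with open('card_001.jpg', 'rb') as f:
    response = client.detect_text(Image={'Bytes': f.read()})

# Keep whole detected lines; the card title is usually the first one.
lines = [d['DetectedText'] for d in response['TextDetections'] if d['Type'] == 'LINE']
print(lines[0] if lines else 'No text found')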
A sizeable collection
Finally, the data that Rekognition sends back gets processed by another Python script that looks up the identified cards on the TCGplayer API to find their price.
Michael says he’s very satisfied with the accuracy of the project’s OCR. He found out that the 920 Magic: The Gathering cards he scanned are worth about $275 in total. He provides a full write-up plus code over on hackster.io.
And for my next trick…
You might be thinking what I’m thinking: the logical next step for this project is to turn it into a card sorter. Then you could input a list of the card deck you want to put together, and presto! The device picks out the right cards from your collection. Building a Commander deck just became a little easier!
What trading cards would you use this project with, and how would you extend it? Also, what’s your favourite commander? Let me know in the comments!
After the outstanding success of their AIY Projects Voice and Vision Kits, Google has announced the release of upgraded kits, complete with Raspberry Pi Zero WH, Camera Module, and preloaded SD card.
Google’s AIY Projects Kits
Google launched the AIY Projects Voice Kit last year, first as a cover gift with The MagPi magazine and later as a standalone product.
Makers needed to provide their own Raspberry Pi for the original kit. The new kits include everything you need, from Pi to SD card.
Within a DIY cardboard box, makers were able to assemble their own voice-activated AI assistant akin to Amazon’s Alexa, Apple’s Siri, and Google’s own Google Home. The Voice Kit was an instant hit that spurred no end of maker videos and tutorials, including our own free tutorial for controlling a robot using voice commands.
Later in the year, the team followed up the success of the Voice Kit with the AIY Projects Vision Kit — the same cardboard box hosting a camera perfect for some pretty nifty image recognition projects.
For more on the AIY Voice Kit, here’s our release video hosted by the rather delightful Rob Zwetsloot.
Check out the exclusive Google AIY Projects Kit that comes free with The MagPi 57! Grab yourself a copy in stores or online now at http://magpi.cc/2pI6IiQ. This first AIY Projects kit taps into the Google Assistant SDK and Cloud Speech API using the AIY Projects Voice HAT (Hardware Attached on Top) board, stereo microphone, and speaker (included free with the magazine).
AIY Projects 2
So what’s new with version 2 of the AIY Projects Voice Kit? The kit now includes the recently released Raspberry Pi Zero WH, our Zero W with added pre-soldered header pins for instant digital making accessibility. Purchasers of the kits will also get a micro SD card with preloaded OS to help them get started without having to set the card up themselves.
Everything you need to build your own Raspberry Pi-powered Google voice assistant
“Everything you need to get started is right there in the box,” explains Billy Rutledge, Google’s Director of AIY Projects. “We knew from our research that even though makers are interested in AI, many felt that adding it to their projects was too difficult or required expensive hardware.”
Google is also hard at work producing AIY Projects companion apps for Android, iOS, and Chrome. The Android app is available now to coincide with the launch of the upgraded kits, with the other two due for release soon. The app supports wireless setup of the AIY Kit, though avid coders will still be able to hack theirs to better suit their projects.
Google has also updated the AIY Projects website with an AIY Models section highlighting a range of neural network projects for the kits.
Get your kit
The updated Voice and Vision Kits were announced last night, and in the US they are available now from Target. UK-based makers should be able to get their hands on them this summer — keep an eye on our social channels for updates and links.
Spring has sprung, and with it, sleepy-eyed wildlife is beginning to roam our gardens and local woodlands. So why not follow hackster.io maker reichley’s tutorial and build your own solar-powered squirrelhouse nature cam?
“I live half a mile above sea level and am SURROUNDED by animals…bears, foxes, turkeys, deer, squirrels, birds”, reichley explains in his tutorial. “Spring has arrived, and there are LOADS of squirrels running around. I was in the building mood and, being a nerd, wished to combine a common woodworking project with the connectivity and observability provided by single-board computers (and their camera add-ons).”
Building a tiny home
reichley started by sketching out a design for the house to determine where the various components would fit.
Since he’s a fan of autonomy and renewable energy, he decided to run the project’s Raspberry Pi Zero W on solar power. To do so, he revised the design to include the necessary tech, scaling the roof to fit the panels.
To keep the project running 24/7, reichley had to figure out the overall power consumption of both the Zero W and the Raspberry Pi Camera Module, factoring in the constant WiFi connection and the sunshine hours in his garden.
He used a LiPo SHIM to bump up the power to the required 5V for the Zero. Moreover, he added a BH1750 lux sensor to shut off the LiPo SHIM, and thus the Pi, whenever it’s too dark for decent video.
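The BH1750 is a simple I2C device, so polling it from Python takes only a few lines. Here’s a minimal sketch, assuming the sensor sits at its default address 0x23 on I2C bus 1; reichley’s wiring and thresholds may differ.

# Read lux from a BH1750 over I2C (address and mode from the datasheet).
import smbus
import time

BUS = smbus.SMBus(1)
ADDR = 0x23
ONE_TIME_HIGH_RES = 0x20

while True:
    data = BUS.read_i2c_block_data(ADDR, ONE_TIME_HIGH_RES, 2)
    lux = ((data[0] << 8) | data[1]) / 1.2   # conversion factor from the datasheet
    print('{:.1f} lux'.format(lux))
    time.sleep(60)    # check once a minute whether it's dark enough to power down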
To control the project, he used Calin Crisan’s motionEyeOS video surveillance operating system for single-board computers.
For those moments when you wish the cast of Disney’s Beauty and the Beast was real, only to realise what a nightmare that would be, here’s Paul-Louis Ageneau’s robotic teapot!
See what I mean?
Tale as old as time…
It’s the classic story of guy meets digital killer teapot, digital killer teapot inspires him to 3D print his own. Loosely based on a boss level of the video game Alice: Madness Returns, Paul-Louis’s creation is a one-eyed walking teapot robot with a (possible) thirst for blood.
Kill Build the beast
“My new robot is based on a Raspberry Pi Zero W with a camera,” Paul-Louis explains in his blog. “It is connected via a serial link to an Arduino Pro Mini board, which drives servos.”
Each leg has two points of articulation, one for the knee and one for the ankle. To move all of these joints, the teapot uses eight servo motors in total.
Paul-Louis designed and 3D printed the body of the teapot to fit the components needed. So if you’re considering this build as a means of acquiring tea on your laziest of days, I hate to be the bearer of bad news, but the most you’ll get from your pour will be jumper leads and Pi.
While the Arduino board controls the legs, it’s the Raspberry Pi’s job to receive user commands and tell the board how to direct the servos. The protocol for moving the servos is simple, with short lines of characters specifying instructions. First a digit from 0 to 7 selects a servo; next the angle of movement, such as 45 or 90, is input; and finally, the use of C commits the instruction.
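On the Pi’s side, that protocol is just a string written to the serial port. Here’s a minimal sketch using pyserial; the port name and baud rate are assumptions.

# Send one servo instruction over the Pi-to-Arduino serial link.
# '345C' means: servo 3, move to 45 degrees, commit.
import serial

link = serial.Serial('/dev/serial0', 9600)   # port and baud are assumptions

def move(servo, angle):
    link.write('{}{}C'.format(servo, angle).encode())

move(3, 45)   # bend one knee
move(2, 90)   # set the matching ankle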
Typing in commands is great for debugging, but you don’t want to be glued to a keyboard. Therefore, Paul-Louis continued to work on the code in order to string together several lines to create larger movements.
The final control system of the teapot runs on a web browser as a standard four-axis arrow pad, with two extra arrows for turning.
Something there that wasn’t there before
Paul-Louis also included an ‘eye’ in the side of the pot to fit the Raspberry Pi Camera Module as another nod to the walking teapot from the video game, but with a purpose other than evil and wrong-doing. As you can see from the image above, the camera live-streams footage, allowing for remote control of the monster teapot regardless of your location.
If you like it all that much, it’s yours
In case you fancy yourself as an inventor, Paul-Louis has provided the entire build process and the code on his blog, documenting how to bring your own teapot to life. And if you’ve created any robotic household items or any props from video games or movies, we’d love to see them, so leave a link in the comments or share it with us across social media using the hashtag #IBuiltThisAndNowIThinkItIsTryingToKillMe.
dekuNukem uses a Raspberry Pi 3, the Raspberry Pi camera module, and an OLED screen for the build. You don’t strictly need to include the OLED board, but it definitely adds to the overall effect, letting you view your daily and weekly screen time at a glance without having to access your Raspberry Pi for data.
As dekuNukem explains in the GitHub repo for the build, they used a perf board to mount the screen and attached it to the Raspberry Pi. This is a nice, simple means of pulling the whole project together without loose wires or the need for a modified case.
The face_recognition library lets the Pi and camera register your face. You’ll also need a well-lit 400×400 photograph of yourself to act as a reference for the library. From there, a few commands should get you started.
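Those commands follow the library’s standard pattern; here’s a minimal sketch of the comparison step, with placeholder file names.

# Compare a fresh camera snapshot against the stored reference photo.
import face_recognition

reference = face_recognition.load_image_file('me-400x400.jpg')
reference_encoding = face_recognition.face_encodings(reference)[0]

snapshot = face_recognition.load_image_file('latest-capture.jpg')
encodings = face_recognition.face_encodings(snapshot)

at_desk = any(face_recognition.compare_faces([reference_encoding], e)[0]
              for e in encodings)
print('At desk' if at_desk else 'Away')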
Uses for facial recognition
You could simply use facepunch for its intended purpose, but here at Pi Towers we’ve been discussing further uses for the build. We’re all guilty of sitting for too long at our desks, so why not incorporate a “get up and walk around” notification? How about a flashing LED that tells you to “drink some water”? You could even go a little deeper (though possibly a little Big Brother) and set up an “I’m back at my desk” notification on Slack, to let your colleagues know you’re available.
You could also take this foray into facial recognition and incorporate it into home automation projects: a user-identifying Magic Mirror, perhaps, or a doorbell that recognises friends and family.
What would you do with facial recognition on a Raspberry Pi?
A conversation with BMO showing off some voice recognition capabilities. There is no interaction for BMO’s responses other than voice commands. There is a small microphone inside BMO (right behind the blue dot) and the voice commands are processed by Google voice API over WiFi.
My first BMO began as a cosplay prop for my daughter. She and her friends are huge fans of Adventure Time and made their costumes for Princess Bubblegum, Marceline, and Finn. It was my job to come up with a BMO.
Bob as Banana Guard, daughter Laura as Princess Bubblegum, and son Steven as Finn
I wanted something electronic, and also interactive if possible. And it had to run on battery power. There was only one option that I found that would work: the Raspberry Pi.
Building a living little boy
BMO’s basic internals consist of the Raspberry Pi, an 8” HDMI monitor, and a USB battery pack. The body is made from laser-cut MDF wood, which I sanded, sealed, and painted. I added 3D-printed arms and legs along with some vinyl lettering to complete the look. There is also a small wireless keyboard that works as a remote control.
To make the front-panel buttons work, I created a custom PCB, mounted laser-cut acrylic buttons on it, and connected it to the Pi’s IO header.
Custom-made PCBs control BMO’s gaming buttons and USB input.
The USB jack is extended with another custom PCB, which gives BMO USB ports on the front panel. His battery life is an impressive 8 hours of continuous use.
The main brain game frame
Most of BMO’s personality comes from custom animations that my daughter created and that were then turned into MP4 video files. The animations are triggered by the remote keyboard. Some versions of BMO have an internal microphone, and the Google Voice API is used to translate the user’s voice and map it to an appropriate response, so it’s possible to have a conversation with BMO.
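Bob doesn’t list his trigger code here, but the idea maps onto a few lines of Python: read a key from the remote keyboard and hand the matching MP4 to omxplayer. The key-to-clip mapping below is invented for illustration.

# Play a BMO animation when a key is entered (a sketch, not Bob's code).
import subprocess

ANIMATIONS = {            # hypothetical key-to-clip mapping
    'h': 'bmo_happy.mp4',
    's': 'bmo_sad.mp4',
    'g': 'bmo_game_face.mp4',
}

while True:
    key = input('key> ').strip().lower()
    if key in ANIMATIONS:
        subprocess.call(['omxplayer', ANIMATIONS[key]])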
The Raspberry Pi Camera Module was also put to use. Some BMOs have a servo that can pop up a camera, called GoMO, which takes pictures. Although some people mistake it for ghost detecting equipment, BMO just likes taking nice pictures.
Who wants to play video games?
Playing games on BMO is as simple as loading one of the emulators supported by Raspbian.
I’m partial to the Atari 800 emulator, since I used to write games for that platform when I was just starting to learn programming. The front-panel USB ports are used for connecting gamepads, or his front-panel buttons and D-Pad can be used.
BMO has been a lot of fun to bring to conventions. He makes it to ComicCon San Diego each year and has been as far away as DragonCon in Atlanta, where he finally got to meet the voice of BMO, Niki Yang.
BMO’s back panel, autographed by Niki Yang
One day, I received an email from the producer of Adventure Time, Kelly Crews, with a very special request. Kelly was looking for a birthday present for the show’s creator, Pendleton Ward. It was either luck or coincidence that I was just finishing up the latest version of BMO. Niki Yang added some custom greetings just for Pen.
Happy birthday to Pendleton Ward, the creator of, well, you know what. We were asked to build Pen his very own BMO and with help from Niki Yang and the Adventure Time crew here is the result.
We added a few more items inside, including a 3D-printed heart, a medal, and a certificate, all of which come from the famous Be More episode that explains BMO’s origins.
BMO was quite a challenge to create. Fabricating the enclosure required several different techniques and materials. Fortunately, bringing him to life was quite simple once he had a Raspberry Pi inside!
Find out more
Be sure to follow Bob’s adventures with BMO at the Build Your Own BMO blog. And if you’ve built your own prop from television or film using a Raspberry Pi, be sure to share it with us in the comments below or on our social media channels.
Today, a guest post: Alasdair Davies, co-founder of Naturebytes, ZSL London’s Conservation Technology Specialist and Shuttleworth Foundation Fellow, shares the work of the Arribada Initiative. The project uses the Raspberry Pi Zero and camera module to follow the journey of green sea turtles. The footage captured from the backs of these magnificent creatures is just incredible – prepare to be blown away!
Footage from the new Arribada PS-C (pit-stop camera) video tag recently trialled on the island of Principe in unison with the Principe Trust. Engineered by Institute IRNAS (http://irnas.eu/) for the Arribada Initiative (http://blog.arribada.org/).
Access to affordable, open and customisable conservation technologies in the animal tracking world is often limited. I’ve been a conservation technologist for the past ten years, co-founding Naturebytes and working at ZSL London Zoo, and this was a problem that continued to frustrate me. It was inherently expensive to collect valuable data that was necessary to inform policy, to designate marine protected areas, or to identify threats to species.
In March this year, I got a supercharged opportunity to break through these barriers by becoming a Shuttleworth Foundation Fellow, meaning I had the time and resources to concentrate on cracking the problem. The Arribada Initiative was founded, and ten months later, the open source Arribada PS-C green sea turtle tag was born. The video above was captured two weeks ago in the waters of Principe Island, West Africa.
On route to Principe island with 10 second gen green sea #turtle tags for testing. This version has a video & accelerometer payload for behavioural studies, plus a nice wireless charging carry case made by @institute_irnas @ShuttleworthFdn
The tag comprises a Raspberry Pi Zero W sporting the Raspberry Pi camera module, a PiRA power management board, two lithium-ion cells, and a rather nice enclosure. It was built in unison with Institute IRNAS, and there’s a nice user-friendly wireless charging case to make it easy for the marine guards to replace the tags after their voyages at sea. When a tag is returned to one of the docking stations in the case, we use resin.io to manage it, download videos, and configure the tag remotely.
The tags can also be configured to take video clips at timed intervals, meaning we can now observe the presence of marine litter, plastic debris, before/after changes to the ocean environment due to nearby construction, pollution, and other threats.
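With the picamera library, that interval recording pattern is a one-screen script. A minimal sketch follows; the clip length and interval are assumptions, not the tag’s real settings.

# Record a short clip every hour (a sketch of the timed-interval idea).
from time import sleep
from picamera import PiCamera

camera = PiCamera()

clip = 0
while True:
    camera.start_recording('turtle_{0:04d}.h264'.format(clip))
    camera.wait_recording(60)        # one minute of footage
    camera.stop_recording()
    clip += 1
    sleep(3600)                      # then sleep for an hour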
Discarded fishing nets are lethal to sea turtles, so using this new tag at scale – now finally possible, as the Raspberry Pi Zero helps to drive down costs dramatically whilst retaining excellent video quality – offers real value to scientists in the field. Next year we will be releasing an optimised, affordable GPS version.
To make this all possible we had to devise a quicker method of attaching the tag to the sea turtles too, so we came up with the “pit-stop” technique (which is what the PS in the name “Arribada PS-C” stands for). Just as a Formula 1 car would visit the pits to get its tyres changed, we literally switch out the tags on the beach when nesting females return, replacing them with freshly charged tags by using a quick-release base plate.
About 6 days left now until the first tagged nesting green sea #turtles return using our latest “pit-stop” removeable / replaceable tag method. Counting down the days @arribada_i @institute_irnas
To implement the system we first epoxy the base plate to the turtle, which minimises any possible stress to the turtles as the method is quick. Once the epoxy has dried we attach the tag. When the turtle has completed its nesting cycle (they visit the beach to lay eggs three to four times in a single season, every 10–14 days on average), we simply remove the base plate to complete the field work.
If you’d like to watch more wonderful videos of the green sea turtles’ adventures, there’s an entire YouTube playlist available here. And to keep up to date with the initiative, be sure to follow Arribada and Alasdair on Twitter.
Allow your robots to join in the fun this Christmas with a round of Channel 4’s Countdown. https://www.rosietheredrobot.com/2017/12/tea-minus-30.html
Rosie the Red Robot
First, a little bit of backstory. Challenged by his eldest daughter to build a robot, technology-loving Alan got to work building Rosie.
I became (unusually) determined. I wanted to show her what can be done… and the how can be learnt later. After all, there is nothing more exciting and encouraging than seeing technology come alive. Move. Groove. Quite literally.
Originally, Rosie had a Raspberry Pi 3 brain controlling ultrasonic sensors and motors via Python. From there, she has evolved into something much grander, and Alan has documented her upgrades on the Rosie the Red Robot blog. Using GPS trackers and a Raspberry Pi camera module, she became Rosie Patrol, a rolling, walking, interactive bot; then, with further upgrades, the Tea Minus 30 project came to be. Which brings us back to Countdown.
T(ea) minus 30
In case it hasn’t been a big part of your life up until now, Countdown is one of the longest-running television shows in history, and it occupies a special place in British culture. Contestants take turns to fill a board with nine randomly selected vowels and consonants, before battling the Countdown clock to find the longest word they can in the space of 30 seconds.
I’ve had quite a few requests to show just the Countdown clock for use in school activities/own games etc., so here it is! Enjoy! It’s a brand new version too, using the 2010 Office package.
There’s a numbers round involving arithmetic, too – but for now, we’re going to focus on letters and words, because that’s where Rosie’s skills shine.
Using an online resource, Alan created a dataset of the ten thousand most common English words.
Many words, listed in order of common-ness. Alan wrote a Python script to order them alphabetically and by length
Next, Alan wrote a Python script to select nine letters at random, then search the word list to find all the words that could be spelled using only these letters. He used the randint function to select letters from a pre-loaded alphabet, and introduced a requirement to include at least two vowels among the nine letters.
Words that match the available letters are displayed on the screen.
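Alan’s own code is linked at the end of this post, but the core of the letters round fits in a short script. Here’s a sketch along the lines he describes; the word-list file name is a placeholder.

# Sketch: pick nine letters (at least two vowels) and find playable words.
from random import randint
from collections import Counter

ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
VOWELS = set('AEIOU')

def pick_letters():
    while True:
        letters = [ALPHABET[randint(0, 25)] for _ in range(9)]
        if sum(l in VOWELS for l in letters) >= 2:
            return letters

def playable(word, letters):
    # A word is playable if it never needs more copies of a letter than were drawn.
    need, have = Counter(word), Counter(letters)
    return all(have[c] >= n for c, n in need.items())

with open('common_words.txt') as f:     # the ten-thousand-word dataset
    words = [w.strip().upper() for w in f if w.strip()]

letters = pick_letters()
matches = sorted((w for w in words if playable(w, letters)), key=len)
print(''.join(letters))
print(matches[-5:])    # the five longest finds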
Putting it all together
With the basic game-play working, it was time to bring the project to life. For this, Alan used Rosie’s camera module, along with optical character recognition (OCR) and text-to-speech capabilities.
Alan writes, “Here’s a very amateurish drawing to brainstorm our idea. Let’s call it a design as it makes it sound like we know what we’re doing.”
Alan’s script has Rosie take a photo of the TV screen during the Countdown letters round, then perform OCR using the Google Cloud Vision API to detect the nine letters contestants have to work with. Next, Rosie runs Alan’s code to check the letters against the ten-thousand-word dataset, converts text to speech with Python gTTS, and finally speaks her highest-scoring word via omxplayer.
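Alan’s full code is on his blog; the glue between those services looks roughly like the sketch below, which uses the google-cloud-vision and gTTS packages with placeholder file names and a placeholder word.

# Sketch of the pipeline: OCR the photographed board, then speak a word.
import subprocess
from google.cloud import vision      # needs Google Cloud credentials configured
from gtts import gTTS

client = vision.ImageAnnotatorClient()
with open('tv_screen.jpg', 'rb') as f:
    image = vision.Image(content=f.read())
response = client.text_detection(image=image)
letters = response.text_annotations[0].description if response.text_annotations else ''
print('Detected letters:', letters)

# ...the word search from the earlier sketch goes here...

speech = gTTS(text='countdown', lang='en')   # placeholder word
speech.save('answer.mp3')
subprocess.call(['omxplayer', 'answer.mp3'])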
You can follow the adventures of Rosie the Red Robot on her blog, or follow her on Twitter. And if you’d like to build your own Rosie, Alan has provided code and tutorials for his projects too. Thanks, Alan!
Hey folks, Rob here again! You get a double dose of me this month, as today marks the release of The MagPi 64. In this issue we give you a complete electronics starter guide to help you learn how to make circuits that connect to your Raspberry Pi!
Wires, wires everywhere!
In the electronics feature, we’ll teach you how to identify different components in circuit diagrams, we’ll explain what they do, and we’ll give you some basic wiring instructions so you can take your first steps. The feature also includes step-by-step tutorials on how to make a digital radio and a range-finder, meaning you can test out your new electronics skills immediately!
Electronics are cool, but what else is in this issue? Well, we have exciting news about the next Google AIY Projects Vision kit, which forgoes audio for images, allowing you to build a smart camera with your Raspberry Pi.
We’ve also included guides on how to create your own text-based adventure game and a kaleidoscope camera. And, just in time for the festive season, there’s a tutorial for making a 3D-printed Pi-powered Christmas tree star. All this in The MagPi 64, along with project showcases, reviews, and much more!
Using a normal webcam or the Raspberry Pi Camera Module, you can produce real-time kaleidoscope effects with the Raspberry Pi. This video shows the normal mode, along with an auto pre-rotate, and a horizontal and vertical flip.
Get The MagPi 64
Issue 64 is available today from WHSmith, Tesco, Sainsbury’s, and Asda. If you live in the US, head over to your local Barnes & Noble or Micro Center in the next few days. You can also get the new issue online from our store, or digitally via our Android and iOS apps. And don’t forget, there’s always the free PDF as well.
Subscribe for free goodies
Want to support the Raspberry Pi Foundation and the magazine, and get some cool free stuff? If you take out a twelve-month print subscription to The MagPi, you’ll get a Pi Zero W, Pi Zero case, and adapter cables absolutely free! This offer does not currently have an end date.
To some people, the idea of a fully autonomous corporation might seem like the beginning of the end. However, while the BitBarista coffee machine prototype can indeed run itself without any human interference, it also teaches a lesson about ethical responsibility and the value of quality.
Bitcoin coffee machine that engages coffee drinkers in the value chain
If you’ve played Paperclips, you get it. And in case you haven’t played Paperclips, I will only say this: give a robot one job and full control to complete the task, and things may turn in a very unexpected direction. Or, in the case of Rick and Morty, they end in emotional breakdown.
Recently described by the BBC as ‘a coffee machine with a life of its own, dispensing coffee to punters with an ethical preference’, BitBarista works in conjunction with customers to source coffee and complete maintenance tasks in exchange for Bitcoin payments. Customers pay for their coffee in Bitcoin, and when BitBarista needs maintenance such as cleaning, water replenishment, or restocking, it can pay the same customers for completing those tasks.
Moreover, customers choose which coffee beans the machine purchases based on quality, price, environmental impact, and social responsibility. BitBarista also collects and displays data on the most common bean choices.
So not only is BitBarista a study into the concept of full autonomy, it’s also a means of data collection about the societal preference of cost compared to social and environmental responsibility.
Many people already have store-bought autonomous technology within their homes, such as the Roomba vacuum cleaner or the Nest Smart Thermostat. And within the maker community, still more have created such devices using sensors, mobile apps, and single-board computers such as the Raspberry Pi. We see examples using the Raspberry Pi on a daily basis, from simple motion-controlled lights and security cameras to advanced devices using temperature sensors and WiFi technology to detect the presence of specific people.
In this video, we use a Raspberry Pi Zero W and a Raspberry Pi camera to make a smart security camera! The camera uses object detection (with OpenCV) to send you an email whenever it sees an intruder. It also runs a webcam so you can view live video from the camera when you are away.
To get started building your own autonomous technology, you could have a look at our resources Laser tripwire and Getting started with picamera. These will help you build a visitor register of everyone who crosses the threshold of a specific room.
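As a taste of what those resources cover, here’s a minimal visitor-register sketch, assuming the tripwire’s light-dependent resistor circuit is wired to GPIO 18.

# Sketch: log a timestamp whenever the laser beam across the doorway is broken.
from datetime import datetime
from gpiozero import LightSensor

beam = LightSensor(18)

while True:
    beam.wait_for_dark()     # beam broken: someone crossed the threshold
    with open('visitors.log', 'a') as log:
        log.write(datetime.now().isoformat() + '\n')
    beam.wait_for_light()    # wait for the beam to be restored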
After making a delightful Bitcoin lottery using a Raspberry Pi, Sean Hodgins brings us more Pi-powered goodness in time for every maker’s favourite holiday: Easter! Just kidding, it’s Halloween. Check out his hair-raising new build, the Haunted Jack in the Box.
This project uses a raspberry pi and face detection using the pi camera to determine when someone is looking at it. Plenty of opportunities to scare people with it. You can make your own!
Imagine yourself wandering around a dimly lit house. Your eyes idly scan a shelf. Suddenly, out of nowhere, a twangy melody! What was that? You take a closer look…there seems to be a box in jolly colours…with a handle that’s spinning by itself?!
You freeze, unable to peel your eyes away, and BAM!, out pops a maniacally grinning clown. You promptly pee yourself. Happy Halloween, courtesy of Sean Hodgins.
Eerie disembodied voice: You’re welco-o-o-ome!
How has Sean built this?
Sean purchased a jack-in-the-box toy and replaced its bottom side with one that would hold the necessary electronic components. He 3D-printed this part, but says you could also just build it by hand.
The bottom of the box houses a Raspberry Pi 3 Model B and a servomotor which can turn the windup handle. There’s also a magnetic reed switch which helps the Pi decide when to trigger the Jack. Sean hooked up the components to the Pi’s GPIO pins, and used an elastic band as a drive belt to connect the pulleys on the motor and the handle.
Sean explains that he has used a lot of double-sided tape and superglue in this build. The bottom and top are held together with two screws, because, as he describes it, “the Jack coming out is a little violent.”
But if I explain it, it won’t be scary anymore! OK, fiiiine.
With the help of a Camera Module and OpenCV, Sean implemented facial recognition: Jack knows when someone is looking at his box, and responds by winding up and popping out.
Testing the haunting script
Sean’s Python script is available here, but as he points out, there are many ways in which you could adapt this code, and the build itself, to be even more frightening.
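For reference, the detection core in OpenCV needs only a few lines. This sketch shows the general pattern with the stock Haar cascade, not Sean’s exact code; the snapshot file name is a placeholder.

# Bare-bones OpenCV face detection on a captured frame.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

frame = cv2.imread('snapshot.jpg')                 # placeholder capture
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

if len(faces) > 0:
    print('Someone is looking - wind the handle!')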
So very haunted
What would you do with this build? Add creepy laughter? Soundbites from It? Lighting effects? Maybe even infrared light and a NoIR Camera Module, so that you can scare people in total darkness? There are so many possibilities for this project — tell us your idea in the comments.
Did you realise the Sense HAT has been available for over two years now? Used by astronauts on the International Space Station, the exact same hardware is available to you on Earth. With a new Astro Pi challenge just launched, it’s time for a retrospective/roundup/inspiration post about this marvellous bit of kit.
The Sense HAT on a Pi in full glory
The Sense HAT explained
We developed our scientific add-on board to be part of the Astro Pi computers we sent to the International Space Station with ESA astronaut Tim Peake. For a play-by-play of Astro Pi’s history, head to the blog archive.
Just to remind you, this is all the cool stuff our engineers have managed to fit onto the HAT:
an 8×8 RGB LED matrix
a five-button joystick
an accelerometer, a gyroscope, and a magnetometer
temperature, barometric pressure, and humidity sensors
Use the LED matrix and joystick to recreate games such as Pong or Flappy Bird. Of course, you could also add sensor input to your game: code an egg drop game or a Magic 8 Ball that reacts to how the device moves.
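To give you a feel for the API, here’s a minimal sketch that moves a lit pixel around the LED matrix with the joystick: the skeleton of a game like Pong.

# Move a single lit pixel with the Sense HAT joystick.
from sense_hat import SenseHat

sense = SenseHat()
x, y = 4, 4
sense.set_pixel(x, y, 255, 255, 255)

while True:
    event = sense.stick.wait_for_event()
    if event.action != 'pressed':
        continue
    sense.set_pixel(x, y, 0, 0, 0)            # clear the old position
    if event.direction == 'up':
        y = max(0, y - 1)
    elif event.direction == 'down':
        y = min(7, y + 1)
    elif event.direction == 'left':
        x = max(0, x - 1)
    elif event.direction == 'right':
        x = min(7, x + 1)
    sense.set_pixel(x, y, 255, 255, 255)      # draw the new one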
It’s also possible to incorporate Sense HAT data into your digital art! The Python Turtle module and the Processing language are both useful tools for creating beautiful animations based on real-world information.
A Sense HAT project that also uses this principle is Giorgio Sancristoforo’s Tableau, a ‘generative music album’. This device creates music according to the sensor data:
“There is no doubt that, as music is removed by the phonograph record from the realm of live production and from the imperative of artistic activity and becomes petrified, it absorbs into itself, in this process of petrification, the very life that would otherwise vanish.”
Our online resource shows you how to record the information your HAT picks up. Next you can analyse and graph your data using Mathematica, which is included for free on Raspbian. This resource walks you through how this software works.
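The recording side needs nothing more than a loop writing CSV, which Mathematica (or any other tool) can then import. A minimal sketch, with the one-minute interval as an assumption:

# Log Sense HAT readings to a CSV file, once a minute.
import csv
import time
from sense_hat import SenseHat

sense = SenseHat()

with open('sensehat_log.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['time', 'temperature_C', 'humidity_pct', 'pressure_mbar'])
    while True:
        writer.writerow([time.time(),
                         sense.get_temperature(),
                         sense.get_humidity(),
                         sense.get_pressure()])
        f.flush()
        time.sleep(60)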
But you can also stick to terrestrial scientific investigations. For example, why not build a weather station and share its data on your own web server or via Weather Underground?
Your code in space!
If you’re a student or an educator in one of the 22 ESA member states, you can get a team together to enter our 2017-18 Astro Pi challenge. There are two missions to choose from, including Mission Zero: follow a few guidelines, and your code is guaranteed to run in space!
When Estefannie visited Pi Towers earlier this year, we introduced her to the Raspberry Pi Digital Curriculum and the free resources on our website. We’d already chatted to her via email about the idea of creating a collab video for the Raspberry Pi channel. Once she’d met members of the Raspberry Pi Foundation team and listened to them wax lyrical about the work we do here, she was even more keen to collaborate with us.
Ahhhh!!! I still can’t believe I got to hang out and make stuff at the @Raspberry_Pi towers!! Thank you thank you!!
Estefannie returned to the US filled with inspiration for a video for our channel, and we’re so pleased with how awesome her final result is. The video is a super addition to our Raspberry Pi YouTube channel, it shows what our resources can help you achieve, and it’s great fun. You might also have noticed that the project fits in perfectly with this season’s Pioneers challenge. A win all around!
So yeah, we’re really chuffed about this video, and we hope you all like it too!
Estefannie’s Laser Cookies guide
For those of you wanting to try your hand at building your own Cookie Jar Laser Surveillance Security System, Estefannie has provided a complete guide to talk you through it. Here she goes:
First off, you’ll need:
1 Raspberry Pi Zero W
1 Raspberry Pi Camera Module
12 ft PVC pipes + 4 corners
1 acrylic panel
1 battery pack
8 zip ties
tons of cookies
I used the Raspberry Pi Foundation’s Laser trip wire and the Tweeting Babbage resources to get one laser working and to set up the camera and Twitter API. This took me less than an hour, and it was easy, breezy, beautiful, Raspberry Pi.
I soldered ten lasers in parallel and connected ten photoresistors to their own GPIO pins. I didn’t wire them up in series because of sensitivity reasons and to make debugging easier.
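With each photoresistor on its own GPIO pin, the watch loop stays simple. Here’s a minimal sketch of the idea; the pin numbers are assumptions, and Estefannie’s real code is on her GitHub.

# Sketch: watch all ten beams; any one going dark means the jar is breached.
from signal import pause
from gpiozero import LightSensor

PINS = [4, 5, 6, 12, 13, 16, 19, 20, 21, 26]   # assumed wiring
beams = [LightSensor(pin) for pin in PINS]

def breached(sensor):
    # In the real build: take a photo and send the tweet.
    print('Beam on {} broken!'.format(sensor.pin))

for beam in beams:
    beam.when_dark = breached    # gpiozero passes the triggering sensor

pause()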
Building the frame took a few tries: I actually started with a wood frame, then tried a clear case, and finally realized the best and cleanest solution would be pipes. All the wires go inside the pipes and come out through a small window on the top to wire up to the Zero W.
Using pipes also made the build cheaper, since they were about $3 for 12 ft. Wiring inside the pipes was tricky, and to finish the circuit, I soldered some of the wires after they were already in the pipes.
I tried gluing the lasers to the frame, but the lasers melted the glue and became decalibrated. Next I tried tape, and then I found picture mounting putty. The putty worked perfectly — it was easy to mold a putty base for the lasers and to calibrate and re-calibrate them if needed. Moreover, the lasers stayed in place no matter how hot they got.
Although the lasers were not very strong, I still strained my eyes after long hours of calibrating — hence the sunglasses! Working indoors with lasers, sunglasses, and code was weird. But now I can say I’ve done that…in my kitchen.
Using all the knowledge I have shared, this project should take a couple of hours. The code you need lives on my GitHub!
“The cookie recipe is my grandma’s, and I am not allowed to share it.”
Joe Herman’s Pi Film Capture project combines old projectors and a stepper motor with a Raspberry Pi and a Raspberry Pi Camera Module, to transform his grandfather’s 8- and 16-mm home movies into glorious digital films.
Joe’s grandfather, Leo Willmott, loved recording home movies of his family of eight children and their grandchildren. He passed away when Joe was five, but in 2013 Joe found a way to connect with his legacy: while moving house, a family member uncovered a box of more than a hundred of Leo’s film reels. These covered decades of family history, and some dated back as far as 1939.
Kodachrome film reels of the type Leo used
This provided an unexpected opportunity for Leo’s family to restore some of their shared history. Joe immediately made plans to digitise the material, knowing that the members of his extensive family tree would provide an eager audience.
Building Pi Film Capture
After a failed attempt with a DSLR camera, Joe realised he couldn’t simply re-film the movies — instead, he would have to capture each frame individually. He combined a Raspberry Pi with an old Super 8 projector, and set about rigging up something to do just that.
He went through numerous stages of prototyping, and his final hardware setup works very well. A NEMA 17 stepper motor moves the film reel forward in the projector. A magnetic reed switch triggers the Camera Module each time the reel moves on to the next frame. Joe hacked the Camera Module so that it has a different focal distance, and he also added a magnifying lens. Moreover, he realised it would be useful to have a diffuser to ‘smooth’ some of the faults in the aged film reel material. To do this, he mounted “a bit of translucent white plastic from an old ceiling fixture” parallel with the film.
Joe’s 16-mm projector, with embedded Raspberry Pi hardware
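Joe’s code handles far more edge cases, but the heart of the frame-by-frame loop can be sketched in a few lines, assuming the stepper sits behind a simple step-pulse driver and using placeholder pin numbers.

# Sketch: advance the film, wait for the reed switch, capture a frame.
from time import sleep
from gpiozero import Button, DigitalOutputDevice
from picamera import PiCamera

step = DigitalOutputDevice(20)   # STEP pin of the motor driver (assumption)
frame_switch = Button(21)        # magnetic reed switch (assumption)
camera = PiCamera()

def pulse():
    step.on()
    sleep(0.001)
    step.off()
    sleep(0.001)

frame = 0
while True:
    while frame_switch.is_pressed:       # step off the previous frame first
        pulse()
    while not frame_switch.is_pressed:   # then step until the next frame lands
        pulse()
    camera.capture('frame_{0:05d}.jpg'.format(frame))
    frame += 1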
In addition to capturing every single frame (sometimes with multiple exposure settings), Joe found that he needed intensive post-processing to restore some of the films. He settled on sending the images from the Pi to a more powerful Linux machine. To enable processing of the raw data, he had to write Python scripts implementing several open-source software packages. For example, to deal with the varying quality of the film reels more easily, Joe implemented a GUI (written with the help of PyQt), which he uses to change the capture parameters. This was a demanding job, as he was relatively new to using these tools.
The top half of Joe’s GUI, because the whole thing is really long and really thin and would have looked weird on the blog…
If a frame is particularly damaged, Joe can capture multiple instances of the image at different settings. These are then merged to achieve a good-quality image using OpenCV functionality. Joe uses FFmpeg to stitch the captured images back together into a film. Some of his grandfather’s reels were badly degraded, but luckily Joe found scripts written by other people to perform advanced digital restoration of film with AviSynth. He provides code he has written for the project on his GitHub account.
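The stitching step itself is a one-liner. A typical FFmpeg invocation looks like the line below; the frame rate and file names are assumptions (8-mm film often runs at around 16 frames per second), not Joe’s exact settings.

ffmpeg -framerate 16 -i frame_%05d.jpg -c:v libx264 -pix_fmt yuv420p restored_film.mp4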
For an account of the project in his own words, check out Joe’s guest post on the IEEE Spectrum website. He also described some of the issues he encountered, and how he resolved them, in The MagPi.
What does Pi Film Capture deliver?
Joe provides videos related to Pi Film Capture on two sites: on his YouTube channel, you’ll find videos in which he has documented the build process of his digitising project. Final results of the project live on Joe’s Vimeo channel, where so far he has uploaded 55 digitised home videos.
Shot on 8mm by Leo Willmott, captured and restored by Joe Herman (Not a Wozniak film, but placed in that folder b/c it may be of interest to Hermans)
We’re beyond pleased that our tech is part of this amazing project, helping to reconnect the entire Herman/Willmott clan with their past. And it was great to be able to catch up with Joe, and talk about his build at Maker Faire last year!
Maker Faire New York 2017
We’ll be at Maker Faire New York again on 23–24 September, and we can’t wait to see the amazing makes the Raspberry Pi community will be presenting there!
Are you going to be at MFNY to show off your awesome Pi-powered project? Tweet us, so we can meet up, check it out and share your achievements!
Head and Shoulders Scan with 29 Raspberry Pi Cameras
Uses for full-body 3D scanning
Poppy herself wanted to use the scanner in her work as a fashion designer. With the help of 3D scans of her models, she would be able to create custom cardboard dressmaker’s dummies to ensure her designs fit perfectly. This is a brilliant way of incorporating digital tech into another industry – and it’s not the only application for this sort of build. Growing numbers of businesses use 3D body scanning; for example, there are stores around the world where customers can have themselves 3D scanned and printed as action-figure-sized replicas.
We’ve also seen the same technology used in video games for more immersive virtual reality. Moreover, there are various uses for it in healthcare and fitness, such as monitoring the effect of exercise regimes or physiotherapy on body shape or posture.
Within a makespace environment, a 3D body scanner opens the door to including new groups of people in community make projects: imagine 3D printing miniatures of a theatrical cast to allow more realistic blocking of stage productions and better set design, or annually sending grandparents a print of their grandchild so they can compare the child’s year-on-year growth in a hands-on way.
The Germany-based clothing business Outfittery uses full-body scanners to take the stress out of finding clothes that fit well. Image c/o Outfittery
As cheesy as it sounds, the only limit for the use of 3D scanning is your imagination…and maybe storage space for miniature prints.
Poppy’s Raspberry Pi 3D Body Scanner
For her build, Poppy acquired 27 Raspberry Pi Zeros and 27 Raspberry Pi Camera Modules. With various other components, some 3D-printed or made of cardboard, Poppy got to work. She was helped by members of Build Brighton and by her friend Arthur Guy, who also wrote the code for the scanner.
The Pi Zeros run Raspbian Lite, and are connected to a main server running a node application. Each is fitted into its own laser-cut cardboard case, and secured to a structure of cardboard tubing and 3D-printed connectors.
In the finished build, the person to be scanned stands at the centre of the structure, and the press of a button sends the signal for all the Pis to take a photo. The images are sent back to the server and processed with Autodesk ReMake, freemium software available for PC (Poppy discovered part-way through the project that the Mac version had recently lost support).
Build your own
Obviously there’s a lot more to the process of building this full-body 3D scanner than what I’ve reported in these few paragraphs. And since it was Poppy’s goal to make a readily available and affordable scanner that anyone can recreate, she’s provided all the instructions and code for it on her Instructables page.
Projects like this, in which people use the Raspberry Pi to create affordable and interesting tech for communities, are exactly the type of thing we love to see. Always make sure to share your Pi-based projects with us on social media, so we can boost their visibility!
If you’re a member of a makespace, run a workshop in a school or club, or simply love to tinker and create, this build could be the perfect addition to your workshop. And if you recreate Poppy’s scanner, or build something similar, we’d love to see the results in the comments below.
The modest dimensions of our Raspberry Pi Zero and its wirelessly connectable sibling, the Pi Zero W, enable makers in our community to build devices that are very small indeed. The PiCorder built by Wayne Keenan is probably the slimmest Pi-powered video-recording device we’ve ever seen.
A simple Pi-camcorder using @pimoroni #HyperPixel, ZeroLipo, lipo bat, camera and #PiZeroW. All parts from the Pirates, total of ~£85. Project build instructions: https://www.hackster.io/TheBubbleworks/picorder-0eb94d
Wayne’s PiCorder is a very straightforward make. On the hardware side, it features a Pimoroni HyperPixel screen, Pi Zero camera module, and Zero LiPo plus LiPo battery pack. To put it together, he simply soldered header pins onto a Zero W, and connected all the components to it – easy as Pi! (Yes, I went there.)
So sleek as to be almost aerodynamic
Recording with the PiCorder (rePiCording?)
Then it was just a matter of installing the HyperPixel driver on the Pi, and the PiCorder was good to go. In this basic setup, recording is controlled via SSH. However, there’s a discussion about better ways to control the device in the comments on Wayne’s write-up. As the HyperPixel is a touchscreen, adding a GUI would make full use of its capabilities.
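For the moment, kicking off a recording over SSH takes a single command; something like the line below, where the length (in milliseconds), resolution, and file name are examples rather than Wayne’s settings.

raspivid -t 30000 -w 1280 -h 720 -o picorder.h264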
Think about how many screens you’re looking at right now
The PiCorder is a great project to recreate if you’re looking to build a small portable camera. If you’re new to soldering, this build is perfect for you: just follow our ‘How to solder’ video and tutorial, and you’re on your way. This could be the start of your journey into the magical world of physical computing!
Today we’ve released another update to the Raspbian desktop. In addition to the usual small tweaks and bug fixes, the big new changes are the inclusion of an offline version of Scratch 2.0, and of Thonny (a user-friendly IDE for Python which is excellent for beginners). We’ll look at all the changes in this post, but let’s start with the biggest…
Scratch 2.0 for Raspbian
Scratch is one of the most popular pieces of software on Raspberry Pi. This is largely due to the way it makes programming accessible – while it is simple to learn, it covers many of the concepts that are used in more advanced languages. Scratch really does provide a great introduction to programming for all ages.
Raspbian ships with the original version of Scratch, which is now at version 1.4. A few years ago, though, the Scratch team at the MIT Media Lab introduced the new and improved Scratch version 2.0, and ever since we’ve had numerous requests to offer it on the Pi.
There was, however, a problem with this. The original version of Scratch was written in a language called Squeak, which could run on the Pi in a Squeak interpreter. Scratch 2.0, however, was written in Flash, and was designed to run from a remote site in a web browser. While this made Scratch 2.0 a cross-platform application, which you could run without installing any Scratch software, it also meant that you had to be able to run Flash on your computer, and that you needed to be connected to the internet to program in Scratch.
We worked with Adobe to include the Pepper Flash plugin in Raspbian, which enables Flash sites to run in the Chromium browser. This addressed the first of these problems, so the Scratch 2.0 website has been available on Pi for a while. However, it still needed an internet connection to run, which wasn’t ideal in many circumstances. We’ve been working with the Scratch team to get an offline version of Scratch 2.0 running on Pi.
The Scratch team had created a website to enable developers to create hardware and software extensions for Scratch 2.0; this provided a version of the Flash code for the Scratch editor which could be modified to run locally rather than over the internet. We combined this with a program called Electron, which effectively wraps up a local web page into a standalone application. We ended up with the Scratch 2.0 application that you can find in the Programming section of the main menu.
Physical computing with Scratch 2.0
We didn’t stop there though. We know that people want to use Scratch for physical computing, and it has always been a bit awkward to access GPIO pins from Scratch. In our Scratch 2.0 application, therefore, there is a custom extension which allows the user to control the Pi’s GPIO pins without difficulty. Simply click on ‘More Blocks’, choose ‘Add an Extension’, and select ‘Pi GPIO’. This loads two new blocks, one to read and one to write the state of a GPIO pin.
The Scratch team kindly allowed us to include all the sprites, backdrops, and sounds from the online version of Scratch 2.0. You can also use the Raspberry Pi Camera Module to create new sprites and backgrounds.
This first release works well, although it can be slow for some operations; this is largely unavoidable for Flash code running under Electron. Bear in mind that you will need to have the Pepper Flash plugin installed (which it is by default on standard Raspbian images). As Pepper Flash is only compatible with the processor in the Pi 2 and Pi 3, it is unfortunately not possible to run Scratch 2.0 on the Pi Zero or the original models of the Pi.
We hope that this makes Scratch 2.0 a more practical proposition for many users than it has been to date. Do let us know if you hit any problems, though!
Thonny: a more user-friendly IDE for Python
One of the paths from Scratch to ‘real’ programming is through Python. We know that the transition can be awkward, and this isn’t helped by the tools available for learning Python. It’s fair to say that IDLE, the Python IDE, isn’t the most popular piece of software ever written…
Earlier this year, we reviewed every Python IDE that we could find that would run on a Raspberry Pi, in an attempt to see if there was something better out there than IDLE. We wanted to find something that was easier for beginners to use but still useful for experienced Python programmers. We found one program, Thonny, which stood head and shoulders above all the rest. It’s a really user-friendly IDE, which still offers useful professional features like single-stepping of code and inspection of variables.
Thonny was created at the University of Tartu in Estonia; we’ve been working with Aivar Annamaa, the lead developer, on getting it into Raspbian. The original version of Thonny works well on the Pi, but because the GUI is written using Python’s default GUI toolkit, Tkinter, the appearance clashes with the rest of the Raspbian desktop, most of which is written using the GTK toolkit. We made some changes to bring things like fonts and graphics into line with the appearance of our other apps, and Aivar very kindly took that work and converted it into a theme package that could be applied to Thonny.
Due to the limitations of working within Tkinter, the result isn’t exactly like a native GTK application, but it’s pretty close. It’s probably good enough for anyone who isn’t a picky UI obsessive like me, anyway! Have a look at the Thonny webpage to see some more details of all the cool features it offers. We hope that having a more usable environment will help to ease the transition from graphical languages like Scratch into ‘proper’ languages like Python.
Other than these two new packages, this release is mostly bug fixes and small version bumps. One thing you might notice, though, is that we’ve made some tweaks to our custom icon set. We wondered if the icons might look better with slightly thinner outlines. We tried it, and they did: we hope you prefer them too.
Downloading the new image
You can either download a new image from the Downloads page, or you can use apt to update:
sudo apt-get update
sudo apt-get dist-upgrade
To install Scratch 2.0:
sudo apt-get install scratch2
To install Thonny:
sudo apt-get install python3-thonny
One more thing…
Before Christmas, we released an experimental version of the desktop running on Debian for x86-based computers. We were slightly taken aback by how popular it turned out to be! This made us realise that this was something we were going to need to support going forward. We’ve decided we’re going to try to make all new desktop releases for both Pi and x86 from now on.
The version of this we released last year was a live image that could run from a USB stick. Many people asked if we could make it permanently installable, so this version includes an installer. This uses the standard Debian install process, so it ought to work on most machines. I should stress, though, that we haven’t been able to test on every type of hardware, so there may be issues on some computers. Please be sure to back up your hard drive before installing it. Unlike the live image, this will erase and reformat your hard drive, and you will lose anything that is already on it!
You can still boot the image as a live image if you don’t want to install it, and it will create a persistence partition on the USB stick so you can save data. Just select ‘Run with persistence’ from the boot menu. To install, choose either ‘Install’ or ‘Graphical install’ from the same menu. The Debian installer will then walk you through the install process.
You can download the latest x86 image (which includes both Scratch 2.0 and Thonny) from here or here for a torrent file.
One final thing
This version of the desktop is based on Debian Jessie. Some of you will be aware that a new stable version of Debian (called Stretch) was released last week. Rest assured – we have been working on porting everything across to Stretch for some time now, and we will have a Stretch release ready some time over the summer.
This weekend saw my first anniversary at Raspberry Pi, and this blog marks my 100th post written for the company. It would have been easy to let one milestone or the other slide had they not come along hand in hand, begging for some sort of acknowledgement.
The day Liz decided to keep me
So here it is!
Joining the crew
Prior to my position in the Comms team as Social Media Editor, my employment history was largely made up of retail sales roles and, before that, bit parts in theatrical backstage crews. I never thought I would work for the Raspberry Pi Foundation, despite its firm position on my Top Five Awesome Places I’d Love to Work list. How could I work for a tech company when my knowledge of tech stretched as far as dismantling my Game Boy when I was a kid to see how the insides worked, or being the one friend everyone went to when their phone didn’t do what it was meant to do? I never thought about the other side of the Foundation coin, or how I could find my place within the hidden workings that turned the cogs that brought everything together.
Alex J’rassic (@thealexjrassic) on Instagram: “… when suddenly, as if out of nowhere, a new job with a dream company. #raspberrypi #positive…”
A little luck, a well-written though humorous resumé, and a meeting with Liz and Helen later, I found myself the newest member of the growing team at Pi Towers.
Ticking items off the Bucket List
I thought it would be fun to point out some of the chances I’ve had over the last twelve months and explain how they fit within the world of Raspberry Pi. After all, we’re about more than just a $35 credit card-sized computer. We’re a charitable Foundation made up of some wonderful and exciting projects, people, and goals.
High altitude ballooning (HAB)
Skycademy offers educators in the UK the chance to come to Pi Towers Cambridge to learn how to plan a balloon launch and build a payload with an onboard Raspberry Pi and Camera Module, giving teachers the skills they need to take their students on an adventure to near space, with photographic evidence to prove it.
Raspberry Pi (@raspberrypifoundation) on Instagram: “All the screens you need to hunt balloons. We have our landing point and are now rushing to…”
I was fortunate enough to join Sky Captain James, along with Dan Fisher, Dave Akerman, and Steve Randell on a test launch back in August last year. Testing out new kit that James had still been tinkering with that morning, we headed to a field in Elsworth, near Cambridge, and provided Facebook Live footage of the process from payload build to launch…to the moment when our balloon landed in an RAF shooting range some hours later.
“Can we have our balloon back, please, mister?”
Having enjoyed watching Blue Peter presenters send up a HAB when I was a child, I marked off the event on my bucket list with a bold tick, and I continue to show off the photographs from our Raspberry Pi as it reached near space.
Alex J’rassic (@thealexjrassic) on Instagram: “Spend the day launching/chasing a high-altitude balloon. Look how high it went!!! #HAB #ballooning…”
You can find more information on Skycademy here, plus more detail about our test launch day in Dan’s blog post here.
Dear Raspberry Pi Friends…
My desk is slowly filling with stuff: notes, mementoes, and trinkets that find their way to me from members of the community, both established and new to the life of Pi. There are thank you notes, updates, and more from people I’ve chatted to online as they explore their way around the world of Pi.
By plugging myself into social media on a daily basis, I often find hidden treasures that go unnoticed due to the high volume of tags we receive on Facebook, Twitter, Instagram, and so on. Kids jumping off chairs in delight as they complete their first Scratch project, newcomers to the Raspberry Pi shedding a tear as they make an LED blink on their kitchen table, and seasoned makers turning their hobby into something positive to aid others.
It’s wonderful to join in the excitement of people discovering a new skill and exploring the community of Raspberry Pi makers: I’ve been known to shed a tear as a result.
Meeting educators at Bett, chatting to teen makers at makerspaces, and sharing a cupcake or three at the birthday party have been incredible opportunities to get to know you all.
You’re all brilliant.
The Queens of Robots, both shoddy and otherwise
Last year we welcomed the Queen of Shoddy Robots, Simone Giertz, to Pi Towers, where we chatted about making, charity, and space while wandering the colleges of Cambridge and hanging out with flat Tim Peake.
Ahhhh!!! I still can’t believe I got to hang out and make stuff at the @Raspberry_Pi towers!! Thank you thank you!!
Meeting such wonderful, exciting, and innovative YouTubers was a fantastic inspiration to work on my own projects and to try to do more to help others discover ways to connect with tech through their own interests.
Those ‘wow’ moments
Every Raspberry Pi project I see on a daily basis is awesome. The moment someone takes an idea and does something with it is, in my book, always worthy of awe and appreciation. Whether it be the aforementioned flashing LED, or sending Raspberry Pis to the International Space Station, if you have turned your idea into reality, I applaud you.
Some of my favourite projects over the last twelve months have not only made me say “Wow!”, they’ve also inspired me to want to do more with myself, my time, and my growing maker skills.
Great to meet @alexjrassic today and nerd out about @Raspberry_Pi and weather balloons and @Space_Station and all things #edtech ⛅🛰🤖
Projects such as Museum in a Box, a wonderful hands-on learning aid that brings the world to the hands of children across the globe, honestly made me tear up as I placed a miniaturised 3D-printed Virginia Woolf onto a wooden box and gasped as she started to speak to me.
Jill Ogle’s Let’s Robot project had me in awe as Twitch-controlled Pi robots tackled mazes, attempted to cut birthday cake, or swung to slap Jill in the face over webcam.
The next twelve months
Despite Eben jokingly firing me near-weekly across Twitter, or Philip giving me the ‘Dad glare’ when I pull wires and buttons out of a box under my desk to start yet another project, I don’t plan on going anywhere. Over the next twelve months, I hope to continue discovering awesome Pi builds, expanding on my own skills, and curating some wonderful projects for you via the Raspberry Pi blog, the Raspberry Pi Weekly newsletter, my submissions to The MagPi Magazine, and the occasional video interview or two.
It’s been a pleasure. Thank you for joining me on the ride!
Looking to share your day, your event, or the goings-on inside your nature box live on the internet via a Raspberry Pi? Then look no further, for Alex Ellis has all you need to get started with YouTube live-streaming from your Pi.
The YouTube live dashboard. Image c/o Alex Ellis
If you spend any time on social media, be it Facebook, Instagram, YouTube, or Twitter, chances are you’ve been notified of someone ‘going live’.
Live-streaming video on social platforms has become almost ubiquitous, whether it’s content by brands, celebrities, or your cousin or nan – everyone is doing it.
Carrie Anne and Alex offer up a quick tour of the Pi Towers lobby while trying to figure out how Facebook Live video works.
YouTube live-streaming with Alex Ellis and Docker
In his tutorial, Alex demonstrates a straightforward approach to live-streaming via a Raspberry Pi with the help of a Docker image of FFmpeg that he has built. He says that with the image, instead of “having to go through lots of manual steps, we can type in a handful of commands and get started immediately.”
Why is the Docker image so helpful?
As Alex explains on his blog, if you want to manually configure your Raspberry Pi Zero for YouTube live-streaming, you need to dedicate more than a few hours of your day.
Normally this would have involved typing in many manual CLI commands and waiting up to 9 hours for some video encoding software (ffmpeg) to compile itself.
Get anything wrong (like Alex did the first time) and you have to face another nine hours of compilation time before you’re ready to start streaming – not ideal if your project is time-sensitive.
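To give a sense of what the Docker image spares you, here is a rough sketch of the manual route: building FFmpeg from source on Raspbian. The package list and configure flags below are illustrative, not taken from Alex’s guide:

    # Install build dependencies (an illustrative selection)
    sudo apt-get update
    sudo apt-get install -y build-essential git pkg-config libx264-dev

    # Fetch and build FFmpeg from source -- this is the step that can
    # take many hours on the Pi Zero's single ARMv6 core
    git clone --depth 1 https://git.ffmpeg.org/ffmpeg.git
    cd ffmpeg
    ./configure --enable-gpl --enable-libx264 --enable-nonfree
    make        # no parallel speed-up to be had on a single core
    sudo make install

One slip anywhere in that process and, as Alex discovered, it’s back to square one.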
See you in 8-12 hours? Building ffmpeg on my @Raspberry_Pi #pizero with @docker
Using the Docker image
In his tutorial, Alex uses a Raspberry Pi Zero and advises that the project will work with either Raspbian Jessie Lite or PIXEL. Once you’ve installed Docker, you can pull the FFmpeg image he has created directly to your Pi from the Docker Hub. (We advise that while doing so, you should feel grateful to Alex for making the image available and saving you so much time.)
It goes without saying that you’ll need a YouTube account in order to live-stream to YouTube; go to the YouTube live streaming dashboard to obtain a streaming key.
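Put together, the whole workflow looks something like the sketch below. The image name here is a placeholder and the capture settings are illustrative – check Alex’s blog for the exact image tag and commands he publishes:

    # Pull the pre-built FFmpeg image from the Docker Hub
    # (placeholder name -- use the one from Alex's tutorial)
    docker pull example/ffmpeg-armhf

    # Capture from the Pi Camera with raspivid and pipe the H.264 video
    # into FFmpeg running in the container, pushing it to YouTube's RTMP
    # ingest endpoint. Replace <STREAM_KEY> with the key from your
    # YouTube live dashboard. YouTube expects an audio track, so a
    # silent one is generated with anullsrc.
    raspivid -o - -t 0 -w 1280 -h 720 -fps 25 -b 2500000 | \
      docker run --rm -i example/ffmpeg-armhf \
        ffmpeg -f h264 -i - \
               -f lavfi -i anullsrc=r=44100:cl=stereo \
               -c:v copy -c:a aac -f flv \
               rtmp://a.rtmp.youtube.com/live2/<STREAM_KEY>

Copying the video stream rather than re-encoding it (-c:v copy) keeps the CPU load manageable on a Pi Zero, since the camera hardware has already done the H.264 encoding.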
Get live streaming to @YouTube with this new weekend project and guide using your @Raspberry_Pi and @docker. https://t.co/soqZ9D9jbS
For a comprehensive breakdown of how to stream to YouTube via a Raspberry Pi, head to Alex’s blog. You’ll also find plenty of other Raspberry Pi projects there to try out.
Why live-stream from a Raspberry Pi?
We see more and more of our community members build Raspberry Pi projects that involve video capture. The minute dimensions of the Raspberry Pi Zero and Zero W make them ideal for fitting into robots, nature boxes, dash cams, and more. What better way to get people excited about your video than to share it with them live?
If you have used a Raspberry Pi to capture or stream footage, make sure to link to your project in the comments below. And if you give Alex’s Docker image a go, do let us know how you get on.
This column is from The MagPi issue 53. You can download a PDF of the full issue for free, or subscribe to receive the print edition in your mailbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve its charitable goals.
Let’s Robot streams twice a week, Tuesdays and Thursdays, and allows the general public to control a team of robots within an interactive set, often consisting of mazes, clues, challenges, and even the occasional foe. Users work together via the Twitch.tv platform, sending instructions to the robots in order to navigate their terrain and complete the set objectives.
Let’s Robot aims to change the way we interact with television, putting the viewer in the driving seat.
Aylobot, the first robot of the project, boasts a LEGO body, while Ninabot, the more advanced of the two, has a gripper that allows users more interaction. Both robots have their own cameras streaming to Twitch, so those in control can see what they’re up to on a more personal level; several new additions have since joined the robot team, each with their own unique skill.
Twice a week, viewers take control of the robots for the chance to complete tasks such as force-feeding the intern, attempting to write party invitations, and battling in boss fights.
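If you’re curious how chat-driven control works in principle, the sketch below shows the general idea: read Twitch chat over IRC and map recognised commands to motor actions. This is purely illustrative and not the Let’s Robot code – the token, channel, and commands are placeholders, and a real bot would also need to answer Twitch’s periodic PING messages with PONG to stay connected:

    #!/bin/bash
    # Illustrative only: connect to Twitch chat (IRC) and react to
    # simple movement commands typed by viewers.
    {
      echo "PASS oauth:<YOUR_TOKEN>"
      echo "NICK examplebot"
      echo "JOIN #examplechannel"
      sleep infinity                  # keep the connection open
    } | nc irc.chat.twitch.tv 6667 | while read -r line; do
      case "$line" in
        *"!forward"*) echo "motors: forward" ;;  # call your GPIO/motor code here
        *"!left"*)    echo "motors: left"    ;;
        *"!right"*)   echo "motors: right"   ;;
      esac
    done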
Let’s Robot is the brainchild of Jillian Ogle, who originally set out to make “the world’s first interactive live show using telepresence robots collaboratively controlled by the audience”. However, Jill discovered quite quickly that the robots needed to complete the project simply didn’t exist to the standard required… and so Let’s Robot was born.
After researching various components for the task, Jill decided upon the Raspberry Pi, and it’s this small SBC that now exists within the bodies of Aylobot, Ninabot, and the rest of the Let’s Robot family.
“Post-Its I drew for our #LetsRobot subscribers. We put these in the physical sets made for the robots. I still have a lot more to draw…”
In her previous life, Jill worked in art and game design, including a role as art director for Playdom, a subsidiary of Disney Interactive; she moved on to found Aylo Games in 2013 and Let’s Robot in 2015. The hardware side of the builds has been something of a recently discovered skill, with Jill admitting, “Anything I know about hardware I’ve picked up in the last two years while developing this project.”
Social media funtimes
More recently, as Let’s Robot continues to grow, Jill can be found sharing the antics of the robots across social media, documenting their quests – such as the hilarious attempt to create party invites, and the more recent Hillarybot vs Trumpbot battle, in which robots with extendable pin-mounted arms fought to pop each other’s balloon heads.
Gotta catch ’em all
Alongside the robots, Jill has created several other projects that both add to the interactive experience of Let’s Robot and comment on other social trends out in the world. Most notable is the Pokémon Go Robot: originally a desk-bound robot arm that simulated the throw of an on-screen Poké Ball, it later gained a rover body, allowing it to handle the terrain of the outside world and hunt down its pocket-monster prey. Paired with the Livestream Goggles, it lets viewers join in the fun.
It’s also worth noting other builds, such as the WiFi Livestream Goggles Jill can be seen sporting across several social media posts. With a Raspberry Pi camera fitted between the wearer’s eyes, the goggles let viewers witness Jill’s work from her own perspective, bringing a true eye-level view alongside the Let’s Robot bots. It’s a great build, especially given how open the Let’s Robot team are about their continued work and progression. She has also created internet-controlled LED eyebrows – see the video!
And finally, one project we’re eager to see completed is the ‘in production’ Pi-powered transparent HUD. By incorporating refractive acrylic, Jill aims to create a see-through display that lets her read user comments in the Twitch live-stream chat without turning her eyes to a separate monitor.
Since the publication of this article in The MagPi magazine, Jill and the Let’s Robot team have continued to grow their project. There are some interesting and exciting developments ahead – we’ll cover their progress in a future blog.