Has your fitness suffered during lockdown? Have you been able to keep up diligently with your usual running routine? Maybe you found it easy to recreate your regular gym classes in your lounge with YouTube coaches. Or maybe, like a lot of us, you’ve not felt able to do very much at all, and needed a really big push to keep moving.
Maker James Wong took to Raspberry Pi to develop something that would hold him accountable for his daily HIIT workouts, and hopefully keep his workouts on track while alone in lockdown.
What is a HIIT workout?
HIIT is the best kind of exercise, in that it doesn’t last long and it’s effective. You do short bursts of high-intensity physical movement between short, regular rest periods. HIIT stands for High Intensity Interval Training.
James was attracted to HIIT during lockdown as it didn’t require any gym visits or expensive exercise equipment. He had access to endless online training sessions, but felt he needed that extra level of accountability to make sure he kept up with his at-home fitness regime. Hence, HIIT Pi.
So what does HIIT Pi actually do?
HIIT Pi is a web app that uses machine learning on Raspberry Pi to help track your workout in real time. Users can interact with the app via any web browser running on the same local network as the Raspberry Pi, be that on a laptop, tablet, or smartphone.
HIIT Pi is simple in that it only does two things:
Uses computer vision to automatically capture and track detected poses and movement
Scores them according to a set of rules and standards
So, essentially, you’ve got a digital personal trainer in the room monitoring your movements and letting you know whether they’re up to standard and whether you’re likely to achieve your fitness goals.
James calls HIIT Pi an “electronic referee”, and we agree that if we had one of those in the room while muddling through a Yoga With Adriene session on YouTube, we would try a LOT harder.
How does it work?
A Raspberry Pi camera module streams raw image data from the sensor at roughly 30 frames per second. James devised a custom recording stream handler that feeds frames from the video stream into a pose estimation model, spitting out pose confidence scores for pre-set keypoint position coordinates.
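James’s actual scoring rules live in his repo, but the core idea of turning per-keypoint confidences into a pose score can be sketched in a few lines of Python. The keypoint names and the 0.5 threshold below are illustrative assumptions, not HIIT Pi’s real rules:

```python
# Illustrative sketch: score a detected pose from per-keypoint confidences.
# The keypoint set and threshold are assumptions, not HIIT Pi's real rules.

KEYPOINTS = ["nose", "left_shoulder", "right_shoulder",
             "left_hip", "right_hip", "left_knee", "right_knee"]

def pose_score(confidences, threshold=0.5):
    """Average confidence over tracked keypoints; the pose only
    'counts' if every keypoint clears the detection threshold."""
    values = [confidences.get(k, 0.0) for k in KEYPOINTS]
    detected = all(v >= threshold for v in values)
    return sum(values) / len(values), detected
```

A real implementation would run this on every frame coming off the camera and accumulate the results into per-exercise rep counts.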
James’s original project post details the inner workings. You can also grab the code needed to create your own at-home Raspberry Pi personal trainer.
Machine learning can sound daunting even for experienced Raspberry Pi hobbyists, but Microsoft and Adafruit Industries are determined to make it easier for everyone to have a go. Microsoft’s Lobe tool takes the stress out of training machine learning models, and Adafruit have developed an entire kit around their BrainCraft HAT, featuring Raspberry Pi 4 and a Raspberry Pi Camera, to get your own machine learning project off to a flying start.
Adafruit’s BrainCraft HAT
Adafruit’s BrainCraft HAT fits on top of Raspberry Pi 4 and makes it really easy to connect hardware and debug machine learning projects. The 240 x 240 colour display screen also lets you see what the camera sees. Two microphones allow for audio input, and access to the GPIO means you can connect things like relays and servos, depending on your project.
Microsoft Lobe is a free tool for creating and training machine learning models that you can deploy almost anywhere. The hardest part of machine learning is arguably creating and training a new model, so this tool is a great way for newbies to get stuck in, as well as being a fantastic time-saver for people who have more experience.
Lady Ada demonstrated Bakery: a machine learning model that uses an Adafruit BrainCraft HAT, a Raspberry Pi camera, and Microsoft Lobe. Watch how easy it is to train a new machine learning model in Microsoft Lobe from this point in the Microsoft Build Keynote video.
Bakery identifies different baked goods based on images taken by the Raspberry Pi camera, then automatically identifies and prices them, in the absence of barcodes or price tags. You can’t stick a price tag on a croissant. There’d be flakes everywhere.
Running this project on Raspberry Pi means that Lady Ada was able to hook up lots of other useful tools. In addition to the Raspberry Pi camera and the HAT, she is using:
Three LEDs that glow green when an object is detected
A speaker and some text-to-speech code that announces which object is detected
A receipt printer that prints out the product name and the price
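The glue logic tying the classifier’s prediction to those outputs can be sketched in plain Python. The product names and prices here are made up for illustration; the real build wires the result to the LEDs, speaker, and receipt printer:

```python
# Sketch of the glue logic around a bakery classifier's output.
# Product names and prices are made-up example values.

PRICES = {"croissant": 2.50, "donut": 1.75, "muffin": 2.25}

def handle_detection(label):
    """Given a predicted label, return the speech line and receipt text
    (the real build would also light the LEDs and drive the printer)."""
    price = PRICES.get(label)
    if price is None:
        return None  # nothing recognised: LEDs stay off
    speech = f"That looks like a {label}."
    receipt = f"{label.upper():<12} ${price:.2f}"
    return speech, receipt
```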
All of this running on Raspberry Pi, and made super easy with Microsoft Lobe and Adafruit’s BrainCraft HAT. Adafruit’s Microsoft Machine Learning Kit for Lobe contains everything you need to get started.
Watch the Microsoft Build keynote
And finally, watch Microsoft CTO Kevin Scott introduce Limor Fried, aka Lady Ada, owner of Adafruit Industries. Lady Ada joins remotely from the Adafruit factory in Manhattan, NY, to show how the BrainCraft HAT and Lobe work to make machine learning accessible.
Maker Jen Fox took to hackster.io to share a Raspberry Pi–powered trash classifier that tells you whether the trash in your hand is recyclable, compostable, or just straight-up garbage.
Jen reckons this project is beginner-friendly, as you don’t need any code to train the machine learning model, just a little to load it on Raspberry Pi. It’s also a pretty affordable build, costing less than $70 including a Raspberry Pi 4.
Raspberry Pi 4 Model B
Raspberry Pi Camera Module
Adafruit push button
The code-free machine learning model is created using Lobe, a desktop tool that automatically trains a custom image classifier based on what objects you’ve shown it.
Training the image classifier
Basically, you upload a tonne of photos and tell Lobe what object each of them shows. Jen told the empty classification model which photos were of compostable waste, which were of recyclable items, and which were of garbage or bio-hazardous waste. Of course, as Jen says, “the more photos you have, the more accurate your model is.”
Loading up Raspberry Pi
As promised, you only need a little bit of code to load the image classifier onto your Raspberry Pi. The Raspberry Pi Camera Module acts as the image classifier’s “eyes” so Raspberry Pi can find out what kind of trash you hold up for it.
The push button and LEDs are wired up to the Raspberry Pi GPIO pins, and they work together with the camera and light up according to what the image classifier “sees”.
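Routing the classifier’s verdict to the right LED is a small lookup. The class labels and GPIO pin numbers below are assumptions for illustration, not necessarily the ones Jen used:

```python
# Sketch: map the classifier's label to one of three indicator LEDs.
# Labels and BCM pin numbers are illustrative assumptions.

LED_PINS = {"compost": 17, "recycle": 27, "garbage": 22}

def led_for(label):
    """Return the GPIO pin for the LED matching the predicted class,
    falling back to the 'garbage' LED for anything unrecognised."""
    return LED_PINS.get(label, LED_PINS["garbage"])
```

On the Raspberry Pi itself, a button press would trigger a camera capture, the image would go through the Lobe-trained model, and the pin returned here would be driven high.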
You’ll want to create a snazzy case so your trash classifier looks good mounted on the wall. Jen cut holes in a cardboard box to make sure that the camera could “see” out, the user can see the LEDs, and the push button is accessible. Remember to leave room for Raspberry Pi’s power supply to plug in.
The trick with spy devices is to make sure they look as much like the object they’re hidden inside as possible. Where Raspberry Pi comes in is making sure the foam camera can be used as a real photo-taking camera too, to throw the baddies off the scent if they start fiddling with your spyware.
The foam-firing bit of Nathan’s invention was relatively simple to recreate – a modified chef’s squirty cream dispenser, hidden inside a camera-shaped box, gets the job done.
Ruth and Shawn drew a load of 3D-printed panels to mount on the box frame in the image above. One of those cool coffee cups that look like massive camera lenses hides the squirty cream dispenser and gives this build an authentic camera look.
Techy bits from the build:
Mini display screen
The infrared LED is mounted next to the camera module and switches on when it gets dark, giving you night vision.
The Raspberry Pi computer and its power bank are crammed inside the box-shaped part, with the camera module and infrared LED mounted to peek out of custom-made holes in one of the 3D-printed panels on the front of the box frame.
The foam-firing chef’s thingy is hidden inside the big fake lens, and it’s wedged inside so that when you lift the big fake lens, the lever on the chef’s squirty thing is depressed and foam fires out of a tube near to where the camera lens and infrared LED peek out on the front panel of the build.
High-school student Eleanor Sigrest successfully crowdfunded her way onto a zero-G flight to test her latest Raspberry Pi-powered project. NASA Goddard engineers peer reviewed Eleanor’s experimental design, which detects unwanted movement (or ‘slosh’) in spacecraft fluid tanks.
The apparatus features an accelerometer to precisely determine the moment of zero gravity, along with 13 Raspberry Pis and 12 Raspberry Pi cameras to capture the slosh movement.
What’s wrong with slosh?
The Broadcom Foundation shared a pretty interesting minute-by-minute report on Eleanor’s first parabolic flight and how she got everything working. But, in a nutshell…
You don’t want the fluid in your space shuttle tanks sloshing around too much. It’s a mission-ending problem. Slosh occurs on take-off and also in microgravity during manoeuvres, so Eleanor devised this novel approach to managing it in place of the costly, heavy subsystems currently used on board spacecraft.
Eleanor wanted to prove that the fluid inside tanks treated with superhydrophobic and superhydrophilic coatings settled quicker than in uncoated tanks. And she was right: settling times were reduced by 73% in some cases.
At just 13 years old, Eleanor won the Samueli Prize at the 2016 Broadcom MASTERS for her mastery of STEM principles and team leadership during a rigorous week-long competition. High praise came from Paula Golden, President of Broadcom Foundation, who said: “Eleanor is the epitome of a young woman scientist and engineer. She combines insatiable curiosity with courage: two traits that are essential for a leader in these fields.”
That week-long experience also included a Raspberry Pi Challenge, and Eleanor explained: “During the Raspberry Pi Challenge, I learned that sometimes the simplest solutions are the best. I also learned it’s important to try everyone’s ideas because you never know which one might work the best. Sometimes it’s a compromise of different ideas, or a compromise between complicated and simple. The most important thing is to consider them all.”
Earlier this year, we released the Raspberry Pi High Quality Camera, a brand-new 12.3 megapixel camera that allows you to use C- and CS-mount lenses with Raspberry Pi boards.
We love it. You love it.
How do we know you love it? Because the internet is now full of really awesome 3D-printable cases and add-ons our community has created in order to use their High Quality Camera out and about…or for Octoprint…or home security…or SPACE PHOTOGRAPHY, WHAT?!
We thought it would be fun to show you some of the 3D designs we’ve seen pop up on sites like Thingiverse and MyMiniFactory, so that anyone with access to a 3D printer can build their own camera too!
Adafruit did a thing, obvs
Shout out to our friends at Adafruit for this really neat, retro-looking camera case designed by the Ruiz Brothers. The brown filament used for the casing is so reminiscent of the leather bodies of SLRs from my beloved 1980s childhood that I can’t help but be drawn to it. And, with snap-fit parts throughout, you can modify this case model as you see fit. Not bad. Not bad at all.
Nikon to Raspberry Pi
While the Raspberry Pi High Quality Camera is suitable for C- and CS-mount lenses out of the box, this doesn’t mean you’re limited to only these sizes! There’s a plethora of C- and CS-mount adapters available on the market, and you can also 3D print your own adapter.
Thingiverse user UltiArjan has done exactly that and designed this adapter for using Nikon lenses with the High Quality Camera. Precision is key here to get a snug thread, so you may have to fiddle with your printer settings to get the right fit.
If you’re not interested in a full-body camera case and just need something to attach A to B, this minimal adapter for the Raspberry Pi Zero will be right up your street.
Designer ed7coyne put this model together in order to use Raspberry Pi Zero as a webcam, and, according to Cura on my laptop, it should only take about 2 hours to print at 0.1mm layer height with supports. In fact, since I’ve got Cura open already…
3D print a Raspberry Pi High Quality Camera?!
Not a working one, of course, but if you’re building something around the High Quality Camera and want to make sure everything fits without putting the device in jeopardy, you could always print a replica for prototyping!
Thingiverse user tmomas produced this scale replica of the Raspberry Pi High Quality Camera with the help of reference photos and technical drawings, and a quick search online will uncover similar designs for replicas of other Raspberry Pi products you might want to use while building a prototype.
Bonus content alert
We made this video for HackSpace magazine earlier this year, and it’s a really handy resource if you’re new to the 3D printing game.
…I wasn’t lying when I said I was going to print ed7coyne’s minimal adapter.
It’s been a long lockdown for one of our favourite makers, Pi & Chips. Like most of us (probably), they have turned their hand to training small animals that wander into their garden to pass the time — in this case, pigeons. I myself enjoy raising my glass to the squirrel that runs along my back fence every evening at 7pm.
Of course, Pi & Chips has taken this one step further and created a food dispenser including motion-activated camera with a Raspberry Pi 3B+ to test the intelligence of these garden critters and capture their efforts live.
Looking into the cognitive behaviour of birds (and finding the brilliantly titled paper Maladaptive gambling by pigeons), Pi & Chips discovered that pigeons can, with practice, recognise objects including buttons and then make the mental leap to realise that touching these buttons actually results in something happening. So they set about building a project to see this in action.
Enter the ‘SmartFrank 3000’, named after the bossiest bird to grace Pi & Chips’s shed roof over the summer.
Steppers and servos
The build itself is a simple combo of a switch and dispenser. But it quickly became apparent that any old servo wasn’t going to be up to the job — it couldn’t move fast enough to open and close a hatch quickly or strongly enough.
Running a few tests with a stepper motor confirmed that this was the perfect choice, as it could move quickly enough, and was strong enough to hold back a fair weight of seed when not in operation.
A 3D-printed flap for the stepper was also fashioned, plus a nozzle that fits over the neck of a two-litre drinks bottle, and some laser-cut pieces to make a frame to hold it all together.
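The write-up doesn’t include the stepper code, but a common way to drive a small 4-wire stepper through a controller board is to cycle a half-step sequence of coil patterns. This sketch just generates the patterns; pin numbers, step counts, and timing are assumptions, not details from the build:

```python
# Illustrative half-step sequence for a 4-wire stepper driven via a
# controller board. Step counts and direction handling are assumptions.

HALF_STEPS = [
    (1, 0, 0, 0), (1, 1, 0, 0), (0, 1, 0, 0), (0, 1, 1, 0),
    (0, 0, 1, 0), (0, 0, 1, 1), (0, 0, 0, 1), (1, 0, 0, 1),
]

def hatch_sequence(steps, reverse=False):
    """Yield coil patterns to rotate the flap open (or closed with
    reverse=True); the real build would write each pattern to the
    GPIO pins with a short sleep between them."""
    order = list(reversed(HALF_STEPS)) if reverse else HALF_STEPS
    for i in range(steps):
        yield order[i % len(order)]
```

Holding the last pattern energised is what lets the stepper resist the weight of the seed when the hatch is closed.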
Now for the switch that Frank the pigeon was going to have to touch if it wanted any bird seed. Pi & Chips came up with this design made from 3mm ply and some sponge as the spring.
They soldered some wires to a spring clip from an old photo frame and added a bolt and two nuts. The second nut allowed very fine adjustment of the distance to make sure the switch could be triggered by as light a touch as possible.
Behind the scenes
Behind the scenes there’s a Raspberry Pi 3B+ running the show, together with a motor controller board for the stepper motor. This board runs from its own battery pack, as it needs 12V power, which is more than Raspberry Pi can supply directly. A Raspberry Pi Camera Module has also been added and runs this motion detection script to start recording whenever a likely bird candidate steps up to the plate for dinner. Hopefully, we can soon get some footage of Frank the pigeon learning and earning!
When taking photos, most of us simply like to press the shutter button on our cameras and phones so that a viewable image is produced almost instantaneously, usually encoded in the well-known JPEG format. However, there are some applications where a little more control over the production of that JPEG is desirable. For instance, you may want more or less de-noising, or you may feel that the colours are not being rendered quite right.
This is where raw (sometimes RAW) files come in. A raw image in this context is a direct capture of the pixels output from the image sensor, with no additional processing. Normally this is in a relatively standard format known as a Bayer image, named after Bryce Bayer who pioneered the technique back in 1974 while working for Kodak. The idea is not to let the on-board hardware ISP (Image Signal Processor) turn the raw Bayer image into a viewable picture, but instead to do it offline with an additional piece of software, often referred to as a raw converter.
The raw image is sometimes likened to the old photographic negative, and whilst many camera vendors use their own proprietary formats, the most portable form of raw file is the Digital Negative (or DNG) format, defined by Adobe in 2004. The question at hand is how to obtain DNG files from Raspberry Pi, in such a way that we can process them using our favourite raw converters.
Obtaining a raw image from Raspberry Pi
Many readers will be familiar with the raspistill application, which captures JPEG images from the attached camera. raspistill includes the -r option, which appends all the raw image data to the end of the JPEG file. JPEG viewers will still display the file as normal but ignore the (many megabytes of) raw data tacked on the end. Such a “JPEG+RAW” file can be captured using the terminal command:
raspistill -r -o image.jpg
Unfortunately this JPEG+RAW format is merely what comes out of the camera stack and is not supported by any raw converters. So to make use of it we will have to convert it into a DNG file.
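To see how the two parts sit in one file, you can split them apart yourself. On these captures the appended raw block begins with a ‘BRCM’ header; treat that marker as an assumption and check it against your own files:

```python
# Sketch: split a raspistill "JPEG+RAW" capture into its two parts.
# The appended raw block on these files starts with a 'BRCM' header;
# verify this against your own captures before relying on it.

def split_jpeg_raw(data: bytes):
    """Return (jpeg_bytes, raw_bytes); raw_bytes is None when the
    file has no appended raw section."""
    marker = data.find(b"BRCM")
    if marker == -1:
        return data, None
    return data[:marker], data[marker:]
```

This is essentially the first thing a converter like PyDNG has to do before it can repackage the Bayer data as a DNG.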
The PyDNG utility converts the Raspberry Pi’s native JPEG+RAW files into DNGs. PyDNG can be installed from github.com/schoolpost/PyDNG, where more complete instructions are available. In brief, we need to perform the following steps:
git clone https://github.com/schoolpost/PyDNG
cd PyDNG
pip3 install src/. # note that PyDNG requires Python3
PyDNG can be used as part of larger Python scripts, or it can be run stand-alone. Continuing the raspistill example from before, we can enter in a terminal window:
python3 examples/utility.py image.jpg
The resulting DNG file can be processed by a variety of raw converters. Some are free (such as RawTherapee or dcraw, though the latter is no longer officially developed or supported), and there are many well-known proprietary options (Adobe Camera Raw or Lightroom, for instance). Perhaps users will post in the comments any that they feel have given them good results.
White balancing and colour matrices
Now, one of the bugbears of processing Raspberry Pi raw files up to this point has been the problem of getting sensible colours. Previously, the images have been rendered with a sickly green cast, simply because no colour balancing is being done and green is normally the most sensitive colour channel. In fact it’s even worse than this, as the RGB values in the raw image merely reflect the sensitivity of the sensor’s photo-sites to different wavelengths, and do not a priori have more than a general correlation with the colours as perceived by our own eyes. This is where we need white balancing and colour matrices.
Correct white balance multipliers are required if neutral parts of the scene are to look, well, neutral. We can use raspistill’s guesstimate of them, found in the JPEG+RAW file (or you can measure your own on a neutral part of the scene, like a grey card). Matrices and look-up tables are then required to convert colour from ‘camera’ space to the final colour space of choice, mostly sRGB or Adobe RGB.
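The white balance step itself is just a per-channel multiplication, with green as the reference channel. The pixel values and gains below are made-up example numbers; real gains come from the capture’s metadata or a grey-card measurement:

```python
# Illustrative white balancing: scale the R and B channels so a grey
# patch comes out neutral. Values and gains are example numbers, not
# ones taken from a real capture.

def white_balance(rgb, red_gain, blue_gain):
    """Apply per-channel multipliers (green is the reference, gain 1.0),
    clipping the result to the valid [0, 1] range."""
    r, g, b = rgb
    clip = lambda v: min(max(v, 0.0), 1.0)
    return (clip(r * red_gain), clip(g), clip(b * blue_gain))

# A greenish grey patch straight off the sensor becomes neutral once
# the right gains are applied:
balanced = white_balance((0.30, 0.50, 0.35),
                         red_gain=0.5 / 0.30, blue_gain=0.5 / 0.35)
```

The colour matrices and look-up tables then map these balanced ‘camera space’ values into sRGB or Adobe RGB, which is the part Jack Hogan’s measurements and PyDNG now handle.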
My thanks go to forum contributors Jack Hogan for measuring these colour matrices, and to Csaba Nagy for implementing them in the PyDNG tool. The results speak for themselves.
Previous attempts at raw conversion are on the left; the results using the updated PyDNG are on the right.
For those familiar with DNG files, we include links to DCP (DNG Camera Profile) files (warning: binary format). You can try different ones out in raw converters, and we would encourage users to experiment, to perhaps create their own, and to share their results!
This is a basic colour profile baked into PyDNG, and is the one shown in the results above. It’s sufficiently small that we can view it as a JSON file.
8 Bits and a Byte created this automatic bubble machine, which is powered and controlled by a Raspberry Pi and can be switched on via the internet by fans of robots and/or bubbles.
They chose a froggy-shaped bubble machine, but you can repurpose whichever type you desire; it’s just easier to adapt a model running on two AA batteries.
Before the refurb, 8 Bits and a Byte’s battery-powered bubble machine was controlled by a manual switch, which turned the motor on and off inside the frog. If you wanted to watch the motor make the frog burp out bubbles, you needed to flick this switch yourself.
After dissecting their plastic amphibian friend, 8 Bits and a Byte hooked up its motor to Raspberry Pi using a relay module. They point to this useful walkthrough for help with connecting a relay module to Raspberry Pi’s GPIO pins.
Shot on a Raspberry Pi Camera Module, this stop-motion sequence is made up of 180 photos that took two hours to shoot and another hour to process.
The trick lies in the Camera Module enabling you to change the alpha transparency of the overlay image, which is the previous frame. It’s all explained in the official documentation, but basically, the Camera Module’s preview permits multiple layers to be rendered simultaneously: text, image, etc. Being able to change the transparency from the command line means this maker could see how the next frame (or the object) should be aligned. In 2D animation, this process is called ‘onion skinning’.
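The onion-skin effect itself is straightforward alpha compositing: picamera’s `add_overlay()` does the blend in hardware, but this pure-Python per-pixel version illustrates what you see on screen (the 0–255 alpha convention matches picamera’s):

```python
# The maths behind the onion-skin effect: the preview shows the live
# frame with the previous frame composited on top at a given alpha.
# (picamera's add_overlay() does this in hardware; this pure-Python
# blend just illustrates the result.)

def onion_skin(live_px, prev_px, alpha):
    """Blend one RGB pixel: alpha=0 shows only the live frame,
    alpha=255 only the previous frame."""
    a = alpha / 255.0
    return tuple(round(a * p + (1.0 - a) * l)
                 for l, p in zip(live_px, prev_px))
```

Nudging the alpha up and down from the command line is what lets you line the boat up against its ghost from the previous frame.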
So why the Raspberry Pi Camera Module? Redditor /DIY_Maxwell aka Yuksel Temiz explains: “I make stop-motion animations as a hobby, using either my SLR or phone with a remote shutter. In most cases I didn’t need precision, but some animations like this are very challenging because I need to know the exact position of my object (the boat in this case) in each frame. The Raspberry Pi camera was great because I could overlay the previously captured frame into the live preview, and I could quickly change the transparency of the overlay to see how precise the location and how smooth the motion.”
You can easily make simple, linear stop-motion videos by just capturing your 3D printer while it’s doing its thing. Yuksel created a bolting horse (above) in that way. The boat sequence was more complicated though, because it rotates, and because pieces had to be added and removed.
The official docs are really comprehensive and span basic to advanced skill levels. Yuksel even walks you through getting started with the installation of Raspberry Pi OS.
We’ve seen Yuksel’s handiwork before, and this new project was made in part by modifying the code from the open-source microscope (above) they made using Raspberry Pi and LEGO. They’re now planning to make a nice GUI and share the project as an open-source stop-motion animation tool.