Tag Archives: Raspberry Pi High Quality Camera

Add face recognition with Raspberry Pi | Hackspace 38

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/add-face-recognition-with-raspberry-pi-hackspace-38/

It’s hard to comprehend how far machine learning has come in the past few years. You can now use a sub-£50 computer to reliably recognise someone’s face with surprising accuracy.

Although this kind of computing power is normally out of reach of microcontrollers, adding a Raspberry Pi computer to your project with the new High Quality Camera opens up a range of possibilities. From simple alerting applications (‘Mum’s arrived home!’), to dynamically adjusting settings based on the person using the project, there’s a lot of fun to be had.

Here’s a beginner’s guide to getting face recognition up and running.

Face recognition using machine learning is hard work, so the latest, greatest Raspberry Pi 4 is a must

1. Prepare your Raspberry Pi
For face recognition to work well, we’re going to need some horsepower, so we recommend at least a Raspberry Pi 3B+, and ideally a Raspberry Pi 4. The extra memory will make all the difference. To keep as many resources as possible available for our project, we’ve gone for a Raspberry Pi OS Lite installation with no desktop.

Make sure you’re on the network, have set a new password, enabled SSH if you need to, and updated everything with sudo apt -y update && sudo apt -y full-upgrade. Finally, go into settings by running sudo raspi-config and enable the camera in ‘Interfacing Options’.

2. Attach the camera
This project will work well with the original Raspberry Pi Camera, but the new official HQ Camera will give you much better results. Be sure to connect the camera to your Raspberry Pi 4 with the power off. Connect the ribbon cable as instructed at hsmag.cc/HQCameraGetStarted. Once installed, boot up your Raspberry Pi 4 and test that the camera is working. From the command line, run the following:
raspivid -o test.h264 -t 10000
This will record ten seconds of video to your microSD card. If you have an HDMI cable plugged in, you’ll see what the camera can see in real time. Take some time to make sure the focus is correct before proceeding.

3. Install dependencies
The facial recognition library we are using is one that has been maintained for many years by Adam Geitgey. It contains many examples, including Python 3 bindings to make it really simple to build your own facial recognition applications. What is not so easy is the number of dependencies that need to be installed first. There are way too many to list here, and you probably won’t want to type them out, so head over to hsmag.cc/FacialRec so that you can cut and paste the commands. This step will take a while to complete on a Raspberry Pi 4, and significantly longer on a Model 3 or earlier.

4. Install the libraries
Now that we have everything in place, we can install Adam’s applications and Python bindings with a simple, single command:
sudo pip3 install face_recognition
Once installed, there are some examples we can download to try everything out.
cd
git clone --single-branch https://github.com/ageitgey/face_recognition.git
In this repository is a range of examples showing the different ways the software can be used, including live video recognition. Feel free to explore and remix.

5. Example images
The examples come with a training image of Barack Obama. To run the example:
cd ./face_recognition/examples
python3 facerec_on_raspberry_pi.py

On your smartphone, find an image of Obama using your favourite search engine and point it at the camera. Provided the focus and lighting are good, you will see:
“I see someone named Barack Obama!”
If you see a message saying it can’t recognise the face, try a different image or improve the lighting if you can. Also check the camera’s focus, and make sure the distance between the image and the camera is sensible.

Who are you? What even is a name? Can a computer decide your identity?

6. Training time
The final step is to start recognising your own faces. Create a directory and, in it, place some good-quality passport-style photos of yourself or those you want to recognise. You can then edit the facerec_on_raspberry_pi.py script to use those files instead. You’ve now got a robust prototype of face recognition. This is just the beginning: these libraries can also identify ‘generic’ faces, meaning they can detect whether a person is there or not, and identify features such as the eyes, nose, and mouth. There’s a world of possibilities available, starting with these simple scripts. Have fun!
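
As a starting point for that edit, here’s a minimal sketch in the spirit of facerec_on_raspberry_pi.py (not a copy of it): it loads a couple of your own photos, then checks each camera frame against them. The file names and people below are placeholders for your own images.

import face_recognition
import numpy as np
import picamera

# Placeholder training data: swap in your own passport-style photos
known_names = ["Alice", "Bob"]
known_encodings = []
for filename in ["alice.jpg", "bob.jpg"]:
    image = face_recognition.load_image_file(filename)
    known_encodings.append(face_recognition.face_encodings(image)[0])  # assumes one face per photo

camera = picamera.PiCamera()
camera.resolution = (320, 240)
frame = np.empty((240, 320, 3), dtype=np.uint8)  # buffer for RGB captures

while True:
    camera.capture(frame, format="rgb")
    for encoding in face_recognition.face_encodings(frame):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        for name, match in zip(known_names, matches):
            if match:
                print("I see someone named {}!".format(name))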

Issue 38 of Hackspace Magazine is out NOW

Front cover of HackSpace magazine featuring a big striped popcorn bucket filled with maker tools and popcorn

Each month, HackSpace magazine brings you the best projects, tips, tricks and tutorials from the makersphere. You can get it from the Raspberry Pi Press online store, the Raspberry Pi Store in Cambridge, or your local newsagent.

Each issue is free to download from the HackSpace magazine website.

The post Add face recognition with Raspberry Pi | Hackspace 38 appeared first on Raspberry Pi.

Raspberry Pi High Quality security camera

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/raspberry-pi-high-quality-security-camera/

DJ from the element14 community shows you how to build a red-lensed security camera in the style of Portal 2 using the Raspberry Pi High Quality Camera.

The finished camera mounted on the wall

Portal 2 is a puzzle platform game developed by Valve — a “puzzle game masquerading as a first-person shooter”, according to Forbes.

DJ playing with the Raspberry Pi High Quality Camera

Kit list

No code needed!

DJ was pleased to learn that you don’t need to write any code to make your own security camera: you can just use a package called motionEyeOS. All you have to do is download the motionEyeOS image, pop the flashed SD card into your Raspberry Pi, and you’re pretty much good to go.

DJ got everything set up on a 5″ screen attached to the Raspberry Pi

You’ll find that the default resolution is 640×480, so it will show up as a tiny window on your monitor of choice, but that can be amended.

Simplicity

While this build is very simple electronically, the 20-part 3D-printed shell is beautiful. A Raspberry Pi is positioned on a purpose-built platform in the middle of the shell, connected to the Raspberry Pi High Quality Camera, which sits at the front of that shell, peeking out.

All the 3D printed parts ready to assemble

The 5V power supply is routed through the main shell into the base, which mounts the build to the wall. In order to keep the Raspberry Pi cool, DJ made some vent holes in the lens of the shell. The red LED is routed out of the side and sits on the outside body of the shell.

Magnetising

Raspberry Pi 4 (centre) and Raspberry Pi High Quality Camera (right) sat inside the 3D printed shell

This build is also screwless: the halves of the shell have what look like screw holes along the edges, but they are actually 3mm neodymium magnets, so assembly and repair are super easy as everything just pops on and off.

The final picture (that’s DJ!)

You can find all the files you need to recreate this build, or you can ask DJ a question, at element14.com/presents.

The post Raspberry Pi High Quality security camera appeared first on Raspberry Pi.

Raspberry Pi High Quality Camera takes photos through thousands of straws

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/raspberry-pi-high-quality-camera-takes-photos-through-thousands-of-straws/

Adrian Hanft is our favourite kind of maker: weird. He’s also the guy who invented the Lego camera, 16 years ago. This time, he spent more than a year creating what he describes as “one of the strangest cameras you may ever hear about.”

What? Looks normal from here. Massive, but normal

What’s with all the straws?

OK, here’s why it’s weird: it takes photos with a Raspberry Pi High Quality Camera through a ‘lens’ of tiny drinking straws packed together. There are 23,248 straws, to be exact, inside the wooden box-shaped part of the machine above. The camera itself sits at the slim end of the black and white part. The Raspberry Pi, power bank, and controller all sit on top of the wooden box full of straws.

Here’s what an image of Yoda looks like, photographed through that many straws:

Mosaic, but make it techy

Ground glass lenses

The concept isn’t as simple as it looks. As you can see from the images below, if you hold up a load of straws, you can only see the light through a few of them. Adrian turned to older technology for a solution: the viewfinder from an old camera, which has a ground-glass surface that ‘collects’ the light.

Left: looking through straws at light with the naked eye
Right: the same straws viewed through a ground glass lens

Even though Adrian was completely new to both Raspberry Pi and Python, it only took him a week of evenings and weekends to code the software needed to control the Raspberry Pi High Quality Camera.

Long story short, on the left is the final camera, with all the prototypes queued up behind it

An original Nintendo controller runs the show and connects to the Raspberry Pi with a USB adapter. The buttons are mapped to the functions of Adrian’s software.

A super satisfying time-lapse of the straws being loaded

What does the Nintendo controller do?

In his original post, Adrian explains what all the buttons on the controller do in order to create images:

“The Start button launches a preview of what the camera is seeing. The A button takes a picture. The Up and Down buttons increase or decrease the exposure time by 1 second. The Select button launches a gallery of photos so I can see the last photo I took. The Right and Left buttons cycle between photos in the gallery. I am saving the B button for something else in the future. Maybe I will use it for uploading to Dropbox, I haven’t decided yet.”
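
For anyone wanting to try something similar, here’s a hedged sketch of that idea (it isn’t Adrian’s software): read button presses from a USB gamepad with the Python evdev library and map them to picamera actions. The device path and button codes below are placeholders; run evtest, or print event.code, to find the right values for your own adapter.

from evdev import InputDevice, ecodes
import picamera

PAD_PATH = "/dev/input/event0"   # placeholder: your adapter may appear elsewhere
BTN_A, BTN_START = 289, 297      # placeholder button codes for this sketch

pad = InputDevice(PAD_PATH)
with picamera.PiCamera() as camera:
    previewing = False
    shot = 0
    for event in pad.read_loop():
        if event.type != ecodes.EV_KEY or event.value != 1:  # key-down events only
            continue
        if event.code == BTN_START:                          # toggle the live preview
            if previewing:
                camera.stop_preview()
            else:
                camera.start_preview()
            previewing = not previewing
        elif event.code == BTN_A:                            # take a picture
            camera.capture("straw_photo_{:03d}.jpg".format(shot))
            shot += 1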

Adrian made a Lego mount for the Raspberry Pi camera
The Lego mount makes it easy to switch between cameras and lenses

A mobile phone serves as a wireless display so he can keep an eye on what’s going on. The phone communicates, via a VPN app, with the Raspberry Pi connected to the camera.

One of the prototypes in action

Follow Adrian on Instagram to keep up with all the photography captured using the final camera, as well as the prototypes that came before it.

The post Raspberry Pi High Quality Camera takes photos through thousands of straws appeared first on Raspberry Pi.

3D-printable cases for the Raspberry Pi High Quality Camera

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/3d-printable-cases-for-the-raspberry-pi-high-quality-camera/

Earlier this year, we released the Raspberry Pi High Quality Camera, a brand-new 12.3 megapixel camera that allows you to use C- and CS-mount lenses with Raspberry Pi boards.

We love it. You love it.

How do we know you love it? Because the internet is now full of really awesome 3D-printable cases and add-ons our community has created in order to use their High Quality Camera out and about…or for OctoPrint…or home security…or SPACE PHOTOGRAPHY, WHAT?!

The moon, captured by a Raspberry Pi High Quality Camera. Credit: Greg Annandale

We thought it would be fun to show you some of the 3D designs we’ve seen pop up on sites like Thingiverse and MyMiniFactory, so that anyone with access to a 3D printer can build their own camera too!

Adafruit did a thing, obvs

Shout out to our friends at Adafruit for this really neat, retro-looking camera case designed by the Ruiz Brothers. The brown filament used for the casing is so reminiscent of the leather bodies of SLRs from my beloved 1980s childhood that I can’t help but be drawn to it. And, with snap-fit parts throughout, you can modify this case model as you see fit. Not bad. Not bad at all.

Nikon to Raspberry Pi

While the Raspberry Pi High Quality Camera is suitable for C- and CS-mount lenses out of the box, this doesn’t mean you’re limited to only these sizes! There’s a plethora of C- and CS-mount adapters available on the market, and you can also 3D print your own adapter.

Thingiverse user UltiArjan has done exactly that and designed this adapter for using Nikon lenses with the High Quality Camera. Precision is key here to get a snug thread, so you may have to fiddle with your printer settings to get the right fit.

And, for the Canon users out there, here’s Zimbo1’s adapter for Canon EF lenses!

Raspberry Pi Zero minimal adapter

If you’re not interested in a full-body camera case and just need something to attach A to B, this minimal adapter for the Raspberry Pi Zero will be right up your street.

Designer ed7coyne put this model together in order to use Raspberry Pi Zero as a webcam, and according to Cura on my laptop, it should only take about two hours to print at 0.1 with supports. In fact, since I’ve got Cura open already…

3D print a Raspberry Pi High Quality Camera?!

Not a working one, of course, but if you’re building something around the High Quality Camera and want to make sure everything fits without putting the device in jeopardy, you could always print a replica for prototyping!

Thingiverse user tmomas produced this scale replica of the Raspberry Pi High Quality Camera with the help of reference photos and technical drawings, and a quick search online will uncover similar designs for replicas of other Raspberry Pi products you might want to use while building a prototype.

Bonus content alert

We made this video for HackSpace magazine earlier this year, and it’s a really handy resource if you’re new to the 3D printing game.

Also…

…I wasn’t lying when I said I was going to print ed7coyne’s minimal adapter.

The post 3D-printable cases for the Raspberry Pi High Quality Camera appeared first on Raspberry Pi.

Processing raw image files from a Raspberry Pi High Quality Camera

Post Syndicated from David Plowman original https://www.raspberrypi.org/blog/processing-raw-image-files-from-a-raspberry-pi-high-quality-camera/

When taking photos, most of us simply like to press the shutter button on our cameras and phones so that a viewable image is produced almost instantaneously, usually encoded in the well-known JPEG format. However, there are some applications where a little more control over the production of that JPEG is desirable. For instance, you may want more or less de-noising, or you may feel that the colours are not being rendered quite right.

This is where raw (sometimes RAW) files come in. A raw image in this context is a direct capture of the pixels output from the image sensor, with no additional processing. Normally this is in a relatively standard format known as a Bayer image, named after Bryce Bayer who pioneered the technique back in 1974 while working for Kodak. The idea is not to let the on-board hardware ISP (Image Signal Processor) turn the raw Bayer image into a viewable picture, but instead to do it offline with an additional piece of software, often referred to as a raw converter.

A Bayer image records only one colour at each pixel location, in the pattern shown
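
To make the idea concrete, here’s a small illustrative sketch (not part of the article’s toolchain) that splits a Bayer mosaic into its colour planes with NumPy. The RGGB layout and the sensor dimensions are assumptions made purely for the example.

import numpy as np

bayer = np.zeros((3040, 4056), dtype=np.uint16)  # assumed sensor resolution, one value per photosite

# With an assumed RGGB layout, every 2x2 block holds one red, two green
# and one blue sample; slicing pulls out each plane at half resolution.
r  = bayer[0::2, 0::2]
g1 = bayer[0::2, 1::2]
g2 = bayer[1::2, 0::2]
b  = bayer[1::2, 1::2]

# A deliberately naive "demosaic" for illustration: average the two green
# planes and stack the half-resolution planes into a single RGB image.
g = ((g1.astype(np.uint32) + g2) // 2).astype(np.uint16)
rgb = np.dstack([r, g, b])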

The raw image is sometimes likened to the old photographic negative, and whilst many camera vendors use their own proprietary formats, the most portable form of raw file is the Digital Negative (or DNG) format, defined by Adobe in 2004. The question at hand is how to obtain DNG files from Raspberry Pi, in such a way that we can process them using our favourite raw converters.

Obtaining a raw image from Raspberry Pi

Many readers will be familiar with the raspistill application, which captures JPEG images from the attached camera. raspistill includes the -r option, which appends all the raw image data to the end of the JPEG file. JPEG viewers will still display the file as normal but ignore the (many megabytes of) raw data tacked on the end. Such a “JPEG+RAW” file can be captured using the terminal command:

raspistill -r -o image.jpg

Unfortunately this JPEG+RAW format is merely what comes out of the camera stack and is not supported by any raw converters. So to make use of it we will have to convert it into a DNG file.

PyDNG

This Python utility converts the Raspberry Pi’s native JPEG+RAW files into DNGs. PyDNG can be installed from github.com/schoolpost/PyDNG, where more complete instructions are available. In brief, we need to perform the following steps:

git clone https://github.com/schoolpost/PyDNG
cd PyDNG
pip3 install src/.  # note that PyDNG requires Python3

PyDNG can be used as part of larger Python scripts, or it can be run stand-alone. Continuing the raspistill example from before, we can enter in a terminal window:

python3 examples/utility.py image.jpg

The resulting DNG file can be processed by a variety of raw converters. Some are free (such as RawTherapee or dcraw, though the latter is no longer officially developed or supported), and there are many well-known proprietary options (Adobe Camera Raw or Lightroom, for instance). Perhaps users will post in the comments any that they feel have given them good results.
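
If you want to fold the two steps above into one script, here’s a minimal sketch that simply shells out to the same commands shown in this post. It assumes the PyDNG repository was cloned into the current directory and that the utility writes the DNG next to the input file.

import subprocess
from pathlib import Path

def capture_dng(name="image"):
    """Capture a JPEG+RAW with raspistill, then convert it to DNG with PyDNG."""
    jpeg = Path(name + ".jpg")
    subprocess.run(["raspistill", "-r", "-o", str(jpeg)], check=True)
    subprocess.run(["python3", "PyDNG/examples/utility.py", str(jpeg)], check=True)
    return jpeg.with_suffix(".dng")  # assumption: the utility writes the DNG alongside the JPEG

if __name__ == "__main__":
    print("Wrote", capture_dng("test"))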

White balancing and colour matrices

Now, one of the bugbears of processing Raspberry Pi raw files up to this point has been the problem of getting sensible colours. Previously, the images have been rendered with a sickly green cast, simply because no colour balancing is being done and green is normally the most sensitive colour channel. In fact it’s even worse than this, as the RGB values in the raw image merely reflect the sensitivity of the sensor’s photo-sites to different wavelengths, and do not a priori have more than a general correlation with the colours as perceived by our own eyes. This is where we need white balancing and colour matrices.

Correct white balance multipliers are required if neutral parts of the scene are to look, well, neutral. We can use raspistill’s guesstimate of them, found in the JPEG+RAW file (or you can measure your own on a neutral part of the scene, like a grey card). Matrices and look-up tables are then required to convert colour from ‘camera’ space to the final colour space of choice, mostly sRGB or Adobe RGB.
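
For the curious, here’s an illustrative NumPy sketch of what those two stages do. The gains and matrix below are invented numbers purely for demonstration; the real values come from the JPEG+RAW metadata or your own calibration.

import numpy as np

camera_rgb = np.random.rand(480, 640, 3)            # stand-in for demosaiced, camera-space RGB in [0, 1]

wb_gains = np.array([1.8, 1.0, 1.5])                # hypothetical R, G, B white-balance multipliers
ccm = np.array([[ 1.6, -0.4, -0.2],                 # hypothetical camera-to-sRGB colour matrix
                [-0.3,  1.5, -0.2],                 # (each row sums to 1 so whites stay white)
                [ 0.0, -0.6,  1.6]])

balanced = camera_rgb * wb_gains                    # neutral greys become neutral
srgb_linear = np.clip(balanced @ ccm.T, 0.0, 1.0)   # rotate into the sRGB primaries
srgb = srgb_linear ** (1 / 2.2)                     # crude gamma for display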

My thanks go to forum contributors Jack Hogan for measuring these colour matrices, and to Csaba Nagy for implementing them in the PyDNG tool. The results speak for themselves.

Results

Previous attempts at raw conversion are on the left; the results using the updated PyDNG are on the right.

DCP files

For those familiar with DNG files, we include links to DCP (DNG Camera Profile) files (warning: binary format). You can try different ones out in raw converters, and we would encourage users to experiment, to perhaps create their own, and to share their results!

  1. This is a basic colour profile baked into PyDNG, and is the one shown in the results above. It’s sufficiently small that we can view it as a JSON file.
  2. This is an improved (and larger) profile involving look-up tables, and aiming for an overall balanced colour rendition.
  3. This is similar to the previous one, but with some adjustments for skin tones and sky colours.

Note, however, that these files come with a few caveats. Specifically:

  • The calibration is only for a single Raspberry Pi High Quality Camera rather than a known average or “typical” module.
  • The illuminants used for the calibration are merely the ones that we had to hand — the D65 lamp in particular appears to be some way off.
  • The calibration only really works when the colour temperature lies between, or not too far from, the two calibration illuminants, approximately 2900K to 6000K in our case.

So there remains room for improvement. Nevertheless, results across a number of modules have shown these parameters to be a significant step forward.

Acknowledgements

My thanks again to Jack Hogan for performing the colour matrix calibration with DCamProf, and to Csaba Nagy for adding these new features to PyDNG.

Further reading

  1. There are many resources explaining how a raw (Bayer) image is converted into a viewable RGB or YUV image, among them Jack’s blog post.
  2. To understand the role of the colour matrices in a DNG file, please refer to the DNG specification. Chapter 6 in particular describes how they are used.

The post Processing raw image files from a Raspberry Pi High Quality Camera appeared first on Raspberry Pi.

DSLR Motion Capture with Raspberry Pi and OpenCV

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/dslr-motion-capture-with-raspberry-pi-and-opencv/

One of our favourite makers, Pi & Chips (AKA David Pride), wanted to see if they could trigger a DSLR camera to take pictures by using motion detection with OpenCV on Raspberry Pi.

You could certainly do this with a Raspberry Pi High Quality Camera, but David wanted to try with his swanky new Lumix camera. As well as a Raspberry Pi and whichever camera you’re using, you’ll also need a remote control. David sourced a cheap one from Amazon, since he knew full well he was going to be… breaking it a bit.

Breaking the remote a bit

When it came to the “breaking” part, David explains: “I was hoping to be able to just re-solder some connectors to the button but it was a dual function button depending on depth of press. I therefore got a set of probes out and traced which pins on the chip were responsible for the actual shutter release and then *carefully* managed to add two fine wires.”

Further breaking

Next, David added Dupont cables to the ends of the wires to allow access to the breadboard, holding the cables in place with a blob of hot glue. Then a very simple circuit, using an NPN transistor switched via GPIO, gave remote control of the camera from Python.

Raspberry Pi on the right, working together with the remote control’s innards on the left
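
David’s exact code isn’t shown here, but the idea is simple enough to sketch with the RPi.GPIO library: pull the pin driving the transistor’s base high for a moment to ‘press’ the remote’s shutter button. The pin number is an assumption; use whichever GPIO you wired to the transistor.

import time
import RPi.GPIO as GPIO

SHUTTER_PIN = 17                     # assumption: change to match your wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(SHUTTER_PIN, GPIO.OUT, initial=GPIO.LOW)

def fire_shutter(hold=0.2):
    """Briefly switch the transistor on so the remote's shutter contact closes."""
    GPIO.output(SHUTTER_PIN, GPIO.HIGH)
    time.sleep(hold)
    GPIO.output(SHUTTER_PIN, GPIO.LOW)

if __name__ == "__main__":
    fire_shutter()
    GPIO.cleanup()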

David then added OpenCV to the mix, using this tutorial on PyImageSearch. He took the basic motion detection script and added a tiny hack to trigger the GPIO when motion was detected.

He needed to add a delay to the start of the script so he could position stuff, or himself, in front of the camera with time to spare. Got to think of those angles.
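
Put together, the hack might look something like this sketch (again, not David’s actual script): basic frame-differencing motion detection in the spirit of the PyImageSearch tutorial, a start-up delay for getting into position, and a call to the fire_shutter() helper from the previous snippet whenever enough pixels change.

import time
import cv2

time.sleep(10)                               # time to get yourself in front of the camera

cap = cv2.VideoCapture(0)                    # any OpenCV-readable camera works for detection
prev = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if prev is None:
        prev = gray
        continue
    delta = cv2.absdiff(prev, gray)
    thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
    if cv2.countNonZero(thresh) > 5000:      # crude "enough pixels changed" test
        fire_shutter()                       # defined in the GPIO sketch above
        time.sleep(2)                        # avoid re-triggering on the same movement
    prev = gray

cap.release()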

David concludes: “The camera was set to fully manual and to a really nice fast shutter speed. There is almost no delay at all between motion being detected and the Lumix actually taking pictures, I was really surprised how instantaneous it was.”

The whole setup mounted on a tripod ready to play

Here are some of the visuals captured by this Raspberry Pi-powered project…

Take a look at some more of David’s projects over at Pi & Chips.

The post DSLR Motion Capture with Raspberry Pi and OpenCV appeared first on Raspberry Pi.

Go sailing with this stop-motion 3D-printed boat

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/go-sailing-with-this-stop-motion-3d-printed-boat/

Shot on a Raspberry Pi Camera Module, this stop-motion sequence is made up of 180 photos that took two hours to shoot and another hour to process.

The trick lies in the Camera Module’s preview letting you change the alpha transparency of the overlay image, which is the previous frame. It’s all explained in the official documentation, but basically, the preview can render multiple layers simultaneously: text, images, and so on. Being able to change the overlay’s transparency from the command line means this maker could see how the next frame (or the object) should be aligned. In 2D animation, this process is called ‘onion skinning’.

You can see the Raspberry Pi Camera Module on the bottom left in front of Yuksel’s hand

So why the Raspberry Pi Camera Module? Redditor /DIY_Maxwell aka Yuksel Temiz explains: “I make stop-motion animations as a hobby, using either my SLR or phone with a remote shutter. In most cases I didn’t need precision, but some animations like this are very challenging because I need to know the exact position of my object (the boat in this case) in each frame. The Raspberry Pi camera was great because I could overlay the previously captured frame into the live preview, and I could quickly change the transparency of the overlay to see how precise the location and how smooth the motion.”
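
Here’s a rough sketch of that onion-skinning workflow using the picamera library (it isn’t Yuksel’s script, and the resolution, opacity, and frame count are arbitrary choices): each captured frame is padded and re-displayed as a semi-transparent overlay on top of the live preview, so the next pose can be lined up against it.

from PIL import Image
import picamera

FRAME = "frame_{:03d}.jpg"

with picamera.PiCamera(resolution=(1280, 720)) as camera:
    camera.start_preview()
    overlay = None
    for i in range(180):
        input("Position the model for frame {}, then press Enter...".format(i))
        camera.capture(FRAME.format(i), use_video_port=True)

        # Pad the frame to the block size the overlay renderer expects
        # (width to a multiple of 32, height to a multiple of 16).
        img = Image.open(FRAME.format(i))
        pad = Image.new("RGB", ((img.width + 31) // 32 * 32,
                                (img.height + 15) // 16 * 16))
        pad.paste(img, (0, 0))

        if overlay is not None:
            camera.remove_overlay(overlay)
        overlay = camera.add_overlay(pad.tobytes(), size=img.size,
                                     layer=3, alpha=96)   # roughly 40% opaque
    camera.stop_preview()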

You can easily make simple, linear stop-motion videos by just capturing your 3D printer while it’s doing its thing. Yuksel created a bolting horse (above) in that way. The boat sequence was more complicated though, because it rotates, and because pieces had to be added and removed.

The official docs are really comprehensive and span basic to advanced skill levels. Yuksel even walks you through getting started with the installation of Raspberry Pi OS.

Yuksel’s Raspberry Pi + Lego microscope

We’ve seen Yuksel’s handiwork before, and this new project was made in part by modifying the code from the open-source microscope (above) they made using Raspberry Pi and LEGO. They’re now planning to make a nice GUI and share the project as an open-source stop-motion animation tool.

The post Go sailing with this stop-motion 3D-printed boat appeared first on Raspberry Pi.

Raspberry Pi High Quality Camera powers up homemade microscope

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/raspberry-pi-high-quality-camera-powers-up-homemade-microscope/

Wow, DIY_Maxwell, wow. This reddit user got their hands on one of our new Raspberry Pi High Quality Cameras and decided to upgrade their homemade microscope with it. The brains of the thing are also provided by a Raspberry Pi.

Key features

  • Raspberry Pi OS
  • 8-megapixel CMOS camera (Full HD 30 fps video)
  • Imaging features from several centimetres to several micrometres without changing the lens
  • 6 stepper motors (X, Y, tilt, rotation, magnification, focus)
  • Variable speed control using a joystick controller or keyboard
  • Uniform illumination for imaging reflective surfaces
  • Modular design: stages and modules can be arranged in any configuration depending on the application

Here’s what a penny looks like under this powerful microscope:

Check out this video from the original reddit post to see the microscope in action.

Bill of materials


The user has put together very detailed, image-led build instructions walking you through how to create the linear actuators, camera setup, rotary stage, illumination, tilt mechanism, and electronics.

The project uses a program written in Python 3 (MicroscoPy.py) to control the microscope, modify camera settings, and take photos and videos controlled by keyboard input.
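
As a flavour of what such a control script involves, here’s a hedged sketch (not MicroscoPy.py itself, and it leaves out the stepper motors entirely) of a keyboard-driven loop that adjusts a camera setting and grabs stills with picamera.

import picamera

with picamera.PiCamera() as camera:
    camera.start_preview()
    shot = 0
    while True:
        key = input("[c]apture, [+]/[-] exposure compensation, [q]uit: ").strip()
        if key == "c":
            camera.capture("micro_{:04d}.jpg".format(shot))
            shot += 1
        elif key == "+":
            camera.exposure_compensation = min(camera.exposure_compensation + 1, 25)
        elif key == "-":
            camera.exposure_compensation = max(camera.exposure_compensation - 1, -25)
        elif key == "q":
            break
    camera.stop_preview()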


Here is a quick visual to show you the exact ports you need for this project on whatever Raspberry Pi you have:


In the comments of the original reddit post, DIY_Maxwell explains that the $10 objective lens used in the project limited the Raspberry Pi High Quality Camera’s performance. They predict you can expect even better images with a heavier investment in the lens.

The project came about because a team at IBM Research Europe in Zurich, who develop microfluidic technologies for medical applications, needed to provide high-quality photos and videos of their microfluidic chips.

In a blog for IEEE Spectrum, IBM team member Yuksel Temiz explains: “Taking a photo of a microfluidic chip is not easy. The chips are typically too big to fit into the field of view of a standard microscope, but they have fine features that cannot be resolved using a regular camera. Uniform illumination is also critical because the chips are often made of highly reflective or transparent materials. Looking at publications from other research groups, it’s obvious that this is a common challenge. With this motivation, I devoted some of my free time to designing a multipurpose and compact lab instrument that can take macro photos from almost any angle.”

Here’s the full story about how the Raspberry Pi-powered creation came to be.

And for some extra-credit homework, you can check out this document comparing the performance of the microscope using our Raspberry Pi Camera Module v2 and the High Quality Camera. The key takeaway for those wishing to upgrade their old projects with the newer camera is to remember that it’s heavier and 50% bigger, so you’ll need to tweak your housing to fit it in.

The post Raspberry Pi High Quality Camera powers up homemade microscope appeared first on Raspberry Pi.