Raspberry Pi High Quality Camera powers up homemade microscope

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/raspberry-pi-high-quality-camera-powers-up-homemade-microscope/

Wow, DIY_Maxwell, wow. This reddit user got their hands on one of our new Raspberry Pi High Quality Cameras and decided to upgrade their homemade microscope with it. The brains of the thing are also provided by a Raspberry Pi.

Key features

  • Raspberry Pi OS
  • 8-megapixel CMOS camera (Full HD 30 fps video)
  • Imaging of features from several centimetres down to several micrometres without changing the lens
  • 6 stepper motors (X, Y, tilt, rotation, magnification, focus)
  • Variable speed control using a joystick controller or keyboard
  • Uniform illumination for imaging reflective surfaces
  • Modular design: stages and modules can be arranged in any configuration depending on the application

Here’s what a penny looks like under this powerful microscope:

Check out this video from the original reddit post to see the microscope in action.

Bill of materials

The user has put together very detailed, image-led build instructions walking you through how to create the linear actuators, camera setup, rotary stage, illumination, tilt mechanism, and electronics.

The project uses a Python 3 program (MicroscoPy.py) to control the microscope, modify camera settings, and capture photos and videos, all driven by keyboard input.
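
A toy sketch of that kind of keyboard-driven control loop might look like the following. To be clear, this is an illustration rather than the actual MicroscoPy.py (which also drives the six stepper motors); it uses curses for keypresses and picamera for capture:

    import curses
    from picamera import PiCamera

    def main(screen):
        camera = PiCamera()
        camera.start_preview()
        shot = 0
        while True:
            key = screen.getkey()      # blocks until a key is pressed
            if key == 'c':             # 'c' captures a numbered still
                camera.capture('micrograph_%03d.jpg' % shot)
                shot += 1
            elif key == 'q':           # 'q' quits
                break
        camera.stop_preview()
        camera.close()

    curses.wrapper(main)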

Here is a quick visual to show you the exact ports you need for this project on whatever Raspberry Pi you have:

In the comments of the original reddit post, DIY_Maxwell explains that the $10 objective lens used in the project limited the Raspberry Pi High Quality Camera’s performance. They predict you can expect even better images with a heavier investment in the lens.

The project is the work of a team at IBM Research–Europe in Zurich who develop microfluidic technologies for medical applications and needed to produce high-quality photos and videos of their microfluidic chips.

In a blog for IEEE Spectrum, IBM team member Yuksel Temiz explains: “Taking a photo of a microfluidic chip is not easy. The chips are typically too big to fit into the field of view of a standard microscope, but they have fine features that cannot be resolved using a regular camera. Uniform illumination is also critical because the chips are often made of highly reflective or transparent materials. Looking at publications from other research groups, it’s obvious that this is a common challenge. With this motivation, I devoted some of my free time to designing a multipurpose and compact lab instrument that can take macro photos from almost any angle.”

Here’s the full story about how the Raspberry Pi-powered creation came to be.

And for some extra-credit homework, you can check out this document comparing the performance of the microscope using our Raspberry Pi Camera Module v2 and the High Quality Camera. The key takeaway for those wishing to upgrade their old projects with the newer camera is to remember that it’s heavier and 50% bigger, so you’ll need to tweak your housing to fit it in.

The post Raspberry Pi High Quality Camera powers up homemade microscope appeared first on Raspberry Pi.

An open source camera stack for Raspberry Pi using libcamera

Post Syndicated from David Plowman original https://www.raspberrypi.org/blog/an-open-source-camera-stack-for-raspberry-pi-using-libcamera/

Since we released the first Raspberry Pi camera module back in 2013, users have been clamouring for better access to the internals of the camera system, and even to be able to attach camera sensors of their own to the Raspberry Pi board. Today we’re releasing our first version of a new open source camera stack which makes these wishes a reality.

(Note: in what follows, you may wish to refer to the glossary at the end of this post.)

We’ve had the building blocks for connecting other sensors and providing lower-level access to the image processing for a while, but Linux has been missing a convenient way for applications to take advantage of this. In late 2018 a group of Linux developers started a project called libcamera to address that. We’ve been working with them since then, and we’re pleased now to announce a camera stack that operates within this new framework.

Here’s how our work fits into the libcamera project.

We’ve supplied a Pipeline Handler that glues together our drivers and control algorithms, and presents them to libcamera with the API it expects.

Here’s a little more on what this has entailed.

V4L2 drivers

V4L2 (Video for Linux 2) is the Linux kernel driver framework for devices that manipulate images and video. It provides a standardised mechanism for passing video buffers to, and/or receiving them from, different hardware devices. Whilst it has proved somewhat awkward as a means of driving entire complex camera systems, it can nonetheless provide the basis of the hardware drivers that libcamera needs to use.

Consequently, we’ve upgraded both the version 1 (Omnivision OV5647) and version 2 (Sony IMX219) camera drivers so that they feature a variety of modes and resolutions, operating in the standard V4L2 manner. Support for the new Raspberry Pi High Quality Camera (using the Sony IMX477) will be following shortly. The Broadcom Unicam driver – also V4L2‑based – has been enhanced too, signalling the start of each camera frame to the camera stack.

Finally, dumping raw camera frames (in Bayer format) into memory is of limited value, so the V4L2 Broadcom ISP driver provides all the controls needed to turn raw images into beautiful pictures!

Configuration and control algorithms

Of course, being able to configure Broadcom’s ISP doesn’t help you to know what parameters to supply. For this reason, Raspberry Pi has developed from scratch its own suite of ISP control algorithms (sometimes referred to generically as 3A Algorithms), and these are made available to our users as well. Some of the most well known control algorithms include:

  • AEC/AGC (Auto Exposure Control/Auto Gain Control): this monitors image statistics in order to drive the camera exposure to an appropriate level.
  • AWB (Auto White Balance): this corrects for the ambient light that is illuminating a scene, and makes objects that appear grey to our eyes come out actually grey in the final image (a toy illustration follows this list).
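
To make the idea concrete, here is a “grey world” white balance in a few lines of Python with NumPy. This is only a toy illustration of the principle, not Raspberry Pi’s actual AWB algorithm, which is considerably more sophisticated:

    import numpy as np

    def grey_world_awb(rgb):
        # Assume the scene averages to grey: scale the red and blue
        # channels so their means match the green channel's mean.
        # rgb: float array of shape (H, W, 3), values in [0, 1].
        means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel averages
        gains = means[1] / means                  # green gain is 1.0
        return np.clip(rgb * gains, 0.0, 1.0)

    # A scene lit by a warm, reddish source comes out neutral afterwards.
    scene = np.clip(np.random.rand(8, 8, 3) * [1.3, 1.0, 0.7], 0.0, 1.0)
    balanced = grey_world_awb(scene)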

But there are many others too, such as ALSC (Auto Lens Shading Correction, which corrects vignetting and colour variation across an image), and control for noise, sharpness, contrast, and all other aspects of image processing. Here’s how they work together.

The control algorithms all receive statistics information from the ISP, and cooperate in filling in metadata for each image passing through the pipeline. At the end, the metadata is used to update control parameters in both the image sensor and the ISP.
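
Schematically, and purely as an illustration of the flow just described (these are not libcamera’s real interfaces), the per-frame loop amounts to something like:

    def process_frame(stats, algorithms, sensor, isp):
        metadata = {}
        for algorithm in algorithms:    # e.g. AGC, AWB, ALSC, ...
            algorithm.process(stats, metadata)   # each adds its results
        sensor.apply(metadata)          # e.g. exposure time, analogue gain
        isp.apply(metadata)             # e.g. colour gains, shading tables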

Previously these functions were proprietary and closed source, and ran on the Broadcom GPU. Now, the GPU just shovels pixels through the ISP hardware block and notifies us when it’s done; practically all the configuration is computed and supplied from open source Raspberry Pi code on the ARM processor. A shim layer still exists on the GPU, and turns Raspberry Pi’s own image processing configuration into the proprietary functions of the Broadcom SoC.

To help you configure Raspberry Pi’s control algorithms correctly for a new camera, we include a Camera Tuning Tool. Or if you’d rather do your own thing, it’s easy to modify the supplied algorithms, or indeed to replace them entirely with your own.

Why libcamera?

Whilst ISP vendors are in some cases contributing open source V4L2 drivers, the reality is that all ISPs are very different. Advertising these differences through kernel APIs is fine – but it creates an almighty headache for anyone trying to write a portable camera application. Fortunately, this is exactly the problem that libcamera solves.

We provide all the pieces for Raspberry Pi-based libcamera systems to work simply “out of the box”. libcamera remains a work in progress, but we look forward to continuing to help this effort, and to contributing an open and accessible development platform that is available to everyone.

Summing it all up

So far as we know, in every comparable camera system large parts – including at least the control (3A) algorithms, and possibly driver code too – are closed and proprietary. Indeed, for anyone wishing to customise a camera system – perhaps with their own choice of sensor – or to develop their own algorithms, there would seem to be very few options – unless perhaps you happen to be an extremely large corporation.

In this respect, the new Raspberry Pi Open Source Camera System is providing something distinctly novel. For some users and applications, we expect its accessible and non-secretive nature may even prove quite game-changing.

What about existing camera applications?

The new open source camera system does not replace any existing camera functionality, and for the foreseeable future the two will continue to co-exist. In due course we expect to provide additional libcamera-based versions of raspistill, raspivid and PiCamera – so stay tuned!

Where next?

If you want to learn more about the libcamera project, please visit https://libcamera.org.

To try libcamera for yourself with a Raspberry Pi, please follow the instructions in our online documentation, where you’ll also find the full Raspberry Pi Camera Algorithm and Tuning Guide.

If you’d like to know more, and can’t find an answer in our documentation, please go to the Camera Board forum. We’ll be sure to keep our eyes open there to pick up any of your questions.

Acknowledgements

Thanks to Naushir Patuck and Dave Stevenson for doing all the really tricky bits (lots of V4L2-wrangling).

Thanks also to the libcamera team (Laurent Pinchart, Kieran Bingham, Jacopo Mondi and Niklas Söderlund) for all their help in making this project possible.

Glossary

3A, 3A Algorithms: refers to AEC/AGC (Auto Exposure Control/Auto Gain Control), AWB (Auto White Balance) and AF (Auto Focus) algorithms, but may implicitly cover other ISP control algorithms. Note that Raspberry Pi does not implement AF (Auto Focus), as none of our supported camera modules requires it
AEC: Auto Exposure Control
AF: Auto Focus
AGC: Auto Gain Control
ALSC: Auto Lens Shading Correction, which corrects vignetting and colour variations across an image. These are normally caused by the type of lens being used and can vary in different lighting conditions
AWB: Auto White Balance
Bayer: an image format where each pixel has only one colour component (one of R, G or B), creating a sort of “colour mosaic”. All the missing colour values must subsequently be interpolated. This is a raw image format meaning that no noise, sharpness, gamma, or any other processing has yet been applied to the image
CSI-2: Camera Serial Interface (version) 2. This is the interface format between a camera sensor and Raspberry Pi
GPU: Graphics Processing Unit. But in this case it refers specifically to the multimedia coprocessor on the Broadcom SoC. This multimedia processor is proprietary and closed source, and cannot directly be programmed by Raspberry Pi users
ISP: Image Signal Processor. A hardware block that turns raw (Bayer) camera images into full colour images (either RGB or YUV)
Raw: see Bayer
SoC: System on Chip. The Broadcom processor at the heart of all Raspberry Pis
Unicam: the CSI-2 receiver on the Broadcom SoC on the Raspberry Pi. Unicam receives pixels being streamed out by the image sensor
V4L2: Video for Linux 2. The Linux kernel driver framework for devices that process video images. This includes image sensors, CSI-2 receivers, and ISPs

The post An open source camera stack for Raspberry Pi using libcamera appeared first on Raspberry Pi.

New book: The Official Raspberry Pi Camera Guide

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/new-book-the-official-raspberry-pi-camera-guide/

To coincide with yesterday’s launch of the Raspberry Pi High Quality Camera, Raspberry Pi Press has created a new Official Camera Guide to help you get started and inspire your future projects.

The Raspberry Pi High Quality Camera

Connecting a High Quality Camera turns your Raspberry Pi into a powerful digital camera. This 132-page book tells you everything you need to know to set up the camera, attach a lens, and start capturing high-resolution photos and video footage.

Make those photos snazzy

The book tells you everything you need to know in order to use the camera by issuing commands in a terminal window or via SSH. It also demonstrates how to control the camera with Python using the excellent picamera library.

You’ll discover the many image modes and effects available – our favourite is ‘posterise’.

Build some amazing camera-based projects

Once you’ve got the basics down, you can start using your camera for a variety of exciting Raspberry Pi projects showcased across the book’s 17 packed chapters. Want to make a camera trap to monitor the wildlife in your garden? Build a smart door with a video doorbell? Try out high-speed and time-lapse photography? Or even find out which car is parked in your driveway using automatic number-plate recognition? The book has all this covered, and a whole lot more.

Don’t have a High Quality Camera yet? No problem. All the commands in the book are exactly the same for the standard Raspberry Pi Camera Module, so you can also use this model with the help of our Official Camera Guide.

Snap it up!

The Official Raspberry Pi Camera Guide is available now from the Raspberry Pi Press online store for £10. And, as always, we have also released the book as a free PDF. But the physical book feels so good to hold and looks so handsome on your bookshelf, we don’t think you’ll regret getting your hands on the print edition.

Whichever format you choose, have fun shooting amazing photos and videos with the new High Quality Camera. And do share what you capture with us on social media using #ShotOnRaspberryPi.

The post New book: The Official Raspberry Pi Camera Guide appeared first on Raspberry Pi.

Using Raspberry Pi for deeper learning in education

Post Syndicated from Sian Williams Page original https://www.raspberrypi.org/blog/using-raspberry-pi-for-deeper-learning-in-education/

Using deeper learning as a framework for transformative educational experiences, Brent Richardson outlines the case for a pedagogical approach that challenges students using a Raspberry Pi. From the latest issue of Hello World magazine — out today!

A benefit of completing school and entering the workforce is being able to kiss standardised tests goodbye. That is, if you don’t count those occasional ‘prove you watched the webinar’ quizzes some supervisors require.

In the real world, assessments often happen on the fly and are based on each employee’s ability to successfully complete tasks and solve problems. It is often obvious to an employer when their staff members are unprepared.

Formal education continues to focus on accountability tools that measure base-level proficiencies instead of more complex skills like problem-solving and communication.

One of the main reasons the U.S. education system is criticised for its reliance on standardised tests is that this method of assessing a student’s comprehension of a subject can hinder their ability to transfer knowledge from an existing situation to a new situation. The effect leaves students ill-prepared for higher education and the workforce.

A study conducted by the National Association of Colleges and Employers found a significant gap between how students felt about their abilities and their employers’ observations. In seven out of eight categories, students rated their skills much higher than their prospective employers did.

Some people believe that this gap continues to widen because teaching within the confines of a standardised test encourages teachers to narrow their instruction. The focus becomes preparing students with a limited scope of learning that is beneficial for testing.

With this approach to learning, it is possible that students can excel at test-taking and still struggle with applying knowledge in new ways. Educators need to have the support to not only prepare students for tests but also to develop ways that will help their students connect to the material in a meaningful manner.

In an effort to boost the U.S. education system’s ability to increase the knowledge and skills of students, many private corporations and nonprofits directly support public education. In 2010, the Hewlett Foundation went so far as to develop a framework called ‘deeper learning’ to help guide its education partners in preparing learners for success.

The principles of deeper learning

Deeper learning focuses on six key competencies:

    1. Master core academic content
    2. Think critically and solve complex problems
    3. Work collaboratively
    4. Communicate effectively
    5. Learn how to learn
    6. Develop academic mindsets

This framework ensures that learners are active participants in their education. Students are immersed in a challenging curriculum that requires them to seek out and acquire new information, apply what they have learned, and build upon that to create new knowledge.

While deeper learning experiences are important for all students, research shows that schools that engage students from low-income families and students of colour in deeper learning have stronger academic outcomes, better attendance and behaviour, and lower dropout rates. This results in higher graduation rates, and higher rates of college attendance and perseverance, than comparison schools serving similar students. This pedagogical approach is one we strive to embed in all our work at Fab Lab Houston.

A deeper learning timelapse project

The importance of deeper learning was undeniable when a group of students I worked with in Houston built a solar-powered time-lapse camera. Through this collaborative project, we quickly found ourselves moving beyond classroom pedagogy to a ‘hero’s journey’ — where students’ learning paths echo a centuries-old narrative arc in which a protagonist goes on an adventure, makes new friends, encounters roadblocks, overcomes adversity, and returns home a changed person.

In this spirit, we challenged the students with a simple objective: ‘Make a device to document the construction of Fab Lab Houston’. In just one sentence, participants understood enough to know where the finish line was without being told exactly how to get there. This shift in approach pushed students to ask questions as they attempted to understand constraints and potential approaches.

Students shared ideas ranging from drone video to photography robots. Together, everyone began to break down these big ideas into smaller parts and better define the project we would tackle. To my surprise, even the students who typically refused to do most things were excited to poke holes in unrealistic ideas. It was decided, among other things, that drones would be too expensive, robots might not be waterproof, and time was always a concern.

The decision was made to move forward with the stationary time-lapse camera, because although the students didn’t know how to accomplish all the aspects of the project, they could at least understand the project enough to break it down into doable parts and develop a ballpark budget. Students formed three teams and picked one aspect of the project to tackle. The three subgroups focused on taking photos and converting them to video, developing a remote power solution, and building weatherproof housing.

A group of students found sample code for Raspberry Pi that could be repurposed to take photos and store them sequentially on a USB drive. After quick success, a few ambitious learners started working to automate the image post-processing into video. Eventually, after attempting multiple ways to program the computer to dynamically turn images into video, one team member discovered a new approach: since the photos were stored with a sequential numbering system, thousands of photos could be loaded straight off the USB drive into Adobe Premiere Pro using its ‘Automate to Sequence’ tool.

A great deal of time was spent measuring power consumption and calculating solar panel and battery size. Since the project would be placed on a pole in the middle of a construction site for six months, the students were challenged with making their solar-powered time-lapse camera as efficient as possible.

Waking the device after it was put into sleep mode proved to be more difficult than anticipated, so a hardware solution was tested. The Raspberry Pi computer was programmed to boot up when receiving power, take a picture, and then shut itself down. With the Raspberry Pi safely shut down, a timer relay cut power for ten minutes before returning power and starting the cycle again.
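
A minimal sketch of that boot-capture-shutdown cycle could look like the following. This is an illustration of the approach rather than the students’ actual script, and the USB mount point is a placeholder:

    import os
    import time
    from picamera import PiCamera

    # Run at boot (e.g. from rc.local): take one photo, then power down.
    camera = PiCamera()
    time.sleep(2)                         # let exposure and gain settle
    filename = '/mnt/usb/photo_%d.jpg' % int(time.time())
    camera.capture(filename)
    os.system('sudo shutdown -h now')     # timer relay restores power later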

Finally, a waterproof container had to be built to house the electronics and battery. To avoid overcomplicating the process, the group sourced a plastic weatherproof ammunition storage box to modify. Students operated a 3D printer to create custom parts for the box.

After cutting a hole for the camera, a small piece of glass was attached to a 3D-printed hood, ensuring no water entered the box. On the rear of the box, they printed a part to hold and seal the cable from the solar panel where it entered the box. It only took a few sessions before the group produced a functioning prototype. The project was then placed outside for a day to test the capability of the device.

The test appeared successful when the students checked the USB drive: it was full of high-quality images captured every ten minutes. But when the drive was connected back to the Raspberry Pi, a student noticed that all the parts inside the case had shifted. The high temperature on the day of the test had melted the glue used to attach everything. This unexpected problem challenged students to research a better alternative and reattach the pieces.

Once the students felt confident in their device’s functionality, it was handed over to the construction crew, who installed the camera on a twenty-foot pole. The installation went smoothly and the students anxiously waited to see the results.

Less than a week after the camera went up, Houston was hit hard by the rains brought on by Hurricane Harvey. The group was nervous to see whether the project they had constructed would survive. However, when they saw that their camera had survived and was still working, they felt a great sense of pride.

They recognised that it was the group’s collaborative effort to problem-solve potential challenges that allowed their camera not only to survive, but to capture a spectacular series of photos showing the impact of the hurricane where it was placed.

“BakerRipleyTimeLapse2” by Brent Richardson on Vimeo.

A worthwhile risk

Overcoming many hiccups throughout the project was a great illustration of how the students learned how to learn and to develop an academic mindset; a setback that at the beginning of the project might have seemed insurmountable was laughable in the end.

Throughout my experience as a classroom teacher, a museum educator, and now a director of a digital makerspace, I’ve seen countless students struggle to understand the relevance of learning, and this has led me to develop a strong desire to expand the use of deeper learning.

Sometimes it feels like a risk to facilitate learning rather than impart knowledge, but seeing a student’s development into a changed person, ready to help someone else learn, makes it worth the effort. Let’s challenge ourselves as educators to help students acquire knowledge and use it.

Get your FREE copy of Hello World today

Issue 12 of Hello World is available now as a FREE PDF download. UK-based educators can also subscribe to receive Hello World directly to their door in all its shiny printed goodness. Visit the Hello World website for more information.

The post Using Raspberry Pi for deeper learning in education appeared first on Raspberry Pi.

Raspberry Pi 3 baby monitor | Hackspace magazine #26

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/raspberry-pi-3-baby-monitor-hackspace-magazine-26/

You might have a baby/dog/hamster that you want to keep an eye on when you’re not there. We understand: they’re lovely, especially hamsters. Here’s how HackSpace magazine contributor Dr Andrew Lewis built a Raspberry Pi baby cam to watch over his small creatures…

When a project is going to be used in the home, it pays to take a little bit of extra time on appearance

Wireless baby monitors

You can get wireless baby monitors that have a whole range of great features for making sure your little ones are safe, sound, and sleeping happily, but they come with a hefty price tag.

In this article, you’ll find out how to make a Raspberry Pi-powered streaming camera, and combine it with a built-in I2C sensor pack that monitors temperature, pressure, and humidity. You’ll also see how you can use the GPIO pins on Raspberry Pi to turn an LED night light on and off using a web interface.

The hardware for this project is quite simple, and involves minimal soldering, but the first thing you need to do is to install Raspbian onto a microSD card for your Raspberry Pi. If you’re planning on doing a headless install, you’ll also need to enable SSH by creating an empty file called ssh in the boot partition of the card, along with a file containing your wireless LAN details called wpa_supplicant.conf.
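
For reference, a minimal wpa_supplicant.conf looks something like this; substitute your own country code and network details:

    country=GB
    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1

    network={
        ssid="YourNetworkName"
        psk="YourNetworkPassword"
    }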

You can download the code for this as well as the 3D-printable files from our GitHub. You’ll need to transfer the code to the Raspberry Pi. Next, connect the camera, the BME280 board, and the LEDs to the Raspberry Pi, as shown in the circuit diagram.

The BME280 module uses the I2C connection on pins 3 and 5 of the GPIO, taking power from pins 1 and 9. The LEDs connect directly to pins 19 and 20, and the camera cable fits into the camera connector.

Insert the microSD card into the Raspberry Pi and boot up. If everything is working OK, you should be able to see the IP address for your device listed on your hub or router, and you should be able to connect to it via SSH. If you don’t see the Raspberry Pi listed, check your wireless connection details and make sure your adapter is supplying enough power. It’s worth taking the time to assign your Raspberry Pi with a static IP address on your network, so it can’t change its IP address unexpectedly.

Smile for Picamera

Use the raspi-config application to enable the camera interface and the I2C interface. If you’re planning on modifying the code yourself, we recommend enabling VNC access as well, because it will make editing and debugging the code much easier once the device is put together. All that remains on the software side is to update apt, download the babycam.py script, install any dependencies with pip, and set the script to run automatically.

The main dependencies for the babycam.py script are the RPi.bme280 module, Flask, PyAudio, picamera, and NumPy. Chances are that these are already installed on your system by default, with the exception of RPi.bme280, which can be installed by typing sudo pip3 install RPi.bme280 from the terminal.

Once all of the dependencies are present, load up the script, give it a test run, and point your web browser at port 8000 on the Raspberry Pi. You should see a webpage with a camera image, controls for the LED lights, and a read-out of the temperature, pressure, and humidity of the room.
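
As a cut-down illustration of the sensor side of such a script (this is a sketch, not the actual babycam.py), reading the BME280 with the RPi.bme280 module and serving the values with Flask on port 8000 looks roughly like this:

    import bme280                      # provided by the RPi.bme280 package
    import smbus2
    from flask import Flask, jsonify

    app = Flask(__name__)
    bus = smbus2.SMBus(1)              # I2C bus 1: GPIO pins 3 and 5
    ADDRESS = 0x76                     # the BME280's usual I2C address
    calibration = bme280.load_calibration_params(bus, ADDRESS)

    @app.route('/sensors')
    def sensors():
        sample = bme280.sample(bus, ADDRESS, calibration)
        return jsonify(temperature=sample.temperature,
                       pressure=sample.pressure,
                       humidity=sample.humidity)

    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=8000)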

Finishing a 3D print by applying a thin layer of car body filler and sanding back will give a much smoother surface. This isn’t always necessary, but if your filament is damp or your nozzle is worn, it can make a model look much better when it’s painted

The easiest way to get the babycam.py script to run on boot is to add a line to the rc.local file. Assuming that the babycam.py file is located in your home directory, you should add the line python3 /home/pi/babycam.py & to the rc.local file, just before the line that reads exit 0. It’s very important that you include the ampersand at the end of the line, otherwise the Python script will not be run in a separate process, the rc.local file will never complete, and your Raspberry Pi will never boot.

Tinned Raspberry Pi

With the software and hardware working, you can start putting the case together. You might need to scale the 3D models to suit the tin can you have before you print them out, so measure your tin before you click Print. You’ll also want to remove any inner lip from the top of the can using a can opener, and make a small hole in the side of the can near the bottom for the USB power cable. Next, make a hole in the bottom of the can for the LED cables to pass through.

If you want to add more than a couple of LEDs (or want to use brighter LEDs), you should connect your LEDs to the power input, and use a transistor on the GPIO to trigger them

If you haven’t already done so, solder appropriate leads to your LEDs, and don’t forget to put a 330 Ω resistor in-line on the positive side. The neck of the camera is supported by two lengths of aluminium armature wire. Push the wire up through each of the printed neck pieces, and use a clean soldering iron to weld the pieces together in the middle. Push the neck into the printed top section, and weld into place with a soldering iron from underneath. Be careful not to block the narrow slot with plastic, as this is where the camera cable passes up through the neck and into the camera.

You need to mount the BME280 so that the sensor is exposed to the air in the room. Do this by drilling a small hole in the 3D-printed top piece and hot-gluing the sensor into position. If you’re going to use the optional microphone, you can add an extra hole and glue the mic into place in the same way. A short USB port extender will give you enough cable to plug the USB microphone into the socket on your Raspberry Pi.

Paint the tin can and the 3D-printed parts. We found that spray blackboard paint gives a good effect on 3D-printed parts, and PlastiKote stone effect paint made the tin can look a little more tactile than a flat colour. Once the paint is dry, pass the camera cable up through the slot in the neck, and then apply the heat-shrink tubing to cover the neck with a small gap at the top and bottom. Connect the camera to the top of the cable, and push the front piece on to hold it into place. Glue shouldn’t be necessary, but a little hot glue might help if the front parts don’t hold together well.

Push the power cable through the hole in the case, and secure it with a knot and some hot glue. Leave enough cable free to easily remove the top section from the can in future without stressing the wires.

If you’re having trouble getting the armature wire through the 3D-printed parts, try using a drill to help twist the wire through

This is getting heavy

Glue the bottom section onto the can with hot glue, and hot-glue the LEDs into place on the bottom, feeding the cable up through the hole and into the GPIO header. This is a good time to hot-glue a weight into the bottom of the can to improve its stability. I used an old weight from some kitchen scales, but any small weight should be fine. Finally, fix the Raspberry Pi into place on the top piece by either drilling or gluing, then reconnect the rest of the cables, and push the 3D-printed top section into the tin can. If the top section is too loose, you can add a little bit of hot glue to hold things together once you know everything is working.

With the right type of paint, even old tin cans make a good-looking enclosure for a project

That should be all of the steps complete. Plug in the USB and check the camera from a web browser. The babycam.py script includes video, sensors, and light control. If you are using the optional USB microphone, you can expand the functionality of the app to include audio streaming, use cry detection to activate the LEDs (don’t make the LEDs too stimulating or you’ll never get a night’s sleep again), or maybe even add a Bluetooth speaker and integrate a home assistant.

HackSpace magazine issue 26

HackSpace magazine is out now, available in print from your local newsagent, the Raspberry Pi Store in Cambridge, and online from Raspberry Pi Press.

If you love HackSpace magazine as much as we do, why not have a look at the subscription offers available, including the 12-month deal that comes with a free Adafruit Circuit Playground!

And, as always, you can download the free PDF here.

The post Raspberry Pi 3 baby monitor | Hackspace magazine #26 appeared first on Raspberry Pi.

Retrofit a vintage camera flash with a Raspberry Pi Camera Module

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/retrofit-vintage-camera-flash-with-camera-module/

Wanting to break from the standard practice of updating old analogue cameras with digital technology, Alan Wang decided to retrofit a broken vintage camera flash with a Raspberry Pi Zero W to produce a video-capturing action cam.

Raspberry Pi Zero Flash Cam Video Test

Full story of this project: https://www.hackster.io/alankrantas/raspberry-pi-zero-flash-cam-359875

By hacking a somewhat gnarly hole into the body of the broken flash unit, Alan fit in the Raspberry Pi Zero W and Camera Module, along with a few other components. He powers the whole unit via a USB power bank.

At every touch of the onboard touchpad, the retrofit camera films 12 seconds of footage and saves it as an MP4 file on the onboard SD card or an optional USB flash drive.

While the project didn’t technically bring the flash unit back to life — as the flash function is still broken — it’s a nice example of upcycling old tech, and it looks pretty sweet. Plus, you can attach it to your existing film camera to produce some cool side-by-side comparison imagery, as seen in the setup above.

For a full breakdown of the build, including the code needed to run the camera, check out the project’s Hackster.io page.

The post Retrofit a vintage camera flash with a Raspberry Pi Camera Module appeared first on Raspberry Pi.

Record the last seven seconds of everything you see

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/record-the-last-seven-seconds-of-everything-you-see/

Have you ever witnessed something marvellous, only to find that by the time you got your camera out to record it, the moment had passed? Johan’s Film in the Past hat-mounted camera is here to save the day!

Record the past

As 18-year-old student Johan explains, “Imagine you are walking in the street and you see a meteorite in the sky – obviously you don’t have time to take your phone to film it.” While I haven’t seen many meteorites in the sky, I have found myself wishing I’d had a camera to hand more than once in my life – usually when a friend trips over or says something ridiculous. “Fortunately after the passage of the meteorite, you just have to press a button on the hat and the camera will record the last 7 seconds”, Johan continues. “Then you can download the video from an application on your phone.”

Johan’s project, Film in the Past, consists of a Raspberry Pi 3 with USB camera attached, mounted to the peak of a baseball cap.

The camera is always on, and, at the press of a button, will save the last seven seconds of footage to the Raspberry Pi. You can then access the saved footage from an application on your smartphone. It’s a bit like the video capture function on the Xbox One or, as I like to call it, the option to record hilarious glitches during gameplay. But, unlike the Xbox One, it’s a lot easier to get the footage off the Raspberry Pi and onto your phone.
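
Johan’s build uses a USB camera, but it’s worth noting that the official picamera library supports exactly this last-N-seconds trick with the Camera Module via an in-memory circular stream. A minimal sketch, assuming a pushbutton wired to GPIO2:

    import picamera
    from gpiozero import Button

    button = Button(2)                 # assumes a button wired to GPIO2
    camera = picamera.PiCamera()
    stream = picamera.PiCameraCircularIO(camera, seconds=7)
    camera.start_recording(stream, format='h264')
    try:
        while True:
            camera.wait_recording(1)   # keep encoding; raises on errors
            if button.is_pressed:
                # Dump the buffered last seven seconds to disk.
                stream.copy_to('moment.h264', seconds=7)
    finally:
        camera.stop_recording()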

Fancy building your own? The full Python code for the project can be downloaded via GitHub, and more information can be found on Instructables and Johan’s website.

The post Record the last seven seconds of everything you see appeared first on Raspberry Pi.

Instaframe: image recognition meets Instagram

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/instaframe-image-recognition-meets-instagram/

Bringing the digital photo frame into an even more modern age than the modern age it already resides in, Sean Tracey uses image recognition and social media to update his mother on the day-to-day happenings of her grandkids.

Sharing social media content

“Like every grandmother, my mum dotes on her grandchildren (the daughter and son of my sister, Grace and Freddie),” Sean explains in his tutorial for the project, “but they don’t live nearby, so she doesn’t get to see them as much as she might like.”

Sean tells of his mother’s lack of interest in social media platforms (they’re too complex), and of the anxiety he feels whenever she picks up his phone to catch up on the latest images of Grace and Freddie.

So I thought: “I know! Why don’t I make my mum a picture frame that filters my Instagram feed to show only pictures of my niece and nephew!”

Genius!

Image recognition and Instagram

Sean’s Instaframe project uses a Watson Visual Recognition model to recognise photos of his niece and nephew posted to his Instagram account, all via a Chrome extension. Then, via a series of smaller functions, these images are saved to a folder and displayed on a screen connected to a Raspberry Pi 3B+.

Sean has written up a full rundown of the build process on his website.

Photos and Pi

Do you like photos and Raspberry Pi? Then check out these other photo-focused Pi projects that we’re sure you’ll love (because they’re awesome) and will want to make yourself (because they’re awesome).

FlipFrame

FlipFrame, the rotating picture frame, rotates according to the orientation of the image on display.

Upstagram

This tiny homage to the house from Up! takes bird’s-eye view photographs of Paris and uploads them to Instagram as it goes.

Pi-powered DSLR shutter

Adrian Bevan hacked his Raspberry Pi to act as a motion-activated shutter remote for his digital SLR — aka NatureBytes on steroids.

The post Instaframe: image recognition meets Instagram appeared first on Raspberry Pi.

Stereoscopic photography with StereoPi and a Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/stereoscopic-photography-stereopi-raspberry-pi/

StereoPi allows users to attach two Camera Modules to their Raspberry Pi Compute Module — it’s a great tool for building stereoscopic cameras, 360º monitors, and virtual reality rigs.

StereoPi draft 1

My love for stereoscopic photography goes way back

My great-uncle Eric was a keen stereoscopic photographer and member of The Stereoscopic Society. Every memory I have of visiting him includes looking at his latest stereo creations through a pair of gorgeously antique-looking wooden viewers. And I’ve since inherited the beautiful mahogany viewing cabinet that used to stand in his dining room.

It looks like this, but fancier

Stereoscopic photography has always fascinated me. Two images that seem identical suddenly become, as if by magic, a three-dimensional wonder. As a child, I couldn’t make sense of it. And even now, while I do understand how it actually works, it remains magical in my mind — like fairies at the bottom of the garden. Or magnets.

So it’s no wonder that I was instantly taken with StereoPi when I stumbled across its crowdfunding campaign on Twitter. Having wanted to make a Pi-based stereoscopic camera ever since I joined the organisation, but not knowing how best to go about it, I thought this new board seemed ideal for me.

The StereoPi board

Despite its name, StereoPi is more than just a stereoscopic camera board. People frequently ask us how to attach two Camera Modules to a Raspberry Pi, for all sorts of projects: from home security systems to robots, cameras, and VR.

Slim and standard editions of the StereoPi

The board attaches to any version of the Raspberry Pi Compute Module, including the newly released CM3+, and under Raspbian you can control the attached cameras via the Python module picamera.
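
On a Compute Module with two sensors attached, picamera can address each camera by number. A minimal sketch (which camera is ‘left’ depends on your wiring):

    from picamera import PiCamera

    # camera_num selects between the two sensors attached to the
    # Compute Module.
    left = PiCamera(camera_num=0)
    right = PiCamera(camera_num=1)
    left.capture('left.jpg')
    right.capture('right.jpg')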

StereoPi stereoscopic livestream over 4G. Project site: http://StereoPi.com

When it comes to what you can do with StereoPi, the possibilities are almost endless: mount two wide-angle lenses for 360º recording, build a VR rig to test out virtual reality games, or, as I plan to do, build a stereoscopic camera!

It’s on Crowd Supply now!

StereoPi is currently available to back on Crowd Supply, and purchase options start from $69. At 69% funded with 30 days still to go, we have faith that the StereoPi project will reach its goal and make its way into the world of impressive Raspberry Pi add-ons.

The post Stereoscopic photography with StereoPi and a Raspberry Pi appeared first on Raspberry Pi.

Bike dashcam from RaspiTV

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/bike-dashcam-raspitv/

It’s that time of year again: Pi Towers is locking its doors as we all scoot off into the night to spend some time with our families. There will be a special post on Christmas Day for people who have been given a new Raspberry Pi and need some pointers for getting started. Normal service will resume when we’ve dealt with our New Year headaches: until then, have a wonderful Christmas holiday!

Our good friend Alex Eames has been live-blogging a new project over the last week or so, and has just wrapped up. (Seasonal pun. Not sorry.) He’s recently been bitten by the cycling bug.

I’ve ridden about 1100 miles in the last 6 months and have learned enough to bore you to death with talk of heart zones and various items of clothing you can buy to make winter rides more bearable.

Here is Darth Alex demonstrating fashion-forward winter 2018 cycling wear.

Moving swiftly on.

Alex has been working on a dashcam for his bike, mostly intended for use as a rear-view “mirror”, but also to work as an evidence-collecting camera in case of any accidents.

This is really one of the most interesting and enjoyable project write-ups we’ve come across in a while: working on this dashcam as a daily live blog means that Alex has been able to take us down all the rabbit holes he investigated, explain changes of direction and dead ends, and show us exactly how the design and engineering process came together. And this, being an Alex project, has great attention to detail; he made custom mounts for his bike to keep everything as unobtrusive as possible, so it looks great as well.

There’s a ton of detail on hardware (which went through several iterations before Alex settled on something he was happy with), software, implementation, unexpected hiccups, and more. And if you’re someone who would rather skip to the end, here’s Alex’s road test.

Raspberry Pi Bike Dashcam Rearview Mirror Road Test – no audio

First and second road tests of my Raspberry Pi Rearview mirror/Dashcam bike project as blogged here https://raspi.tv/2018/making-a-fairly-simple-bike-dashcam-live-project-blog

I really hope we’ll see more write-ups like this one in 2019. We don’t get to read as much about other project makers’ process as we’d like to; it’s really fascinating to get a glimpse into the way someone else thinks about and approaches a problem.

The post Bike dashcam from RaspiTV appeared first on Raspberry Pi.

Rescuing old cine film with Raspberry Pi Zero

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/rescuing-old-cine-film-raspberry-pi-zero/

When Electrical Engineer Alan Platt was given the task of converting old cine film to digital footage for his father-in-law’s 70th birthday, his first instinct was to look online.

“There are plenty of companies happy to convert old films”, he explains, “but they are all extremely expensive. In addition, you have to send your original films away by post, and there’s no way to guarantee that they’ll be safe in transit.”

Alan was given a box of Super 8 films covering 15 years of family holidays and memories. A huge responsibility, and an enormous challenge. Not content to let someone else do the hard work, Alan decided to convert the films himself — and learn how to program a Raspberry Pi at the same time.

Alan’s cine film digitising machine

The best-laid plans

Alan’s initial plan involved using his father-in-law’s cine projector as the base for the conversion process, but this soon proved impossible. There was no space in the projector to house both the film-playing mechanism, and the camera for the digitisation process. Further attempts to use the projector came to an end when, on powering it up for the first time, the 50-year-old machine produced a loud bang and a large cloud of smoke.

Undeterred, Alan examined the bust projector’s mechanism and decided to build his own. This began with a large eBay order: 3D-printed components from Germany, custom-shaped PTFE sheets from the UK, and optical lenses from China. For the skeleton of the machine, Alan’s box of Technic LEGO was dusted off and unpacked; an old TV was dug out of storage to interface with the Raspberry Pi Zero.

Experimentation: Technic LEGO, clamps, and Blu Tack hold the equipment together

The build commenced with several weeks of trial and error using scraps of cine film, a Camera Module, and a motor. With the Raspberry Pi Zero, Alan controlled the motion of the film through the machine, and took photos of each frame.

“At one point, setting the tension on the film required a helper to stand next to me, holding a sledgehammer connected to the pick-up reel. Moving the sledgehammer up or down varied the tension, and allowed me to work out what power of motor I would need to make the film run smoothly.”

He refined the hardware and software until the machine could produce reliable, focused, and stable images.

A slow process

Over a period of two months, the finished machine was used to convert all the cine films. The process involves loading a reel onto a Technic LEGO arm, feeding the film through the mechanism with tweezers, and winding the first section on to the pick-up reel. The Raspberry Pi controls a stepper motor and the Camera Module, advancing the film frame by frame and taking individual photos of each film cell. The film is backlit through a sheet of translucent PTFE serving as a diffuser; the Camera Module is focused by moving it up and down on its aluminium mounting.
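
The heart of such a machine is a simple advance-settle-capture loop. Here is a minimal sketch of the idea (not Alan’s actual code; the GPIO pin and step count are placeholders for whatever your stepper driver needs):

    import time
    import RPi.GPIO as GPIO
    from picamera import PiCamera

    STEP_PIN = 18            # placeholder pin wired to the stepper driver
    STEPS_PER_FRAME = 40     # placeholder: steps to advance one film cell

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(STEP_PIN, GPIO.OUT)
    camera = PiCamera()

    def advance_one_frame():
        for _ in range(STEPS_PER_FRAME):
            GPIO.output(STEP_PIN, GPIO.HIGH)
            time.sleep(0.002)
            GPIO.output(STEP_PIN, GPIO.LOW)
            time.sleep(0.002)

    for frame in range(3600):       # one small reel's worth of frames
        advance_one_frame()
        time.sleep(0.2)             # let the film settle before shooting
        camera.capture('frame_%05d.jpg' % frame)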

Alan taught himself to program in Python while working on this project

Finally, Alan used Avidemux, a free video-editing program, to stitch all the images together into an MP4 digital film.

The verdict

“I’m incredibly proud of this machine”, Alan says. “It has taken more than a quarter of a million photos, digitised hundreds of metres of film — and taught me to program in Python. It demonstrates you don’t need to be an expert software engineer to make something really cool!”

And Alan’s father-in-law?

“He was thrilled! Being able to watch the films on his TV without having to set up the projector was fantastic. It was a great present!”

Here, exclusively for the Raspberry Pi blog, we present the first moments of footage to be digitised using Alan’s machine.

Gripping footage, filmed at Windsor Safari Park in 1983

Digital footage

Have you used a Raspberry Pi to digitise family memories? Do you have a box of Super 8 films in the attic, waiting for a machine like Alan’s?

Tell us about it in the comments!

Thanks again, Rachel

The post Rescuing old cine film with Raspberry Pi Zero appeared first on Raspberry Pi.

Raspberry Pi as car computer

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/raspberry-pi-as-car-computer/

Carputers! Fabrice Aneche is documenting his ongoing build, which equips an older (2011) car with some of the features a 2018 model might have: so far, a reversing camera (bought off the shelf, with a modified GUI built with Qt and Golang to show the date alongside the camera’s output), plus GPS and offline route guidance.

We’re not sure how the car got through that little door there.

It was back in 2013, when the Raspberry Pi had been on the market for about a year, that we started to see carputer projects emerge. They tended to be focussed in two directions: in-car entertainment, and on-board diagnostics (OBD). We ended up hiring the wonderful Martin O’Hanlon, who wrote up the first OBD project we came across, just this year. Being featured on this blog can change your life, I tell you.

In the last five years, the Pi’s evolved: you’re now working with a lot more processing power, there’s onboard WiFi, and far more peripherals which can be useful in a…vehicular context are available. Consequently, the flavour of the car projects we’re seeing has changed somewhat, with navigation systems and cameras much more visible. Fabrice’s is one of the best examples we’ve found.

Night-view navigation system

GPS is all very well, but you, the human person driver, will want directions at every turn. So Fabrice wrote a user interface to serve up live maps and directions, mostly in Qt5 and QML (he’s got some interesting discussion on his website about why he stopped using X11, which turned out to be too slow for his needs). All the non-QML work is done in Go. It’s all open-source, and on GitHub, if you’d like to contribute or roll your own project. He’s also worked over the Linux GPS daemons, found them lacking, and has produced his own:

…the Linux gps daemons are using obscure and over complicated protocols so I’ve decided to write my own gps daemon in Go using a gRPC stream interface. You can find it here.

I’m also not satisfied with the map matching of OSRM for real time display, I may rewrite one using mbmatch.

We’ll be keeping an eye on this project; given how much clever has gone into it already, we’re pretty sure that Fabrice will be adding new features. Thanks Fabrice!

The post Raspberry Pi as car computer appeared first on Raspberry Pi.

Randomly generated, thermal-printed comics

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/random-comic-strip-generation-vomit-comic-robot/

Python code creates curious, wordless comic strips at random, spewing them from the thermal printer mouth of a laser-cut body reminiscent of Disney Pixar’s WALL-E: meet the Vomit Comic Robot!

The age of the thermal printer!

Thermal printers allow you to instantly print photos, data, and text using a few lines of code, with no need for ink. More and more makers are using this handy, low-maintenance bit of kit for truly creative projects, from Pierre Muth’s tiny PolaPi-Zero camera to the sound-printing Waves project by Eunice Lee, Matthew Zhang, and Bomani McClendon (and our own Secret Santa Babbage).

Vomiting robots

Interaction designer and developer Cadin Batrack, whose background is in game design and interactivity, has built the Vomit Comic Robot, which creates “one-of-a-kind comics on demand by processing hand-drawn images through a custom software algorithm.”

The robot is made up of a Raspberry Pi 3, a USB thermal printer, and a handful of LEDs.

At the press of a button, Processing code selects one of a set of Cadin’s hand-drawn empty comic grids and then randomly picks images from a library to fill in the gaps.

Each image is associated with data that allows the code to fit it correctly into the available panels. Cadin says about the concept behind his build:

Although images are selected and placed randomly, the comic panel format suggests relationships between elements. Our minds create a story where there is none in an attempt to explain visuals created by a non-intelligent machine.

The Raspberry Pi saves the final image as a high-resolution PNG file (so that Cadin can sell prints on thick paper via Etsy), and a Python script sends it to be vomited up by the thermal printer.
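
If you’re curious what that final step might look like, the python-escpos library can drive many USB thermal printers. A minimal sketch, not Cadin’s actual script, with placeholder USB vendor and product IDs that you’d replace with your printer’s own:

    from escpos.printer import Usb

    printer = Usb(0x0416, 0x5011)    # placeholder USB vendor/product IDs
    printer.image('comic.png')       # rasterise and print the saved comic
    printer.cut()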

For more about the Vomit Comic Robot, check out Cadin’s blog. If you want to recreate it, you can find the info you need in the Imgur album he has put together.

We ❤ cute robots

We have a soft spot for cute robots here at Pi Towers, and of course we make no exception for the Vomit Comic Robot. If, like us, you’re a fan of adorable bots, check out Mira, the tiny interactive robot by Alonso Martinez, and Peeqo, the GIF bot by Abhishek Singh.

The post Randomly generated, thermal-printed comics appeared first on Raspberry Pi.

Recording lost seconds with the Augenblick blink camera

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/augenblick-camera/

Warning: a GIF used in today’s blog contains flashing images.

Students at the University of Bremen, Germany, have built a wearable camera that records the seconds of vision lost when you blink. Augenblick uses a Raspberry Pi Zero and Camera Module alongside muscle sensors to record footage whenever you close your eyes, producing a rather disjointed film of the sights you miss out on.

Blink and you’ll miss it

The average person blinks up to five times a minute, with each blink lasting 0.5 to 0.8 seconds. These half-seconds add up to about 30 minutes a day. What sights are we losing during these minutes? That is the question asked by students Manasse Pinsuwan and René Henrich when they set out to design Augenblick.

Blinking is a highly invasive mechanism for our eyesight. Every day we close our eyes thousands of times without noticing it. Our mind manages to never let us wonder what exactly happens in the moments that we miss.

Capturing lost moments

For Augenblick, the wearer sticks MyoWare Muscle Sensor pads to their face, and these detect the electrical impulses that trigger blinking.

Two pads are applied over the orbicularis oculi muscle that forms a ring around the eye socket, while the third pad is attached to the cheek as a neutral point.

Biology fact: there are two muscles responsible for blinking. The orbicularis oculi muscle closes the eye, while the levator palpebrae superioris muscle opens it — and yes, they both sound like the names of Harry Potter spells.

The sensor is read 25 times a second. Whenever it detects that the orbicularis oculi is active, the Camera Module records video footage.
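
Since the Raspberry Pi has no analogue inputs, a build like this needs an ADC between the muscle sensor and the Pi. Here is a minimal sketch of the 25 Hz polling loop, assuming a MyoWare sensor read through an MCP3008 ADC via gpiozero; the threshold value is a placeholder you would tune to the wearer:

    import time
    from gpiozero import MCP3008
    from picamera import PiCamera

    muscle = MCP3008(channel=0)      # MyoWare output via an MCP3008 ADC
    camera = PiCamera()
    THRESHOLD = 0.5                  # placeholder: tune to your wearer
    recording = False

    while True:
        active = muscle.value > THRESHOLD
        if active and not recording:
            camera.start_recording('blink_%d.h264' % int(time.time()))
            recording = True
        elif not active and recording:
            camera.stop_recording()
            recording = False
        time.sleep(1 / 25)           # read the sensor 25 times a second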

Pressing a button on the side of the Augenblick glasses sets the code running. An LED lights up whenever the camera is recording and also serves to confirm the correct placement of the sensor pads.

The Pi Zero saves the footage so that it can be stitched together later to form a continuous, if disjointed, film.

Learn more about the Augenblick blink camera

You can find more information on the conception, design, and build process of Augenblick here in German, with a shorter explanation including lots of photos here in English.

And if you’re keen to recreate this project, our free project resource for a wearable Pi Zero time-lapse camera will come in handy as a starting point.

The post Recording lost seconds with the Augenblick blink camera appeared first on Raspberry Pi.

Working with the Scout Association on digital skills for life

Post Syndicated from Philip Colligan original https://www.raspberrypi.org/blog/working-with-scout-association-digital-skills-for-life/

Today we’re launching a new partnership between the Scouts and the Raspberry Pi Foundation that will help tens of thousands of young people learn crucial digital skills for life. In this blog post, I want to explain what we’ve got planned, why it matters, and how you can get involved.

This is personal

First, let me tell you why this partnership matters to me. As a child growing up in North Wales in the 1980s, Scouting changed my life. My time with 2nd Rhyl provided me with countless opportunities to grow and develop new skills. It taught me about teamwork and community in ways that continue to shape my decisions today.

As my own kids (now seven and ten) have joined Scouting, I’ve seen the same opportunities opening up for them, and like so many parents, I’ve come back to the movement as a volunteer to support their local section. So this is deeply personal for me, and the same is true for many of my colleagues at the Raspberry Pi Foundation who in different ways have been part of the Scouting movement.

That shouldn’t come as a surprise. Scouting and Raspberry Pi share many of the same values. We are both community-led movements that aim to help young people develop the skills they need for life. We are both powered by an amazing army of volunteers who give their time to support that mission. We both care about inclusiveness, and pride ourselves on combining fun with learning by doing.

Raspberry Pi

Raspberry Pi started life in 2008 as a response to the problem that too many young people were growing up without the skills to create with technology. Our goal is that everyone should be able to harness the power of computing and digital technologies, for work, to solve problems that matter to them, and to express themselves creatively.

In 2012 we launched our first product, the world’s first $35 computer. Just six years on, we have sold over 20 million Raspberry Pi computers and helped kickstart a global movement for digital skills.

The Raspberry Pi Foundation now runs the world’s largest network of volunteer-led computing clubs (Code Clubs and CoderDojos), and creates free educational resources that are used by millions of young people all over the world to learn how to create with digital technologies. And lots of what we are able to achieve is because of partnerships with fantastic organisations that share our goals. For example, through our partnership with the European Space Agency, thousands of young people have written code that has run on two Raspberry Pi computers that Tim Peake took to the International Space Station as part of his Mission Principia.

Digital makers

Today we’re launching the new Digital Maker Staged Activity Badge to help tens of thousands of young people learn how to create with technology through Scouting. Over the past few months, we’ve been working with the Scouts all over the UK to develop and test the new badge requirements, along with guidance, project ideas, and resources that really make them work for Scouting. We know that we need to get two things right: relevance and accessibility.

Relevance is all about making sure that the activities and resources we provide are a really good fit for Scouting and Scouting’s mission to equip young people with skills for life. From the digital compass to nature cameras and the reinvented wide game, we’ve had a lot of fun thinking about ways we can bring to life the crucial role that digital technologies can play in the outdoors and adventure.

Compass Coding with Raspberry Pi

We are beyond excited to be launching a new partnership with the Raspberry Pi Foundation, which will help tens of thousands of young people learn digital skills for life.

We also know that there are great opportunities for Scouts to use digital technologies to solve social problems in their communities, reflecting the movement’s commitment to social action. Today we’re launching the first set of project ideas and resources, with many more to follow over the coming weeks and months.

Accessibility is about providing every Scout leader with the confidence, support, and kit to enable them to offer the Digital Maker Staged Activity Badge to their young people. A lot of work and care has gone into designing activities that require very little equipment: for example, activities at Stages 1 and 2 can be completed with a laptop without access to the internet. For the activities that do require kit, we will be working with Scout Stores and districts to make low-cost kit available to buy or loan.

We’re producing accessible instructions, worksheets, and videos to help leaders run sessions with confidence, and we’ll also be planning training for leaders. We will work with our network of Code Clubs and CoderDojos to connect them with local sections to organise joint activities, bringing both kit and expertise along with them.

Get involved

Today’s launch is just the start. We’ll be developing our partnership over the next few years, and we can’t wait for you to join us in getting more young people making things with technology.

Take a look at the brand-new Raspberry Pi resources designed especially for Scouts, to get young people making and creating right away.

The post Working with the Scout Association on digital skills for life appeared first on Raspberry Pi.

Naturebytes’ weatherproof Pi and camera case

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/naturebytes-weatherproof-pi-and-camera-case/

Naturebytes are making their weatherproof Wildlife Cam Case available as a standalone product for the first time, a welcome addition to the Raspberry Pi ecosystem that should take some of the hassle out of your outdoor builds.

A robin on a bird feeder in a garden with a Naturebytes Wildlife Cam mounted beside it

Weatherproofing digital making projects

People often use Raspberry Pis and Camera Modules for outdoor projects, but weatherproofing your set-up can be tricky. You need to keep water — and tiny creatures — out, but you might well need access for wires and cables, whether for power or sensors; if you’re using a camera, it’ll need something clear and cleanable in front of the lens. You can use sealant, but if you need to adjust anything that you’ve applied it to, you’ll have to remove it and redo it. While we’ve seen a few reasonable options available to buy, the choice has never been what you’d call extensive.

The Naturebytes case

For all these reasons, I was pleased to learn that Naturebytes, the wildlife camera people, are releasing their Wildlife Cam Case as a standalone product for the first time.

Naturebytes case open

The Wildlife Cam Case is ideal for nature camera projects, of course, but it’ll also be useful for anyone who wants to take their Pi outdoors. It has weatherproof lenses that are transparent to visible and IR light, for all your nature observation projects. Its opening is hinged to allow easy access to your hardware, and the case has waterproof access for cables. Inside, there’s a mount for fixing any model of Raspberry Pi and camera, as well as many other components. On top of all that, the case comes with a sturdy nylon strap to make it easy to attach it to a post or a tree.

Naturebytes case additional components

Order yours now!

At the moment, Naturebytes are producing a limited run of the cases. The first batch of 50 are due to be dispatched next week to arrive just in time for the Bank Holiday weekend in the UK, so get them while they’re hot. It’s the perfect thing for recording a timelapse of exactly how quickly the slugs obliterate your vegetable seedlings, and of lots more heartening things that must surely happen in gardens other than mine.

The post Naturebytes’ weatherproof Pi and camera case appeared first on Raspberry Pi.

AWS IoT 1-Click – Use Simple Devices to Trigger Lambda Functions

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-iot-1-click-use-simple-devices-to-trigger-lambda-functions/

We announced a preview of AWS IoT 1-Click at AWS re:Invent 2017 and have been refining it ever since, focusing on simplicity and a clean out-of-box experience. Designed to make IoT available and accessible to a broad audience, AWS IoT 1-Click is now generally available, along with new IoT buttons from AWS and AT&T.

I sat down with the dev team a month or two ago to learn about the service so that I could start thinking about my blog post. During the meeting they gave me a pair of IoT buttons and I started to think about some creative ways to put them to use. Here are a few that I came up with:

Help Request – Earlier this month I spent a very pleasant weekend at the HackTillDawn hackathon in Los Angeles. As the participants were hacking away, they occasionally had questions about AWS, machine learning, Amazon SageMaker, and AWS DeepLens. While we had plenty of AWS Solution Architects on hand (decked out in fashionable & distinctive AWS shirts for easy identification), I imagined an IoT button for each team. Pressing the button would alert the SA crew via SMS and direct them to the proper table.

Camera Control – Tim Bray and I were in the AWS video studio, prepping for the first episode of Tim’s series on AWS Messaging. Minutes before we opened the Twitch stream I realized that we did not have a clean, unobtrusive way to ask the camera operator to switch to a closeup view. Again, I imagined that a couple of IoT buttons would allow us to make the request.

Remote Dog Treat Dispenser – My dog barks every time a stranger opens the gate in front of our house. While it is great to have confirmation that my Ring doorbell is working, I would like to be able to press a button and dispense a treat so that Luna stops barking!

Homes, offices, factories, schools, vehicles, and health care facilities can all benefit from IoT buttons and other simple IoT devices, all managed using AWS IoT 1-Click.

All About AWS IoT 1-Click
As I said earlier, we have been focusing on simplicity and a clean out-of-box experience. Here’s what that means:

Architects can dream up applications for inexpensive, low-powered devices.

Developers don’t need to write any device-level code. They can make use of pre-built actions, which send email or SMS messages, or write their own custom actions using AWS Lambda functions.

Installers don’t have to install certificates or configure cloud endpoints on newly acquired devices, and don’t have to worry about firmware updates.

Administrators can monitor the overall status and health of each device, and can arrange to receive alerts when a device nears the end of its useful life and needs to be replaced, using a single interface that spans device types and manufacturers.

I’ll show you how easy this is in just a moment. But first, let’s talk about the current set of devices that are supported by AWS IoT 1-Click.

Who’s Got the Button?
We’re launching with support for two types of buttons (both pictured above). Both types of buttons are pre-configured with X.509 certificates, communicate to the cloud over secure connections, and are ready to use.

The AWS IoT Enterprise Button communicates via Wi-Fi. It has a 2000-click lifetime, encrypts outbound data using TLS, and can be configured using BLE and our mobile app. It retails for $19.99 (shipping and handling not included) and can be used in the United States, Europe, and Japan.

The AT&T LTE-M Button communicates via the LTE-M cellular network. It has a 1500-click lifetime, and also encrypts outbound data using TLS. The device and the bundled data plan are available at an introductory price of $29.99 (shipping and handling not included), and can be used in the United States.

We are very interested in working with device manufacturers in order to make even more shapes, sizes, and types of devices (badge readers, asset trackers, motion detectors, and industrial sensors, to name a few) available to our customers. Our team will be happy to tell you about our provisioning tools and our facility for pushing OTA (over the air) updates to large fleets of devices; you can contact them at [email protected].

AWS IoT 1-Click Concepts
I’m eager to show you how to use AWS IoT 1-Click and the buttons, but need to introduce a few concepts first.

Device – A button or other item that can send messages. Each device is uniquely identified by a serial number.

Placement Template – Describes a like-minded collection of devices to be deployed. Specifies the action to be performed and lists the names of custom attributes for each device.

Placement – A device that has been deployed. Referring to placements instead of devices gives you the freedom to replace and upgrade devices with minimal disruption. Each placement can include values for custom attributes such as a location (“Building 8, 3rd Floor, Room 1337”) or a purpose (“Coffee Request Button”).

Action – The AWS Lambda function to invoke when the button is pressed. You can write a function from scratch, or you can make use of a pair of predefined functions that send an email or an SMS message. The actions have access to the attributes; you can, for example, send an SMS message with the text “Urgent need for coffee in Building 8, 3rd Floor, Room 1337.”
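
To make the Action concept concrete, here is a rough sketch of a custom Lambda function for a deployment like the one above. The placementInfo and deviceEvent field names follow the event layout the 1-Click documentation describes, but verify them (and the attribute names) against a real test invocation before relying on them; the SNS call is one way to send the SMS yourself rather than using the pre-built action.

```python
# Hypothetical custom action for AWS IoT 1-Click: on each button press,
# build an SMS message from the placement attributes and send it.
import boto3

sns = boto3.client('sns')

def lambda_handler(event, context):
    # Placement attributes (Building, Floor, Room, phoneNumber, ...) arrive
    # with the event; check the exact shape against a real test event.
    attrs = event['placementInfo']['attributes']
    click = event['deviceEvent']['buttonClicked']['clickType']  # e.g. 'SINGLE'

    message = ('Urgent need for coffee in Building {Building}, '
               '{Floor} Floor, Room {Room}').format(**attrs)
    if click == 'DOUBLE':
        message += ' (double press: make it two!)'

    sns.publish(PhoneNumber=attrs['phoneNumber'], Message=message)
    return {'sent': message}
```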

Getting Started with AWS IoT 1-Click
Let’s set up an IoT button using the AWS IoT 1-Click Console:

If I didn’t have any buttons I could click Buy devices to get some. But, I do have some, so I click Claim devices to move ahead. I enter the device ID or claim code for my AT&T button and click Claim (I can enter multiple claim codes or device IDs if I want):

The AWS buttons can be claimed using the console or the mobile app; the first step is to use the mobile app to configure the button to use my Wi-Fi:

Then I scan the barcode on the box and click the button to complete the process of claiming the device. Both of my buttons are now visible in the console:

I am now ready to put them to use. I click on Projects, and then Create a project:

I name and describe my project, and click Next to proceed:

Now I define a device template, along with names and default values for the placement attributes. Here’s how I set up a device template (projects can contain several, but I just need one):

The action has two mandatory parameters (phone number and SMS message) built in; I add three more (Building, Room, and Floor) and click Create project:

I’m almost ready to ask for some coffee! The next step is to associate my buttons with this project by creating a placement for each one. I click Create placements to proceed. I name each placement, select the device to associate with it, and then enter values for the attributes that I established for the project. I can also add additional attributes that are peculiar to this placement:

I can inspect my project and see that everything looks good:

I click on the buttons and the SMS messages appear:

I can monitor device activity in the AWS IoT 1-Click Console:

And also in the Lambda Console:

The Lambda function itself is also accessible, and can be used as-is or customized:

As you can see, this is the code that lets me use {{*}} to include all of the placement attributes in the message, and {{Building}} (for example) to include a specific placement attribute.
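
The substitution scheme is easy to replicate if you write an action of your own. Here is my guess at how that {{...}} templating might be implemented; it is an illustrative sketch, not the actual source of the predefined function.

```python
# Expand {{Name}} to a single placement attribute and {{*}} to all of them.
# A guess at the templating logic, not the predefined function's source.
import re

def expand(template, attributes):
    everything = ', '.join('%s: %s' % kv for kv in sorted(attributes.items()))
    template = template.replace('{{*}}', everything)
    return re.sub(r'\{\{(\w+)\}\}',
                  lambda m: str(attributes.get(m.group(1), m.group(0))),
                  template)

attrs = {'Building': '8', 'Floor': '3rd', 'Room': '1337'}
print(expand('Urgent need for coffee in Building {{Building}}, Room {{Room}}', attrs))
# -> Urgent need for coffee in Building 8, Room 1337
```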

Now Available
I’ve barely scratched the surface of this cool new service and I encourage you to give it a try (or a click) yourself. Buy a button or two, build something cool, and let me know all about it!

Pricing is based on the number of enabled devices in your account, measured monthly and pro-rated for partial months. Devices can be enabled or disabled at any time. See the AWS IoT 1-Click Pricing page for more info.

To learn more, visit the AWS IoT 1-Click home page or read the AWS IoT 1-Click documentation.

Jeff;

Converting a Kodak Box Brownie into a digital camera

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/kodak-brownie-camera/

In this article from The MagPi issue 69, David Crookes explains how Daniel Berrangé took an old Kodak Brownie from the 1950s and turned it into a quirky digital camera. Get your copy of The MagPi magazine in stores now, or download it as a free PDF here.

Daniel Berrangé Kodak Brownie Raspberry Pi Camera

The Kodak Box Brownie

When Kodak unveiled its Box Brownie in 1900, it did so with the slogan ‘You press the button, we do the rest.’ The words referred to the ease-of-use of what was the world’s first mass-produced camera. But it could equally apply to Daniel Berrangé’s philosophy when modifying it for the 21st century. “I wanted to use the Box Brownie’s shutter button to trigger image capture, and make it simple to use,” he tells us.

Daniel’s project grew from a previous effort in which he placed a pinhole webcam inside a ladies’ powder compact case. “The Box Brownie project is essentially a repeat of that design but with a normal lens instead of a pinhole, a real camera case, and improved software to enable a shutter button. Ideally, it would look unchanged from when it was shooting film.”

Webcam woes

At first, Daniel looked for a cheap webcam, intending to spend no more than the price of a Pi Zero. This didn’t work out too well. “The low-light performance of the webcam was not sufficient to make a pinhole camera so I just decided to make a ‘normal’ digital camera instead,” he reveals.

To that end, he began removing some internal components from the Box Brownie. “With the original lens removed, the task was to position the webcam’s electronic light sensor (the CCD) and lens as close to the front of the camera as possible,” Daniel explains. “In the end, the CCD was about 15 mm away from the front aperture of the camera, giving a field of view that was approximately the same as the unmodified camera would achieve.”

It was then time for him to insert the Raspberry Pi, upon which was a custom ‘init’ binary that loads a couple of kernel modules to run the webcam, mounts the microSD file system, and launches the application binary. Here, Daniel found he was in luck. “I’d noticed that the size of a 620 film spool (63 mm) was effectively the same as the width of a Raspberry Pi Zero (65 mm), so it could be held in place between the film spool grips,” he recalls. “It was almost as if it was designed with this in mind.”
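
Replicating a setup like that is mostly a matter of deciding what PID 1 must do before handing over to the application. The sketch below is a Python stand-in for the kind of custom init Daniel describes; the module names, device node, and application path are all assumptions for illustration (Daniel’s real init is a compiled binary).

```python
#!/usr/bin/env python3
# Illustrative stand-in for a minimal custom init, not Daniel's binary.
# As PID 1 it loads webcam modules, mounts storage, then execs the app.
import os
import subprocess

def main():
    # Load the kernel modules a USB webcam typically needs
    for module in ('videodev', 'uvcvideo'):
        subprocess.run(['modprobe', module], check=False)

    # Mount a data partition on the microSD card for saving photos
    os.makedirs('/mnt/photos', exist_ok=True)
    subprocess.run(['mount', '/dev/mmcblk0p3', '/mnt/photos'], check=False)

    # Replace this process with the camera application; PID 1 must not exit
    os.execv('/usr/bin/camera-app', ['/usr/bin/camera-app'])

if __name__ == '__main__':
    main()
```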

Shutter success

In order to operate the camera, Daniel had to work on the shutter button. “The Box Brownie’s shutter button is entirely mechanical, driven by a handful of levers and springs,” Daniel explains. “First, the Pi Zero needs to know when the shutter button is pressed and second, the physical shutter has to be open while the webcam is capturing the image. Rather than try to synchronise image capture with the fraction of a second that the physical shutter is open, a bit of electrical tape was used on the shutter mechanism to keep it permanently open.”

Daniel made use of the Pi Zero’s GPIO pins to detect the pressing of the shutter button, by checking whether a pin reads 0 or 3.3 volts (the Pi’s GPIO pins operate at 3.3 V and are not 5 V-tolerant). “My thought was that I could set a GPIO pin high, and then use the action of the shutter button to short it to ground, and detect this change in level from software.”

This initially involved using a pair of bare wires and some conductive paint, although the paint was later replaced by a piece of tinfoil. But with the button pressed, the GPIO pin level goes to zero and the device constantly captures still images until the button is released. All that’s left to do is smile and take the perfect snap.
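
In Python, the equivalent logic takes only a few lines. The sketch below uses the Pi’s internal pull-up rather than driving the pin high, which is the more conventional way to wire a button, and uses the Pi camera library for brevity where Daniel used a USB webcam; the pin number and output path are assumptions. Treat it as a functional approximation of his approach, not a copy of his init binary.

```python
# Approximation of the Brownie's shutter logic, not Daniel's actual code.
# The button shorts an internally pulled-up GPIO pin to ground; while the
# pin reads low, we capture stills as fast as the camera allows.
import itertools
import time
import RPi.GPIO as GPIO
from picamera import PiCamera

SHUTTER_PIN = 17   # assumed wiring (BCM numbering)

GPIO.setmode(GPIO.BCM)
GPIO.setup(SHUTTER_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

camera = PiCamera()
frames = itertools.count()

try:
    while True:
        if GPIO.input(SHUTTER_PIN) == GPIO.LOW:    # button pressed
            camera.capture('/home/pi/brownie_%05d.jpg' % next(frames))
        else:
            time.sleep(0.01)   # idle politely between polls
finally:
    GPIO.cleanup()
```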

The post Converting a Kodak Box Brownie into a digital camera appeared first on Raspberry Pi.