Tag Archives: sensor

Apple’s FaceID

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/apples_faceid.html

This is a good interview with Apple’s SVP of Software Engineering about FaceID.

Honestly, I don’t know what to think. I am confident that Apple is not collecting a photo database, but not optimistic that it can’t be hacked with fake faces. I dislike the fact that the police can point the phone at someone and have it automatically unlock. So this is important:

I also quizzed Federighi about the exact way you “quick disabled” Face ID in tricky scenarios — like being stopped by police, or being asked by a thief to hand over your device.

“On older phones the sequence was to click 5 times [on the power button], but on newer phones like iPhone 8 and iPhone X, if you grip the side buttons on either side and hold them a little while — we’ll take you to the power down [screen]. But that also has the effect of disabling Face ID,” says Federighi. “So, if you were in a case where the thief was asking to hand over your phone — you can just reach into your pocket, squeeze it, and it will disable Face ID. It will do the same thing on iPhone 8 to disable Touch ID.”

That squeeze can be either volume button plus the power button. This, in my opinion, is an even better solution than the “5 clicks” method, because it’s less obtrusive. When you do this, the phone defaults back to your passcode.

More:

It’s worth noting a few additional details here:

  • If you haven’t used Face ID in 48 hours, or if you’ve just rebooted, it will ask for a passcode.
  • If there are five failed Face ID attempts, it will default back to the passcode. (Federighi has confirmed that this is what happened in the demo onstage when he was asked for a passcode — it tried to read the people setting the phones up on the podium.)
  • Developers do not have access to raw sensor data from the Face ID array. Instead, they’re given a depth map they can use for applications like the Snap face filters shown onstage. This can also be used in ARKit applications.
  • You’ll also get a passcode request if you haven’t unlocked the phone using a passcode or at all in 6.5 days and if Face ID hasn’t unlocked it in 4 hours.

Also be prepared for your phone to immediately lock every time your sleep/wake button is pressed or it goes to sleep on its own. This is just like Touch ID.

Federighi also noted on our call that Apple would be releasing a security white paper on Face ID closer to the release of the iPhone X. So if you’re a researcher or security wonk looking for more, he says it will have “extreme levels of detail” about the security of the system.

Here’s more about fooling it with fake faces:

Facial recognition has long been notoriously easy to defeat. In 2009, for instance, security researchers showed that they could fool face-based login systems for a variety of laptops with nothing more than a printed photo of the laptop’s owner held in front of its camera. In 2015, Popular Science writer Dan Moren beat an Alibaba facial recognition system just by using a video that included himself blinking.

Hacking FaceID, though, won’t be nearly that simple. The new iPhone uses an infrared system Apple calls TrueDepth to project a grid of 30,000 invisible light dots onto the user’s face. An infrared camera then captures the distortion of that grid as the user rotates his or her head to map the face’s 3-D shape — a trick similar to the kind now used to capture actors’ faces to morph them into animated and digitally enhanced characters.

It’ll be harder, but I have no doubt that it will be done.

More speculation.

I am not planning on enabling it just yet.

The Weather Station and the eclipse

Post Syndicated from Richard Hayler original https://www.raspberrypi.org/blog/weather-station-eclipse/

As everyone knows, one of the problems with the weather is that it can be difficult to predict a long time in advance. In the UK we’ve had stormy conditions for weeks but, of course, now that I’ve finished my lightning detector, everything has calmed down. If you’re planning to make scientific measurements of a particular phenomenon, patience is often required.

Oracle Weather Station

Wake STEM ECH get ready to safely observe the eclipse

In the path of the eclipse

Fortunately, this wasn’t a problem for Mr Burgess and his students at Wake STEM Early College High School in Raleigh, North Carolina, USA. They knew exactly when the event they were interested in studying was going to occur: they were going to use their Raspberry Pi Oracle Weather Station to monitor the progress of the 2017 solar eclipse.

Wake STEM EC HS on Twitter

Through the @Celestron telescope #Eclipse2017 @WCPSS via @stemburgess

Measuring the temperature drop

The Raspberry Pi Oracle Weather Stations are always active and recording data, so all the students needed to do was check that everything was connected and working. That left them free to enjoy the eclipse, and take some amazing pictures like the one above.

You can see from the data how the changes in temperature lag behind the solar events – this makes sense, as it takes a while for the air to cool down. When the sun starts to return, the temperature rise continues on its pre-eclipse trajectory.
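If you fancy exploring readings like these yourself, a few lines of Python will plot the temperature curve and mark the eclipse timings. This is a minimal sketch rather than the school’s own analysis: it assumes you’ve exported your readings to a CSV file with hypothetical time and air_temp columns, and the timings are approximate ones for Raleigh.

    # Plot logged temperatures and mark the eclipse timings, in the style of
    # the dashboard below. The file name and column names are assumptions.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("weather_log.csv", parse_dates=["time"])
    plt.plot(df["time"], df["air_temp"], color="steelblue")

    # Approximate timings for Raleigh, NC on 21st August 2017
    events = [("start", "2017-08-21 13:16", "gold"),
              ("maximum", "2017-08-21 14:44", "red"),
              ("end", "2017-08-21 16:06", "gold")]
    for label, when, colour in events:
        plt.axvline(pd.Timestamp(when), color=colour, label=label)

    plt.xlabel("Time")
    plt.ylabel("Air temperature (°C)")
    plt.legend()
    plt.show()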

Oracle Weather Station

Weather station data 21st Aug: the yellow bars mark the start and end of the eclipse, the red bar marks the maximum sun coverage.

Reading Mr Burgess’ description, I’m feeling rather jealous. Being in the path of the eclipse sounds amazing: “In North Carolina we experienced 93% coverage, so a lot of sunlight was still shining, but the landscape took on an eerie look. And there was a cool wind like you’d experience at dusk, not at 2:30 pm on a hot summer day. I was amazed at the significant drop in temperature that occurred in a small time frame.”

Temperature drop during Eclipse Oracle Weather Station.

Close up of data showing temperature drop as recorded by the Raspberry Pi Oracle Weather Station. The yellow bars mark the start and end of the eclipse, the red bar marks the maximum sun coverage.

Weather Station in the classroom

“I’ve been preparing for the solar eclipse for almost two years, with the weather station arriving early last school year. I did not think about temperature data until I read about citizen scientists on a NASA website,” explains Mr Burgess, who is now in his second year of working with the Raspberry Pi Oracle Weather Station. Around 120 ninth-grade students (ages 14-15) have been involved with the project so far. “I’ve found that students who don’t have a strong interest in meteorology find it interesting to look at real data and figure out trends.”

Wake STEM EC Raspberry Pi Oracle Weather Station installation

Wake STEM EC Raspberry Pi Oracle Weather Station installation

As many schools have discovered, Mr Burgess found that the biggest challenge with the Weather Station project “was finding a suitable place to install the weather station in a place that could get power and Ethernet”. To help with this problem, we’ve recently added two new guides to help with installing the wind sensors outside and using WiFi to connect the kit to the Internet.

Raspberry Pi Oracle Weather Station

If you want to keep up to date with all the latest Raspberry Pi Oracle Weather Station activities undertaken by our network of schools around the world, make sure you regularly check our weather station forum. Meanwhile, everyone at Wake STEM ECH is already starting to plan for their next eclipse on Monday, April 8, 2024. I wonder if they’d like some help with their Weather Station?

The post The Weather Station and the eclipse appeared first on Raspberry Pi.

The Pronunciation Training Machine

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pronunciation-training-machine/

Using a Raspberry Pi, an Arduino, an Adafruit NeoPixel Ring and a servomotor, Japanese makers HomeMadeGarbage produced this Pronunciation Training Machine to help their parents distinguish ‘L’s and ‘R’s when speaking English.

L R 発音矯正ギブス お母ちゃん編 Pronunciation training machine #right #light #raspberrypi #arduino #neopixel

23 Likes, 1 Comments – Home Made Garbage (@homemadegarbage) on Instagram: “L R 発音矯正ギブス お母ちゃん編 Pronunciation training machine #right #light #raspberrypi #arduino #neopixel”

How does the Pronunciation Training Machine work?

As you can see in the video above, the machine utilises the Google Cloud Speech API to recognise their parents’ pronunciation of the words ‘right’ and ‘light’. Correctly pronounce the former, and the servo-mounted arrow points to the right. Pronounce the latter and the NeoPixel Ring illuminates because, well, you just said “light”.

An image showing how the project works – English Pronunciation Training

You can find the full code for the project on its Hackster page here.

Variations on the idea

It’s a super-cute project with great potential, and the concept could easily be amended for other training purposes. How about using motion sensors to help someone learn their left from their right?

A photo of hands with left and right written on them - English Pronunciation Training

Wait…your left or my left?
image c/o tattly

Or use random.choice to switch on LEDs over certain images, and speech recognition to reward a correct answer? Light up a picture of a cat, for example, and when the player says “cat”, they receive a ‘purr’ or a treat?
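Here’s a minimal sketch of that flashcard idea, just to show the shape of it. It isn’t part of the original project: it assumes LEDs on GPIO 17 and 27 above pictures of a cat and a dog, and uses the third-party SpeechRecognition library (which needs a microphone and, for the Google recogniser, an internet connection).

    # A hypothetical flashcard game: light an LED over a random picture,
    # then listen for the player to name it. Pins and names are made up.
    import random
    import speech_recognition as sr
    from gpiozero import LED

    cards = {"cat": LED(17), "dog": LED(27)}   # one LED above each picture
    recognizer = sr.Recognizer()

    answer = random.choice(list(cards))        # pick a picture at random
    cards[answer].on()                         # light the LED above it

    with sr.Microphone() as source:
        print("Which picture is lit?")
        audio = recognizer.listen(source)

    try:
        guess = recognizer.recognize_google(audio).lower()
        print("Correct!" if answer in guess else f"Not quite - it was a {answer}.")
    except sr.UnknownValueError:
        print("Sorry, I didn't catch that.")

    cards[answer].off()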

A photo of a kitten - English Pronunciation Training

Obligatory kitten picture
image c/o somewhere on the internet!

Raspberry Pi-based educational aids do not have to be elaborate builds. They can use components as simple as a servo and an LED, and still have the potential to make great improvements in people’s day-to-day lives.

Your own projects

If you’ve created an educational tool using a Raspberry Pi, we’d love to see it. The Raspberry Pi itself is an educational tool, so you’re helping it to fulfil its destiny! Make sure you share your projects with us on social media, or pop a link in the comments below. We’d also love to see people using the Pronunciation Training Machine (or similar projects), so make sure you share those too!

A massive shout out to Artie at hackster.io for this heads-up, and for all the other Raspberry Pi projects he sends my way. What a star!

The post The Pronunciation Training Machine appeared first on Raspberry Pi.

timeShift(GrafanaBuzz, 1w) Issue 10

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/08/25/timeshiftgrafanabuzz-1w-issue-10/

This week, in addition to the articles we collected from around the web and a number of new Plugins and updates, we have a special announcement. GrafanaCon EU has been announced! Join us in Amsterdam March 1-2, 2018. The call for papers is officially open! We’ll keep you up to date as we fill in the details.


Grafana <3 Prometheus

Last week we mentioned that our colleague Carl Bergquist spoke at PromCon 2017 in Munich. His presentation slides are now available online, and we’ll post the video once it’s released.


From the Blogosphere

Grafana-based GUI for mgstat, a system monitoring tool for InterSystems Caché, Ensemble or HealthShare: This is the second article in a series about Making Prometheus Monitoring for InterSystems Caché. Mikhail goes into great detail about setting this up on Docker, configuring the first dashboard, and adding templating.

Installation and Integration of Grafana in Zabbix 3.x: Daniel put together an installation guide to get Grafana to display metrics from Zabbix, which utilizes the Zabbix Plugin developed by Grafana Labs Developer Alex Zobnin.

Visualize with RRDtool x Grafana: Atfujiwara wanted to update his MRTG graphs from RRDtool. This post talks about the components needed and how he connected RRDtool to Grafana.

Huawei OceanStor metrics in Grafana: Dennis is using Grafana to display metrics for his storage devices. In this post he walks you through the setup and provides a comprehensive dashboard for all the metrics.

Grafana on a Raspberry Pi2: Pete discusses how he uses Grafana with his garden sensors, and walks you through how to get it up and running on a Pi2.


Grafana Plugins

This week was pretty active on the plugin front. Today we’re announcing two brand new plugins and updates to three others. Installing plugins in Grafana is easy: if you have Hosted Grafana, simply use the one-click install; if you’re running an on-prem instance, you can use the grafana-cli tool.

NEW PLUGIN

IBM APM Data Source – This plugin collects metrics from the IBM APM (Application Performance Management) products and allows you to visualize it on Grafana dashboards. The plugin supports:

  • IBM Tivoli Monitoring 6.x
  • IBM SmartCloud Application Performance Management 7.x
  • IBM Performance Management 8.x (only on-premises version)

Install Now

NEW PLUGIN

Skydive Data Source – This data source plugin collects metrics from Skydive, an open source real-time network topology and protocols analyzer. Using the Skydive Gremlin query language, you can fetch metrics for flows in your network.

Install now

UPDATED PLUGIN

Datatable Panel – Lots of changes in the latest update to the Datatable Panel. Here are some highlights from the changelog:

  • NEW: Export options for Clipboard/CSV/PDF/Excel/Print
  • NEW: Column Aliasing – modify the name of a column as sent by the datasource
  • NEW: Added option for a cell or row to link to another page
  • NEW: Supports Clickable links inside table
  • BUGFIX: CSS files now load when Grafana has a subpath
  • NEW: Added multi-column sorting – sort by any number of columns ascending/descending
  • NEW: Column width hints – suggest a width for a named column
  • BUGFIX: Columns from datasources other than JSON can now be aliased

Update Now

UPDATED PLUGIN

D3 Gauge Panel – The D3 Gauge Panel has a new feature – Tick Mapping. Ticks on the gauge can now be mapped to text.

Update Now

UPDATED PLUGIN

PNP4Nagios Data Source – The most recent update to the PNP Data Source adds support for template variables in queries, as well as support for querying warning and critical thresholds.

Update Now


This week’s MVC (Most Valuable Contributor)

Each week we highlight a contributor to Grafana or the surrounding ecosystem as a thank you for their participation in making open source software great.

Brian Gann
Brian is the maintainer of two Grafana plugins, and this week he submitted substantial updates to both of them (the Datatable and D3 Gauge panel plugins), and he says there’s more to come! Thanks for all your hard work, Brian.


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

The Dark Knight popping up in graphs seems to be a recurring theme!
This is the graph Jakub deserves, but not the one he needs right now.



What do you think?

That’s it for the 10th issue of timeShift. Let us know how we’re doing! Submit a comment on this article below, or post something at our community forum. Help us make this better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Mod your Nerf gun with a Pi

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/mod-nerf-gun-pi/

Michael Darby, who blogs at 314reactor, has created a new Raspberry Pi build, and it’s pretty darn cool. Though it’s not the first Raspberry Pi-modded Nerf gun we’ve seen, it’s definitely one of the most complex!

Nerf Gun Ammo Counter / Range Finder – Raspberry Pi

An ammo counter and range finder made from a Raspberry Pi for a Nerf Gun.

Nerf guns

Nerf guns are toy dart guns that have been on the market since the early 1990s. They are popular with kids and adults who enjoy playing paintball, laser tag, and first-person shooter video games. Michael loves Nerf guns, and he wanted to give his toy a sci-fi overhaul, making it look and function more like a gun that an avatar might use in Half-Life, Quake, or Doom.

Modding a Nerf gun

A busy and creative member of the Raspberry Pi community, Michael has previously delighted us with his Windows 98 wristwatch. Now, he has upgraded his Nerf gun with a rangefinder and an ammo counter by adding a Pi, a Pimoroni Rainbow HAT, and some sensors.

Setting up a rangefinder was straightforward. Michael fixed an ultrasonic distance sensor pointing in the direction of the gun’s barrel. Live information about how far away he is from his target is shown on the Rainbow HAT’s alphanumeric display.

View of Michael Darby's nerf gun range finder
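If you fancy experimenting with the same idea, the heart of such a rangefinder fits in a dozen lines of Python. This is a hedged sketch rather than Michael’s actual code (his write-up has the real thing): it assumes an HC-SR04-style sensor wired to GPIO 23 and 24, and uses Pimoroni’s rainbowhat library for the display.

    # Show live ultrasonic distance readings on the Rainbow HAT's display.
    # A sketch of the idea, not Michael's code; the pins are assumptions.
    import time
    from gpiozero import DistanceSensor
    import rainbowhat as rh

    sensor = DistanceSensor(echo=24, trigger=23, max_distance=4)

    while True:
        metres = sensor.distance             # gpiozero reports metres
        rh.display.print_float(round(metres, 2))
        rh.display.show()
        time.sleep(0.2)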

To create an ammo counter, Michael had to follow a more circuitous route. Since he couldn’t think of a way to read out how many darts are in the Nerf gun’s magazine, he ended up counting how many darts have been shot instead. This data is collected via a proximity sensor, a device that can measure shorter distances than an ultrasonic sensor. Michael aimed the sensor towards the end of the barrel, attaching it with Blu-Tack.

View of Michael Darby's nerf gun proximity sensor

The number of shots left in the magazine is indicated by the seven LEDs above the Rainbow HAT’s alphanumeric display. The countdown works for more than seven darts, thanks to colour coding: the LEDs count down first in red, then in orange, and finally in green.

In a Python script running on the Pi, Michael has included a default number of shots per magazine. When he changes a magazine, he uses one of the HAT’s buttons as a ‘Reload’ button, resetting the counter. He has also set up the HAT so that the number of available shots can be entered manually instead.
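Pulling those pieces together, a simplified version of the counting logic might look like this. Again, it’s an illustration rather than Michael’s script: the proximity sensor is assumed to behave as a simple digital input on GPIO 18, and the magazine size is made up. The colour banks follow the red-then-orange-then-green countdown described above.

    # Count darts as they're fired and show shots left on the Rainbow HAT's
    # seven LEDs. An illustrative sketch, not Michael's actual script.
    from signal import pause
    from gpiozero import DigitalInputDevice
    import rainbowhat as rh

    MAG_SIZE = 18                          # assumed darts per magazine
    shots_left = MAG_SIZE
    dart_sensor = DigitalInputDevice(18)   # assumed proximity-sensor pin

    def show_count():
        """Light one LED per shot left, in banks of seven per colour."""
        rh.rainbow.clear()
        colours = [(0, 255, 0), (255, 128, 0), (255, 0, 0)]  # green, orange, red
        bank = min((shots_left - 1) // 7, 2) if shots_left else 0
        for i in range(min(shots_left - bank * 7, 7)):
            rh.rainbow.set_pixel(i, *colours[bank])
        rh.rainbow.show()

    def dart_fired():
        global shots_left
        shots_left = max(shots_left - 1, 0)
        show_count()

    @rh.touch.A.press()
    def reload(channel):
        global shots_left
        shots_left = MAG_SIZE              # the 'Reload' button resets the count
        show_count()

    dart_sensor.when_activated = dart_fired
    show_count()
    pause()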

Nerf gun modding tutorial

On Michael’s blog you will find a thorough step-by-step guide to how he created this build. He has also included his code, and links to all the components, software installation guides, and test scripts he has used. So head on over there if you’re keen to mod your own Nerf gun like this, and take a look at some of his other projects while you’re there!

Michael welcomes suggestions for how to improve upon his mods, especially for how to count shots in a magazine automatically. Do you have an idea? Let us (and him) know in the comments!

Toy mods

Over the years, we’ve covered quite a few fun toy upgrades, and some that may have to be approached with caution. The Pi-powered busy board for babies, the ‘weaponized’ teddy bear, and the inevitable smart Fisher Price phone are just a few from our archives.

What’s your favourite childhood toy, and how could it be improved by the addition of a Pi? Share your ideas with us in the comments below.

The post Mod your Nerf gun with a Pi appeared first on Raspberry Pi.

US Army Researching Bot Swarms

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/07/us_army_researc.html

The US Army Research Agency is funding research into autonomous bot swarms. From the announcement:

The objective of this CRA is to perform enabling basic and applied research to extend the reach, situational awareness, and operational effectiveness of large heterogeneous teams of intelligent systems and Soldiers against dynamic threats in complex and contested environments and provide technical and operational superiority through fast, intelligent, resilient and collaborative behaviors. To achieve this, ARL is requesting proposals that address three key Research Areas (RAs):

RA1: Distributed Intelligence: Establish the theoretical foundations of multi-faceted distributed networked intelligent systems combining autonomous agents, sensors, tactical super-computing, knowledge bases in the tactical cloud, and human experts to acquire and apply knowledge to affect and inform decisions of the collective team.

RA2: Heterogeneous Group Control: Develop theory and algorithms for control of large autonomous teams with varying levels of heterogeneity and modularity across sensing, computing, platforms, and degree of autonomy.

RA3: Adaptive and Resilient Behaviors: Develop theory and experimental methods for heterogeneous teams to carry out tasks under the dynamic and varying conditions in the physical world.

Slashdot thread.

And while we’re on the subject, this is an excellent report on AI and national security.

Teaching with Raspberry Pis and PiNet

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/teaching-pinet/

Education is our mission at the Raspberry Pi Foundation, so of course we love tools that help teachers and other educators use Raspberry Pis in a classroom setting. PiNet, which allows teachers to centrally manage a whole classroom’s worth of Pis, makes administrating a fleet of Pis easier. Set up individual student accounts, install updates and software, share files – PiNet helps you do all of this!

Caleb VinCross on Twitter

The new PiNet lab up and running. 30 raspberry pi 3’s running as fat clients for 600 + students. Much thanks to the PiNet team! @PiNetDev.

PiNet developer Andrew

PiNet was built and is maintained by Andrew Mulholland, who started work on this project when he was 15, and who is also one of the organisers of the Northern Ireland Raspberry Jam. Check out what he says about PiNet’s capabilities in his guest post here.

PiNet in class

PiNet running in a classroom

PiNet, teacher’s pet

PiNet has been available for about two years now, and the teachers using it are over the moon. Here’s what a few of them say about their experience:

We wanted a permanently set up classroom with 30+ Raspberry Pis to teach programming. Students wanted their work to be secure and backed up and we needed a way to keep the Pis up to date. PiNet has made both possible and the classroom now requires little or no maintenance. PiNet was set up in a single day and was so successful we set up a second Pi room. We now have 60 Raspberry Pis which are used by our students every day. – Rob Jones, Secondary School Teacher, United Kingdom

AKS Computing on Twitter

21xRaspPi+dedicated network+PiNet server+3 geeks = success! Ready to test with a full class.

I teach Computer Science at middle school, so I have 4 classes per day in my lab, sharing 20 Raspberry Pis. PiNet gives each student separate storage space. Any changes to the Raspbian image can be done from my dashboard. We use Scratch, Minecraft Pi, Sonic Pi, and do physical computing. And when I have had issues, or have wanted to try something a little crazy, the support has been fabulous. – Bob Irving, Middle School Teacher, USA

Wolf Math on Twitter

We’re starting our music unit with @deejaydoc. My CS students are going through the @Sonic_Pi tutorial on @PiNetDev.

I teach computer classes for about 600 students between the ages of 5 and 13. PiNet has really made it possible to expand our technology curriculum beyond the simple web-based applications that our Chromebooks were limited to. I’m now able to use Arduino boards to do basic physical computing with LEDs and sensors. None of this could have happened without PiNet making it easy to have an affordable, stable, and maintainable way of managing 30 Linux computers in our lab. – Caleb VinCross, Primary School Teacher, USA

More for educators

If you’re involved in teaching computing, be that as a professional or as a volunteer, check out the new free magazine Hello World, brought to you by Computing At School, BCS Academy of Computing, and Raspberry Pi working in partnership. It is written by educators for educators, and available in print and as a PDF download. And if you’d like to keep up to date with what we are offering to educators and learners, sign up for our education newsletter here.

Are you a teacher who uses Raspberry Pis in the classroom, or another kind of educator who has used them in a group setting? Tell us about your experience in the comments below.

The post Teaching with Raspberry Pis and PiNet appeared first on Raspberry Pi.

The Heart of Maker Faire

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/heart-maker-faire/

We at the Raspberry Pi Foundation find it incredibly rewarding to help people make and share things they love, and it’s amazing to be part of such a creative community of makers. And we’re not the only ones who feel this way: for this year’s Maker Faire UK, the team over at NUSTEM created the Heart of Maker Faire, a Pi-powered art installation that is a symbol of this unique community. And to be perfectly frank, it’s bloody gorgeous.

The Heart of Maker Faire

NUSTEM’s new installation for Maker Faire UK 2017, held on 1st & 2nd April at the Centre for Life, Newcastle-upon-Tyne. Visitors wrote notes about things they love, and sealed them in jars. They then read their heart rates, and used the control boxes to associate their jar and heart rate with a space on the shelves.

A heart for the community

NUSTEM is a STEM outreach organisation from Northumbria University, and the makers there are always keen to build interactive projects that get people excited about technology. So at this year’s Faire, attendees passing their installation were invited to write down something close to their heart, put that note in a jar, and measure their heart rate. Then they could connect their heart rate, via a QR code, to a space on a shelf lined with LEDs. Once they placed the jar in their space, the LEDs started blinking to imitate their heart beat. With this art piece, the NUSTEM team wants to say something about “how we’re all individuals, but about our similarities too”.

NUSTEM on Twitter

Still beating. Heart of #MakerFaireUK

Making the heart beat

This is no small build – it uses more than 2,000 NeoPixel LEDs, as well as five Raspberry Pis, among other components. Two Pi 3s are in charge of registering people’s contributions and keeping track of their jars. A Pi Zero W acts as a central hub, connecting its bigger siblings via WiFi, and storing a MySQL database of the jars’ data. Finally, two more Pi 3s control the LEDs of the Heart via a script written in Processing. The NUSTEM team has made the code available here for you “to laugh at” (their words, not mine!).
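To make that data flow concrete, here’s a rough Python sketch of what one of the LED-driving Pis might do: read each jar’s stored heart rate from the hub’s MySQL database and blink its NeoPixel at that rate. The real installation drives its LEDs from Processing, and its database schema isn’t published, so the host, table, and column names below are all assumptions.

    # Blink one NeoPixel per jar at that visitor's recorded heart rate.
    # A simplified sketch; database/table/column names are assumptions.
    import time
    import pymysql
    import board
    import neopixel

    pixels = neopixel.NeoPixel(board.D18, 60, auto_write=False)

    db = pymysql.connect(host="heart-hub.local", user="heart",
                         password="secret", database="jars")
    with db.cursor() as cur:
        cur.execute("SELECT shelf_position, bpm FROM jars")
        jars = cur.fetchall()

    while True:
        now = time.monotonic()
        for position, bpm in jars:
            period = 60.0 / bpm                  # seconds per beat
            on = (now % period) < period / 2     # crude square-wave heartbeat
            pixels[position] = (255, 0, 0) if on else (8, 0, 0)
        pixels.show()
        time.sleep(0.02)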

Heart of Maker Faire shelf

The heart, ready to be filled with love

A heart for art

Processing is an open-source programming language used to create images, graphs, and animations. It can respond to keyboard and mouse input, so you can write games with it as well. Moreover, it runs on the Pi, and you can use it to talk to the Pi’s GPIO pins, as the Heart of Maker Faire team did. Hook up buttons, sensors, and LEDs, and get ready to create amazing interactive pieces of art! If you’d like to learn more, read Matt’s blog post, or watch the talk he gave about Processing at our fifth birthday party earlier this year.

Matt Richardson: Art with Processing on the Raspberry Pi – Raspberry Pi Birthday Event 2017 – Talks

Matt Richardson: Art with Processing on the Raspberry Pi Sunday 5th March 2017 Raspberry Pi Birthday Event 2017 Filmed and edited by David and Andrew Ferguson. This video is not an official video published by the Raspberry Pi Foundation. No copyright infringement intended.

To help you get started, we’re providing a free learning resource introducing you to the basics of Processing. We’d love to see what you create, so do share a link to your masterworks in the comments!

World Maker Faire

We’ll be attending World Maker Faire in New York on the 23rd and 24th of September. Will you be there?

The post The Heart of Maker Faire appeared first on Raspberry Pi.

Ultrasonic pi-ano

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/ultrasonic-piano/

At the Raspberry Pi Foundation, we love a good music project. So of course we’re excited to welcome Andy Grove‘s ultrasonic piano to the collection! It is a thing of beauty… and noise. Don’t let the name fool you – this build can do so much more than sound like a piano.

Ultrasonic Pi Piano – Full Demo

The Ultrasonic Pi Piano uses HC-SR04 ultrasonic sensors for input and generates MIDI instructions that are played by fluidsynth. For more information: http://theotherandygrove.com/projects/ultrasonic-pi-piano/

What’s an ultrasonic piano?

What we have here, people of all genders, is really a theremin on steroids. The build’s eight ultrasonic distance sensors detect hand movements and, with the help of an octasonic breakout board, a Raspberry Pi 3 translates their signals into notes. But that’s not all: this digital instrument is almost endlessly customisable – you can set each sensor to a different octave, or to a different instrument.

octasonic breakout board

The breakout board designed by Andy

Andy has implemented gesture controls to allow you to switch between modes you have preset. In his video, you can see that holding your hands over the two sensors most distant from each other changes the instrument. Say you’re bored of the piano – try a xylophone! Not your jam? How about a harpsichord? Or a clarinet? In fact, there are 128 MIDI instruments and sound effects to choose from. Go nuts and compose a piece using tuba, ocarina, and the noise of a guitar fret!
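Andy’s scripts are written in Rust (as the next section mentions), but the core loop (read each sensor, map it to a note, hand the note to FluidSynth) is easy to sketch in Python with the pyFluidSynth bindings. In the sketch below, the SPI conversation with the Octasonic board is stubbed out with a placeholder, and the soundfont path is an assumption.

    # Map ultrasonic distance readings to MIDI notes through FluidSynth.
    # A Python sketch of the idea only: Andy's real code is written in Rust.
    import time
    import random
    import fluidsynth            # pyFluidSynth bindings

    fs = fluidsynth.Synth()
    fs.start(driver="alsa")
    sfid = fs.sfload("/usr/share/sounds/sf2/FluidR3_GM.sf2")  # assumed path
    fs.program_select(0, sfid, 0, 0)        # bank 0, preset 0: acoustic piano

    def read_distance_cm(channel):
        # Placeholder: replace with an SPI read from the Octasonic board.
        return random.randint(5, 50)

    SCALE = [60, 62, 64, 65, 67, 69, 71, 72]   # C major: one note per sensor
    playing = [False] * 8

    while True:
        for channel in range(8):
            hand_near = read_distance_cm(channel) < 30
            if hand_near and not playing[channel]:
                fs.noteon(0, SCALE[channel], 100)
            elif not hand_near and playing[channel]:
                fs.noteoff(0, SCALE[channel])
            playing[channel] = hand_near
        time.sleep(0.05)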

How to build the ultrasonic piano

If you head over to Instructables, you’ll find the thorough write-up Andy has provided. He has also made all his scripts, written in Rust, available on GitHub. Finally, he’s even added a video on how to make a housing, so your ultrasonic piano can look more like a proper instrument, and less like a pile of electronics.

Ultrasonic Pi Piano Enclosure

Uploaded by Andy Grove on 2017-04-13.

Make your own!

If you follow us on Twitter, you may have seen photos and footage of the Raspberry Pi staff attending a Pi Towers Picademy. Like Andy*, quite a few of us are massive Whovians. Consequently, one of our final builds on the course was an ultrasonic theremin that gave off a sound rather like a dying Dalek. Take a look at our masterwork here! We loved our make so much that we’ve since turned the instructions for building it into a free resource. Go ahead and build your own! And be sure to share your compositions with us in the comments.

Sonic the hedgehog is feeling the beat

Sonic is feeling the groove as well

* He has a full-sized Dalek at home. I know, right?

The post Ultrasonic pi-ano appeared first on Raspberry Pi.

A homebrew Pi kit for home brewing

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/homebrew-beer-brewing-pi/

While the rest of us are forced to leave the house to obtain a tasty brew, beer master Christopher Aedo has incorporated a Raspberry Pi into his home brewing system for the ultimate ‘sit-back-and-relax’ home brew.

homebrew home brew Raspberry Pi

KEG! KEG! KEG! KEG!

I drink and I know things

Having brewed his own beer for several years, Christopher was no novice in the pursuit of creating the perfect pint*. He was already brewing 10 gallons at a time when he decided to go all electric with a Raspberry Pi. Inspiration struck when he stumbled upon the StrangeBrew Elsinore Java server, and he went to work planning the best setup for the job:

Before I could talk myself out of the project, I decided to start buying parts. My basic design was a Hot Liquor Tank (HLT) and boil kettle with 5500W heating elements in them, plus a mash tun with a false bottom. I would use a pump to recirculate the mash through a 50 foot stainless coil in the HLT (a “heat exchanger recirculating mash system”, known as HERMS). I would need a second pump to circulate the water in the HLT, and to help with transferring water to the mash tun. All of the electrical components would be controlled with a Raspberry Pi.

Homebrew hardware setup

First, he set up the electrical side of his homebrew system using The Electric Brewing Company’s walkthrough, swapping out the 12V solid-state relays for ones that can be driven by the Pi’s 3.3V GPIO logic. Aedo then implemented the temperature sensors and the controls for these relays. He used Hilitchi DS18B20 waterproof temperature sensors connected to a 1-Wire bus, and learned how to manage the relays in this tutorial.
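To give a flavour of how simple the sensor side is, here’s a hedged sketch of a bare-bones thermostat: it reads a DS18B20 over 1-Wire with the w1thermsensor library and switches a heating-element relay via a GPIO pin. It’s an illustration only; Christopher’s actual control logic lives in CraftBeerPi, and the relay pin and setpoint here are made up.

    # Simple thermostat: read a DS18B20 over 1-Wire and switch a relay
    # to hold a target temperature. A sketch, not Christopher's setup.
    import time
    from gpiozero import OutputDevice
    from w1thermsensor import W1ThermSensor

    TARGET_C = 65.5          # e.g. a typical mash temperature
    HYSTERESIS = 0.5         # degrees of wiggle room to stop relay chatter

    sensor = W1ThermSensor()             # first DS18B20 found on the bus
    element = OutputDevice(17)           # assumed relay pin for the element

    while True:
        temp = sensor.get_temperature()  # Celsius by default
        if temp < TARGET_C - HYSTERESIS:
            element.on()                 # heat
        elif temp > TARGET_C + HYSTERESIS:
            element.off()                # coast
        print(f"{temp:.2f} C  element={'on' if element.value else 'off'}")
        time.sleep(5)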

Christopher wanted to be able to move his system around his property. Therefore, he squeezed all the electrical components of the build into a waterproof project box. For cooling purposes, he integrated copper shims and heat sinks.

homebrew home brew raspberry pi

Among the wires, wires, and more wires sits a Raspberry Pi, bottom left.

A brew-tiful build

With the hardware sorted, he took on the project’s software next. Although he had been inspired by it, Christopher decided to move away from the StrangeBrew Elsinore project in favour of the Python-based CraftBeerPi by active repo maintainer Manuel Fritsch.

homebrew home brew raspberry pi

The CraftBeerPi dashboard

This package allowed him to configure his chosen GPIO pins and set up the appropriate sensors. In fact, the setup process was so easy that Christopher also implemented a secondhand fridge as a fermentation chamber.

Duff Beer for me, Duff Beer for you…

In his recently released article on opensource.com, Aedo goes into far more detail. So if you want to create your own brewing kit, it offers all the info you need to get going.

Christopher attributes a lot of his build to the Hosehead, Electric Brewery, and CraftBeerPi projects. Using their resources and those of StrangeBrew Elsinore, any home brewer can control at least part of their system via a Raspberry Pi. Moreover, they can also keep track of their brewery stock levels via the wonderfully named Kegerface display.

We love seeing projects like this that take inspiration from others and build on them. We also love beer.

How about you? Have you created any sort of beer brewing system, from scratch or with the help of an existing project? Then make sure to share it with us in the comments below.

Duff man homebrew


*Did you know the British pint is larger than the American pint?

The post A homebrew Pi kit for home brewing appeared first on Raspberry Pi.

Perform Near Real-time Analytics on Streaming Data with Amazon Kinesis and Amazon Elasticsearch Service

Post Syndicated from Tristan Li original https://aws.amazon.com/blogs/big-data/perform-near-real-time-analytics-on-streaming-data-with-amazon-kinesis-and-amazon-elasticsearch-service/

Nowadays, streaming data is seen and used everywhere—from social networks, to mobile and web applications, IoT devices, instrumentation in data centers, and many other sources. As the speed and volume of this type of data increases, the need to perform data analysis in real time with machine learning algorithms and extract a deeper understanding from the data becomes ever more important. For example, you might want a continuous monitoring system to detect sentiment changes in a social media feed so that you can react to the sentiment in near real time.

In this post, we use Amazon Kinesis Streams to collect and store streaming data. We then use Amazon Kinesis Analytics to process and analyze the streaming data continuously. Specifically, we use the Kinesis Analytics built-in RANDOM_CUT_FOREST function, a machine learning algorithm, to detect anomalies in the streaming data. Finally, we use Amazon Kinesis Firehose to export the anomalies data to Amazon Elasticsearch Service (Amazon ES). We then build a simple dashboard in the open source tool Kibana to visualize the result.

Solution overview

The following diagram depicts a high-level overview of this solution.

Amazon Kinesis Streams

You can use Amazon Kinesis Streams to build your own streaming application. This application can process and analyze streaming data by continuously capturing and storing terabytes of data per hour from hundreds of thousands of sources.

Amazon Kinesis Analytics

Kinesis Analytics provides an easy and familiar standard SQL language to analyze streaming data in real time. One of its most powerful features is that there are no new languages, processing frameworks, or complex machine learning algorithms that you need to learn.

Amazon Kinesis Firehose

Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service.

Amazon Elasticsearch Service

Amazon ES is a fully managed service that makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more.

Solution summary

The following is a quick walkthrough of the solution that’s presented in the diagram:

  1. IoT sensors send streaming data into Kinesis Streams. In this post, you use a Python script to simulate an IoT temperature sensor device that sends the streaming data.
  2. By using the built-in RANDOM_CUT_FOREST function in Kinesis Analytics, you can detect anomalies in real time with the sensor data that is stored in Kinesis Streams. RANDOM_CUT_FOREST is also an appropriate algorithm for many other kinds of anomaly-detection use cases—for example, the media sentiment example mentioned earlier in this post.
  3. The processed anomaly data is then loaded into the Kinesis Firehose delivery stream.
  4. By using the built-in integration that Kinesis Firehose has with Amazon ES, you can easily export the processed anomaly data into the service and visualize it with Kibana.

Implementation steps

The following sections walk through the implementation steps in detail.

Creating the delivery stream

  1. Open the Amazon Kinesis Streams console.
  2. Create a new Kinesis stream. Give it a name that indicates it’s for raw incoming stream data—for example, RawStreamData. For Number of shards, type 1.
  3. The Python code provided below simulates a streaming application, such as an IoT device, and generates random data and anomalies into a Kinesis stream. The code generates two temperature ranges, where the first range is the hypothetical sensor’s normal operating temperature range (10–20), and the second is the anomaly temperature range (100–120). Make sure to change the stream name in the two put_record() calls and the Region in the connect_to_region() call to match your configuration. Alternatively, you can download the Amazon Kinesis Data Generator from this repository and use it to generate the data.
    import json
    import random
    from boto import kinesis

    # Connect to Kinesis in your chosen Region
    kinesis = kinesis.connect_to_region("us-east-1")

    def getData(iotName, lowVal, highVal):
        """Build one simulated sensor reading."""
        data = {}
        data["iotName"] = iotName
        data["iotValue"] = random.randint(lowVal, highVal)
        return data

    while 1:
        rnd = random.random()
        if rnd < 0.01:
            # ~1% of readings fall in the anomaly range (100-120)
            data = json.dumps(getData("DemoSensor", 100, 120))
            kinesis.put_record("RawStreamData", data, "DemoSensor")
            print('***************************** anomaly ************************* ' + data)
        else:
            # Normal operating range (10-20)
            data = json.dumps(getData("DemoSensor", 10, 20))
            kinesis.put_record("RawStreamData", data, "DemoSensor")
            print(data)

  4. Open the Amazon Elasticsearch Service console and create a new domain.
    1. Give the domain a unique name. In the Configure cluster screen, use the default settings.
    2. In the Set up access policy screen, in the Set the domain access policy list, choose Allow access to the domain from specific IP(s).
    3. Enter the public IP address of your computer.
      Note: If you’re working behind a proxy or firewall, see the “Use a proxy to simplify request signing” section in this AWS Database blog post to learn how to work with a proxy. For additional information about securing access to your Amazon ES domain, see How to Control Access to Your Amazon Elasticsearch Domain in the AWS Security Blog.
  5. After the Amazon ES domain is up and running, you can set up and configure Kinesis Firehose to export results to Amazon ES:
    1. Open the Amazon Kinesis Firehose console and choose Create Delivery Stream.
    2. In the Destination dropdown list, choose Amazon Elasticsearch Service.
    3. Type a stream name, and choose the Amazon ES domain that you created in Step 4.
    4. Provide an index name and ES type. In the S3 bucket dropdown list, choose Create New S3 bucket. Choose Next.
    5. In the configuration, change the Elasticsearch Buffer size to 1 MB and the Buffer interval to 60s. Use the default settings for all other fields. This shortens the time for the data to reach the ES cluster.
    6. Under IAM Role, choose Create/Update existing IAM role.
      The best practice is to create a new role every time. Otherwise, the console keeps adding policy documents to the same role. Eventually the size of the attached policies causes IAM to reject the role, but it does it in a non-obvious way, where the console basically quits functioning.
    7. Choose Next to move to the Review page.
  6. Review the configuration, and then choose Create Delivery Stream.
  7. Run the Python file for 1–2 minutes, and then press Ctrl+C to stop the execution. This loads some data into the stream for you to visualize in the next step.

Analyzing the data

Now it’s time to analyze the IoT streaming data using Amazon Kinesis Analytics.

  1. Open the Amazon Kinesis Analytics console and create a new application. Give the application a name, and then choose Create Application.
  2. On the next screen, choose Connect to a source. Choose the raw incoming data stream that you created earlier. (Note the stream name Source_SQL_STREAM_001 because you will need it later.)
  3. Use the default settings for everything else. When the schema discovery process is complete, it displays a success message with the formatted stream sample in a table as shown in the following screenshot. Review the data, and then choose Save and continue.
  4. Next, choose Go to SQL editor. When prompted, choose Yes, start application.
  5. Copy the following SQL code and paste it into the SQL editor window.
    CREATE OR REPLACE STREAM "TEMP_STREAM" (
       "iotName"        varchar (40),
       "iotValue"   integer,
       "ANOMALY_SCORE"  DOUBLE);
    -- Creates an output stream and defines a schema
    CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
       "iotName"       varchar(40),
       "iotValue"       integer,
       "ANOMALY_SCORE"  DOUBLE,
       "created" TimeStamp);
     
    -- Compute an anomaly score for each record in the source stream
    -- using Random Cut Forest
    CREATE OR REPLACE PUMP "STREAM_PUMP_1" AS INSERT INTO "TEMP_STREAM"
    SELECT STREAM "iotName", "iotValue", ANOMALY_SCORE FROM
      TABLE(RANDOM_CUT_FOREST(
        CURSOR(SELECT STREAM * FROM "SOURCE_SQL_STREAM_001")
      )
    );
    
    -- Sort records by descending anomaly score, insert into output stream
    CREATE OR REPLACE PUMP "OUTPUT_PUMP" AS INSERT INTO "DESTINATION_SQL_STREAM"
    SELECT STREAM "iotName", "iotValue", ANOMALY_SCORE, ROWTIME FROM "TEMP_STREAM"
    ORDER BY FLOOR("TEMP_STREAM".ROWTIME TO SECOND), ANOMALY_SCORE DESC;

  6. Choose Save and run SQL.
    As the application is running, it displays the results as stream data arrives. If you don’t see any data coming in, run the Python script again to generate some fresh data. When there is data, it appears in a grid as shown in the following screenshot.
    Note that you are selecting data from the source stream name Source_SQL_STREAM_001 that you created previously. Also note the ANOMALY_SCORE column. This is the value that the RANDOM_CUT_FOREST function calculates based on the temperature ranges provided by the Python script. Higher (anomaly) temperature ranges have a higher score.
    Looking at the SQL code, note that the first two blocks of code create two new streams to store temporary data and the final result. The third block of code analyzes the raw source data (STREAM_PUMP_1) using the RANDOM_CUT_FOREST function. It calculates an anomaly score (ANOMALY_SCORE) and inserts it into the TEMP_STREAM stream. The final code block loads the result stored in TEMP_STREAM into DESTINATION_SQL_STREAM.
  7. Choose Exit (done editing) next to the Save and run SQL button to return to the application configuration page.

Load processed data into the Kinesis Firehose delivery stream

Now, you can export the result from DESTINATION_SQL_STREAM into the Amazon Kinesis Firehose stream that you created previously.

  1. On the application configuration page, choose Connect to a destination.
  2. Choose the stream name that you created earlier, and use the default settings for everything else. Then choose Save and Continue.
  3. On the application configuration page, choose Exit to Kinesis Analytics applications to return to the Amazon Kinesis Analytics console.
  4. Run the Python script again for 4–5 minutes to generate enough data to flow through Amazon Kinesis Streams, Kinesis Analytics, Kinesis Firehose, and finally into the Amazon ES domain.
  5. Open the Kinesis Firehose console, choose the stream, and then choose the Monitoring tab.
  6. As the processed data flows into Kinesis Firehose and Amazon ES, the metrics appear on the Delivery Stream metrics page. Keep in mind that the metrics page takes a few minutes to refresh with the latest data.
  7. Open the Amazon Elasticsearch Service dashboard in the AWS Management Console. The count in the Searchable documents column increases as shown in the following screenshot. In addition, the domain shows a cluster health of Yellow. This is because, by default, it needs two instances to deploy redundant copies of the index. To fix this, you can deploy two instances instead of one.

Visualize the data using Kibana

Now it’s time to launch Kibana and visualize the data.

  1. Use the ES domain link to go to the cluster detail page, and then choose the Kibana link as shown in the following screenshot.

    If you’re working behind a proxy or firewall, see the “Use a proxy to simplify request signing” section in this blog post to learn how to work with a proxy.
  2. In the Kibana dashboard, choose the Discover tab to perform a query.
  3. You can also visualize the data using the different types of charts offered by Kibana. For example, by going to the Visualize tab, you can quickly create a split bar chart that aggregates by ANOMALY_SCORE per minute.


Conclusion

In this post, you learned how to use Amazon Kinesis to collect, process, and analyze real-time streaming data, and then export the results to Amazon ES for analysis and visualization with Kibana. If you have comments about this post, add them to the “Comments” section below. If you have questions or issues with implementing this solution, please open a new thread on the Amazon Kinesis or Amazon ES discussion forums.


Next Steps

Take your skills to the next level. Learn real-time clickstream anomaly detection with Amazon Kinesis Analytics.



About the Author

Tristan Li is a Solutions Architect with Amazon Web Services. He works with enterprise customers in the US, helping them adopt cloud technology to build scalable and secure solutions on AWS.

timeShift(GrafanaBuzz, 1w) Issue 3

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/07/07/timeshiftgrafanabuzz-1w-issue-3/

Many in the US were on holiday for Independence Day earlier this week, but that didn’t slow us down: team Stockholm even shipped a new Grafana release. This issue of timeShift has plenty of great articles to highlight. If you know of a recent article about Grafana, or are writing one yourself, please get in touch, we’d be happy to feature it here.


Grafana 4.4 Released

Grafana v4.4 is now available for download

Dashboard history and version control is here! A big thanks to Walmart Labs for their massive code contribution.

Check out what’s new in Grafana 4.4 in the release announcement.


From the Blogosphere

Plugins and Dashboards

We are excited that there have been over 100,000 plugin installations since we launched the new plugable architecture in Grafana v3. You can discover and install plugins in your own on-premises or Hosted Grafana instance from our website. Below are some recent additions and updates.

Zabbix Updated to v3.5.0 CHANGELOG.md

  • rate() function, which calculates per-second rate for growing counters.
  • Template query format. The new format is {group}{host}{app}{item}, which allows you to use names containing dots.
  • Improved performance of groupBy() functions (6–10x faster than before).
  • lots of bug fixes and more

In addition to the plugins available for download, there are hundreds of pre-made dashboards ready for you to import into Grafana to get up and running quickly. Check out some of the popular dashboards.

Server Metrics (Collectd) Collectd/Graphite server metrics dashboard (Load, CPU, Memory, Temp, etc.).

Data Source: Graphite | Collector: Collectd

Apache Overview System stats for uptime, CPU count, RAM, free memory %, and panels for load, I/O, and network traffic. Also includes Apache worker and scoreboard panels, plus uptime and CPU load single stats.

Data Source: InfluxDB | Collector: Telegraf

Node Exporter Server Metrics A simple dashboard configured to be able to view multiple servers side by side.

Data Source: Prometheus | Collector: Node Exporter

This week’s MVC (Most Valuable Contributor)

Each week we’ll recognize a Grafana contributor and thank them for all of their PRs, bug reports and feedback. Many of the fixes and improvements come from our fantastic community!

ryantxu (Ryan McKinley)

Ryan has contributed PRs to Grafana, as well as being the author of four well-maintained plugins (the Ajax Panel, Discrete Panel, Plotly Panel, and Influx Admin plugins). Thank you for all your hard work!

What do you think?

Anything in particular you’d like to see in this series of posts? Too long? Too short? Boring? Let us know. Comment on this article below, or post something at our community forum. With your help, we can make this a worthwhile resource.

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Bicrophonic Research Institute and the Sonic Bike

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/sonic-bike/

The Bicrophonic Sonic Bike, created by British sound artist Kaffe Matthews, utilises a Raspberry Pi and GPS signals to map location data and plays music and sound in response to the places you take it on your cycling adventures.

What is Bicrophonics?

Bicrophonics is about the mobility of sound, experienced and shared within a moving space, free of headphones and free of the internet. Music made by the journey you take, played with the space that you move through. The Bicrophonic Research Institute (BRI) http://sonicbikes.net

Cycling and music

I’m sure I wasn’t the only teen to go for bike rides with a group of friends and a radio. Spurred on by our favourite movie, the mid-nineties classic Now and Then, we’d hook up a pair of cheap portable speakers to our handlebars, crank up the volume, and sing our hearts out as we cycled aimlessly down country lanes in the cool light evenings of the British summer.

While Sonic Bikes don’t belt out the same classics that my precariously attached speakers provided, they do give you the same sense of connection to your travelling companions via sound. Linked to GPS locations on the same preset map of zones, each bike can produce the same music, creating a cloud of sound as you cycle.

Sonic Bikes

The Sonic Bike uses five physical components: a Raspberry Pi, power source, USB GPS receiver, rechargeable speakers, and subwoofer. On the Raspberry Pi itself, the build utilises mapping software to divide a map into zones and connect each zone with a specific music track.

Sonic Bikes Raspberry Pi

Custom software enables the Raspberry Pi to locate itself among the zones using the USB GPS receiver. Then it plays back the appropriate track until it registers a new zone.
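In outline, that loop is straightforward. The sketch below shows the idea in Python; it is not the BRI’s software (which is on GitHub and far more capable), and it assumes a serial NMEA GPS receiver on /dev/ttyACM0, the pynmea2 parsing library, made-up rectangular zones, and the mpg123 command-line player.

    # Play a different audio track depending on which GPS zone the bike is in.
    # A sketch of the idea only; the BRI's real software is on GitHub.
    import subprocess
    import serial
    import pynmea2

    # Hypothetical zones: (min_lat, max_lat, min_lon, max_lon) -> track
    ZONES = {
        (51.470, 51.475, -0.010, 0.000): "park.mp3",
        (51.475, 51.480, -0.010, 0.000): "high_street.mp3",
    }

    def zone_track(lat, lon):
        for (lat0, lat1, lon0, lon1), track in ZONES.items():
            if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
                return track
        return None

    gps = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
    playing, player = None, None

    while True:
        line = gps.readline().decode("ascii", errors="ignore").strip()
        if "GGA" not in line[:6]:            # position fixes only ($GPGGA/$GNGGA)
            continue
        try:
            fix = pynmea2.parse(line)
        except pynmea2.ParseError:
            continue
        track = zone_track(fix.latitude, fix.longitude)
        if track != playing:                 # we've crossed into a new zone
            if player:
                player.terminate()
            player = subprocess.Popen(["mpg123", track]) if track else None
            playing = track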

Bicrophonic Research Institute

The Bicrophonic Research Institute is a collective of artists and coders with the shared goal of creating sound directed by people and places via Sonic Bikes. In their own words:

Bicrophonics is about the mobility of sound, experienced and shared within a moving space, free of headphones and free of the internet. Music made by the journey you take, played with the space that you move through.

Their technology has potential beyond the aims of the BRI. The Sonic Bike software could be useful for navigation, logging data and playing beats to indicate when to alter speed or direction. You could even use it to create a guided cycle tour, including automatically reproduced information about specific places on the route.

For the creators of Sonic Bike, the project is ever-evolving, and “continues to be researched and developed to expand the compositional potentials and unique listening experiences it creates.”

Sensory Bike

A good example of this evolution is the Sensory Bike. This offshoot of the Sonic Bike idea plays sounds guided by the cyclist’s own movements – it acts like a two-wheeled musical instrument!

lean to go up, slow to go loud,

a work for Sensory Bikes, the Berlin wall and audience to ride it. ‘ lean to go up, slow to go loud ‘ explores freedom and celebrates escape. Celebrating human energy to find solutions, hot air balloons take off, train lines sing, people cheer and nature continues to grow.

Sensors on the wheels, handlebars, and brakes, together with a Sense HAT at the rear, register the unique way in which the rider navigates their location. The bike produces output based on these variables. Its creators at BRI say:

The Sensory Bike becomes a performative instrument – with riders choosing to go slow, go fast, to hop, zigzag, or circle, creating their own unique sound piece that speeds, reverses, and changes pitch while they dance on their bicycle.

Build your own Sonic Bike

As for many wonderful Raspberry Pi-based builds, the project’s code is available on GitHub, enabling makers to recreate it. All the BRI team ask is that you contact them so they can learn more about your plans and help in any way possible. They even provide code to create your own Sonic Kayak using GPS zones, temperature sensors, and an underwater microphone!

Sonic Kayaks explained

Sonic Kayaks are musical instruments for expanding our senses and scientific instruments for gathering marine micro-climate data. Made by foAm_Kernow with the Bicrophonic Research Institute (BRI), two were first launched at the British Science Festival in Swansea Bay September 6th 2016 and used by the public for 2 days.

The post Bicrophonic Research Institute and the Sonic Bike appeared first on Raspberry Pi.

Pi-powered hands-on statistical model at the Royal Society

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/royal-society-galton-board/

Physics! Particles! Statistical modelling! Quantum theory! How can non-scientists understand any of it? Well, students from Durham University are here to help you wrap your head around it all – and to our delight, they’re using the power of the Raspberry Pi to do it!

At the Royal Society’s Summer Science Exhibition, taking place in London from 4-9 July, the students are presenting a Pi-based experiment demonstrating the importance of statistics in their field of research.

Modelling the invisible – Summer Science Exhibition 2017

The Royal Society Summer Science Exhibition 2017 features 22 exhibits of cutting-edge, hands-on UK science, along with special events and talks. You can meet the scientists behind the research. Find out more about the exhibition at our website: https://royalsociety.org/science-events-and-lectures/2017/summer-science-exhibition/

Ramona, Matthew, and their colleagues are particle physicists keen to bring their science to those of us whose heads start to hurt as soon as we hear the word ‘subatomic’. In their work, they create computer models of subatomic particles to make predictions about real-world particles. Their models help scientists to design better experiments and to improve sensor calibrations. If this doesn’t sound straightforward to you, never fear – this group of scientists has set out to show exactly how statistical models are useful.

The Galton board model

They’ve built a Pi-powered Galton board, also called a bean machine (much less intimidating, I think). This is an upright board, shaped like an upside-down funnel, with nails hammered into it. Drop a ball in at the top, and it will randomly bounce off the nails on its way down. How the nails are spread out determines where a ball is most likely to land at the bottom of the board.

If you’re having trouble picturing this, you can try out an online Galton board. Go ahead, I’ll wait.

You’re back? All clear? Great!

Now, if you drop 100 balls down the board and collect them at the bottom, the result might look something like this:

Galton board

By Antoine Taveneaux CC BY-SA 3.0

The distribution of the balls is determined by the locations of the nails in the board. This means that, if you don’t know where the nails are, you can look at the distribution of balls to figure out where the nails are most likely to be located. And you’ll be able to do all this using … statistics!!!
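It’s also easy to play with this in code. The short simulation below (mine, not the students’) drops 1,000 balls down a board with ten rows of nails; each nail knocks a ball left or right with equal probability, and the resulting pile approximates the familiar bell curve.

    # Simulate a Galton board: each ball bounces left or right at every
    # row of nails; we then histogram where the balls land.
    import random
    from collections import Counter

    ROWS, BALLS = 10, 1000

    def drop_ball():
        # Each nail knocks the ball one slot right with probability 0.5
        return sum(random.random() < 0.5 for _ in range(ROWS))

    landings = Counter(drop_ball() for _ in range(BALLS))

    # Print a sideways histogram, one '#' per five balls
    for slot in range(ROWS + 1):
        print(f"{slot:2d} {'#' * (landings[slot] // 5)}")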

Statistical models

Similarly, how particles behave is determined by the laws of physics – think of the particles as the balls, and the laws of physics as the nails. Physicists can observe the behaviour of particles to learn about the laws of physics, and they can create statistical models simulating those laws to predict the behaviour of particles.

I can hear you say, “Alright, thanks for the info, but how does the Raspberry Pi come into this?” Don’t worry – I’m getting to that.

Modelling the invisible – the interactive exhibit

As I said, Ramona and the other physicists have not created a regular old Galton board. Instead, this one records where the balls land using a Raspberry Pi, and other portable Pis around the exhibition space can access the records of the experimental results. These Pis in turn run Galton board simulators, and visitors can use them to recreate a virtual Galton board that produces the same results as the physical one. Then, they can check whether their model board does, in fact, look like the one the physicists built. In this way, people directly experience the relationship between statistical models and experimental results.
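To make the fitting idea concrete, here is a rough Python sketch of that loop. Everything in it is assumed for illustration: the ‘recorded’ counts are invented, and the exhibit’s real software may work quite differently.

    # Guess the board's left/right bias, simulate, and score the guess
    # against the recorded data; the smallest score is the best fit.
    import random

    ROWS, BALLS = 10, 1000
    recorded = [1, 8, 41, 112, 213, 248, 205, 115, 44, 11, 2]  # invented data

    def simulate(p_right):
        """Drop BALLS balls down a board that bounces right with p_right."""
        slots = [0] * (ROWS + 1)
        for _ in range(BALLS):
            slots[sum(random.random() < p_right for _ in range(ROWS))] += 1
        return slots

    def score(model, data):
        """Sum of squared differences: smaller means a better match."""
        return sum((m - d) ** 2 for m, d in zip(model, data))

    best = min((score(simulate(p / 20), recorded), p / 20) for p in range(1, 20))
    print(f"best-fitting bias: {best[1]:.2f} (score {best[0]})")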

Hurrah for science!

The other exhibit the Durham students will be showing is a demo dark matter detector! So if you decide to visit the Summer Science Exhibition, you will also have the chance to learn about the very boundaries of human understanding of the cosmos.

The Pi in museums

At the Raspberry Pi Foundation, education is our mission, and of course we love museums. It is always a pleasure to see our computers incorporated into exhibits: the Pi-powered visual theremin teaches visitors about music; the Museum in a Box uses Pis to engage people in hands-on encounters with exhibits; and this Pi is itself a museum piece! If you want to learn more about Raspberry Pis and museums, you can listen to this interview with Pi Towers’ social media maestro Alex Bate.

It’s amazing that our tech is used to educate people in areas beyond computer science. If you’ve created a Pi-powered educational project, please share it with us in the comments.

The post Pi-powered hands-on statistical model at the Royal Society appeared first on Raspberry Pi.

Scratch 2.0: all-new features for your Raspberry Pi

Post Syndicated from Rik Cross original https://www.raspberrypi.org/blog/scratch-2-raspberry-pi/

We’re very excited to announce that Scratch 2.0 is now available as an offline app for the Raspberry Pi! This new version of Scratch allows you to control the Pi’s GPIO (General Purpose Input and Output) pins, and offers a host of other exciting new features.

Offline accessibility

The most recent update to Raspbian includes the app, which makes Scratch 2.0 available offline on the Raspberry Pi. This is great news for clubs and classrooms, where children can now use Raspberry Pis instead of connected laptops or desktops to explore block-based programming and physical computing.

Controlling GPIO with Scratch 2.0

As with Scratch 1.4, Scratch 2.0 on the Raspberry Pi allows you to create code to control and respond to components connected to the Pi’s GPIO pins. This means that your Scratch projects can light LEDs, sound buzzers, and use input from buttons and a range of sensors to control the behaviour of sprites. Interacting with GPIO pins in Scratch 2.0 is easier than ever before, as text-based broadcast instructions have been replaced with custom blocks for setting pin output and getting current pin state.

Scratch 2.0 GPIO blocks

To add GPIO functionality, first click ‘More Blocks’ and then ‘Add an Extension’. You should then select the ‘Pi GPIO’ extension option and click OK.

Scratch 2.0 GPIO extension

In the ‘More Blocks’ section you should now see the additional blocks for controlling and responding to your Pi GPIO pins. To give an example, the entire code for repeatedly flashing an LED connected to one of the Pi’s GPIO pins is now:

Flashing an LED with Scratch 2.0

To react to a button connected to another GPIO pin, simply set the pin as input, and use the ‘gpio (x) is high?’ block to check the button’s state. In the example below, the Scratch cat will say "Pressed" only when the button is being held down.

Responding to a button press on Scratch 2.0
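For readers who want to compare, here is roughly how those same two behaviours look in Python with the GPIO Zero library. This is just a sketch; the pin numbers are assumptions, so match them to your own wiring.

    from time import sleep
    from gpiozero import LED, Button

    led = LED(17)       # LED on BCM pin 17 (an assumed wiring)
    button = Button(2)  # button on BCM pin 2 (likewise assumed)

    led.blink(on_time=1, off_time=1)  # flash forever, in the background

    while True:
        if button.is_pressed:
            print("Pressed")  # stands in for the Scratch cat's speech bubble
        sleep(0.1)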

Cloning sprites

Scratch 2.0 also offers some additional features and improvements over Scratch 1.4. One of the main new features of Scratch 2.0 is the ability to create clones of sprites. Clones are instances of a particular sprite that inherit all of the scripts of the main sprite.

The scripts below show how cloned sprites are used: in this case, to allow the Scratch cat to throw a clone of an apple sprite whenever the space key is pressed. Each apple clone then follows its own ‘when I start as a clone’ script.

Cloning sprites with Scratch 2.0

The cloning functionality avoids the need to create multiple copies of a sprite, for example multiple enemies in a game or multiple snowflakes in an animation.
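If you think in Python terms, clones behave a little like instances of a class: each one inherits the same behaviour but keeps its own position and state. Here’s a loose analogy (an illustration, not how Scratch works internally):

    class Apple:
        """One 'clone': shared behaviour, independent state."""
        def __init__(self, x, y):
            self.x, self.y = x, y   # each clone gets its own coordinates

        def fly(self):              # like the clone's shared script
            self.x += 10

    apples = [Apple(0, y) for y in range(3)]  # 'create clone' three times
    for apple in apples:
        apple.fly()
    print([(a.x, a.y) for a in apples])       # each moved independently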

Custom blocks

Scratch 2.0 also allows the creation of custom blocks, allowing code to be encapsulated and used (possibly multiple times) in a project. The code below shows a simple custom block called ‘jump’, which is used to make a sprite jump whenever it is clicked.

Custom 'jump' block on Scratch 2.0

These custom blocks can also optionally include parameters, allowing further generalisation and reuse of code blocks. Here’s another example of a custom block that draws a shape. This time, however, the custom block includes parameters for specifying the number of sides of the shape, as well as the length of each side.

Custom shape-drawing block with Scratch 2.0

The custom block can now be used with different numbers provided, allowing lots of different shapes to be drawn.

Drawing shapes with Scratch 2.0
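The parameterised block maps neatly onto an ordinary function with arguments. Here’s an equivalent sketch using Python’s built-in turtle module (the function name and the numbers are just examples):

    import turtle

    def shape(sides, length):
        """Draw a regular polygon with the given number and length of sides."""
        for _ in range(sides):
            turtle.forward(length)
            turtle.right(360 / sides)

    shape(3, 100)   # a triangle
    shape(5, 60)    # a pentagon
    turtle.done()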

Peripheral interaction

Another feature of Scratch 2.0 is the addition of code blocks that allow easy interaction with a webcam or a microphone. This opens up a whole new world of possibilities; for some examples of projects that make use of this new functionality, see the Clap-O-Meter, which uses the microphone to control a noise level meter, and the Keepie Uppies game, which uses video motion to control a football. You can use the Raspberry Pi Camera Module or USB cameras to detect motion in your Scratch 2.0 projects.
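To give a flavour of what a Clap-O-Meter-style level meter involves under the hood, here is a short Python sketch. It’s an illustrative example using the third-party sounddevice and NumPy libraries, and has nothing to do with Scratch’s own implementation.

    import numpy as np
    import sounddevice as sd

    def show_level(indata, frames, time, status):
        """Print a bar whose length follows the microphone's loudness."""
        rms = float(np.sqrt(np.mean(indata ** 2)))  # root-mean-square level
        print("#" * int(rms * 200))

    # open the default microphone and stream audio to the callback
    with sd.InputStream(channels=1, callback=show_level):
        sd.sleep(5000)  # listen for five seconds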

Other new features include a vector image editor and a sound editor, as well as lots of new sprites, costumes and backdrops.

Update your Raspberry Pi for Scratch 2.0

Scratch 2.0 is available in the latest Raspbian release, under the ‘Programming’ menu. We’ve put together an online guide to getting started with Scratch 2.0 on the Raspberry Pi (note that GPIO functionality is only available via the desktop version). You can also try out Scratch 2.0 on the Pi by having a go at a project from the Code Club projects site.

As always, we love to see the projects you create using the Raspberry Pi. Once you’ve upgraded to Scratch 2.0, tell us about your projects via Twitter, Instagram and Facebook, or by leaving us a comment below.

The post Scratch 2.0: all-new features for your Raspberry Pi appeared first on Raspberry Pi.

CoderDojo Coolest Projects 2017

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/coderdojo-coolest-projects-2017/

When I heard we were merging with CoderDojo, I was delighted. CoderDojo is a wonderful organisation with a spectacular community, and it’s going to be great to join forces with the team and work towards our common goal: making a difference to the lives of young people by making technology accessible to them.

You may remember that last year Philip and I went along to Coolest Projects, CoderDojo’s annual event at which their global community showcase their best makes. It was awesome! This year a whole bunch of us from the Raspberry Pi Foundation attended Coolest Projects with our new Irish colleagues, and as expected, the projects on show were as cool as can be.

Coolest Projects 2017 attendee

Crowd at Coolest Projects 2017

This year’s coolest projects!

Young maker Benjamin demoed his brilliant RGB LED table tennis ball display for us, and showed off his project tutorial website codemakerbuddy.com, which he built with Python and Flask. [Click on any of the images to enlarge them.]

Coolest Projects 2017 LED ping-pong ball display
Coolest Projects 2017 Benjamin and Oly

Next up, Aimee showed us a recipe app she’d made with the MIT App Inventor. It was a really impressive and well thought-out project.

Coolest Projects 2017 Aimee's cook book
Coolest Projects 2017 Aimee's setup

This very successful OpenCV face detection program, with its hardware installed inside a teddy bear, was great as well:

Coolest Projects 2017 face detection bear
Coolest Projects 2017 face detection interface
Coolest Projects 2017 face detection database

Helen’s and Oly’s favourite project involved…live bees!

Coolest Projects 2017 live bees

BEEEEEEEEEEES!

Its creator, 12-year-old Amy, said she wanted to do something to help the Earth. Her project uses various sensors to record data on the bee population in the hive. An adjacent monitor displays the data in a web interface:

Coolest Projects 2017 Aimee's bees

Coolest robots

I enjoyed seeing lots of GPIO Zero projects out in the wild, including this robotic lawnmower made by Kevin and Zach:

Raspberry Pi Lawnmower

Kevin and Zach’s Raspberry Pi lawnmower project, built with Python and GPIO Zero and shown at CoderDojo Coolest Projects 2017

Philip’s favourite make was a Pi-powered robot you can control with your mind! According to the maker, Laura, it worked really well with Philip because he has no hair.

Philip Colligan on Twitter

This is extraordinary. Laura from @CoderDojo Romania has programmed a mind controlled robot using @Raspberry_Pi @coolestprojects

And here are some pictures of even more cool robots we saw:

Coolest Projects 2017 coolest robot no.1
Coolest Projects 2017 coolest robot no.2
Coolest Projects 2017 coolest robot no.3

Games, toys, activities

Oly and I were massively impressed with the work of Mogamad, Daniel, and Basheerah, who programmed a (borrowed) Amazon Echo to make a voice-controlled text-adventure game using Java and the Alexa API. They’ve inspired me to try something similar using the AIY projects kit and adventurelib!

Coolest Projects 2017 Mogamad, Daniel, Basheerah, Oly
Coolest Projects 2017 Alexa text-based game
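For anyone else tempted by adventurelib, here’s a tiny taster of the kind of game logic it makes possible. This is an illustrative sketch, not the students’ project (theirs used Java and the Alexa API).

    from adventurelib import when, say, start

    @when('look')
    def look():
        say('You are in a dark room. There is a door to the north.')

    @when('go north')
    def go_north():
        say('The door is locked. A voice asks for the magic word.')

    @when('say WORD')
    def speak(word):
        if word == 'please':
            say('The door swings open!')
        else:
            say('Nothing happens.')

    start()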

Christopher Hill did a brilliant job with his Home Alone LEGO house. He used sensors to trigger lights and sounds to make it look like someone’s at home, like in the film. I should have taken a video – seeing it in action was great!

Coolest Projects 2017 Lego home alone house
Coolest Projects 2017 Lego home alone innards
Coolest Projects 2017 Lego home alone innards closeup

Meanwhile, the Northern Ireland Raspberry Jam group ran a DOTS board activity, which turned their area into a conductive paint hazard zone.

Coolest Projects 2017 NI Jam DOTS activity 1
Coolest Projects 2017 NI Jam DOTS activity 2
Coolest Projects 2017 NI Jam DOTS activity 3
Coolest Projects 2017 NI Jam DOTS activity 4
Coolest Projects 2017 NI Jam DOTS activity 5
Coolest Projects 2017 NI Jam DOTS activity 6

Creativity and ingenuity

We really enjoyed seeing so many young people collaborating, experimenting, and taking full advantage of the opportunity to make real projects. And we loved the huge range of technologies in use: people employed all manner of hardware and software to bring their ideas to life.

Philip Colligan on Twitter

Wow! Look at that room full of awesome young people. @coolestprojects #coolestprojects @CoderDojo

Congratulations to the Coolest Projects 2017 prize winners, and to all participants. Here are some of the teams that won in the different categories:

Coolest Projects 2017 winning team 1
Coolest Projects 2017 winning team 2
Coolest Projects 2017 winning team 3

Take a look at the gallery of all winners over on Flickr.

The wow factor

Raspberry Pi co-founder and Foundation trustee Pete Lomas came along to the event as well. Here’s what he had to say:

It’s hard to describe the scale of the event, and photos just don’t do it justice. The first thing that hit me was the sheer excitement of the CoderDojo ninjas [the children attending Dojos]. Everyone was setting up for their time with the project judges, and their pure delight at being able to show off their creations was evident in both halls. Time and time again I saw the ninjas apply their creativity to help save the planet or make someone’s life better, and it’s truly exciting that we are going to help that continue and expand.

Even after 8 hours, enthusiasm wasn’t flagging – the awards ceremony was just brilliant, with ninjas high-fiving the winners on the way to the stage. This speaks volumes about the ethos and vision of the CoderDojo founders, where everyone is a winner just by being part of a community of worldwide friends. It was a brilliant introduction, and if this weekend was anything to go by, our merger certainly is a marriage made in Heaven.

Join this awesome community!

If all this inspires you as much as it did us, consider looking for a CoderDojo near you – and sign up as a volunteer! There’s plenty of time for young people to build up skills and start working on a project for next year’s event. Check out coolestprojects.com for more information.

The post CoderDojo Coolest Projects 2017 appeared first on Raspberry Pi.

Shelfchecker Smart Shelf: build a home library system

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/smart-shelf-home-library/

Are you tired of friends borrowing your books and never returning them? Maybe you’re sure you own 1984 but can’t seem to locate it? Do you find a strange satisfaction in using the supermarket self-checkout simply because of the barcode beep? With the ShelfChecker smart shelf, described on Instructables by maker Annelynn, you can be your own librarian and never misplace your books again! Beep!

Shelfchecker smart shelf annelynn Raspberry Pi

Harry Potter and the Aesthetically Pleasing Smart Shelf

The ShelfChecker smart shelf

Annelynn built her smart shelf using a barcode scanner, LDR (light-dependent resistor) sensors, a Raspberry Pi, and a few other peripherals, plus some Python scripts. She has created a fully integrated library checkout system, with accompanying NeoPixel location notification for your favourite books.

This build allows you to issue your book-borrowing friends their own IDs and catalogue their usage of your treasured library. On top of that, you’ll be able to use LED NeoPixels to highlight your favourite books, registering their removal and return via light sensor tracking.

Using light sensors for book cataloguing

Once Annelynn had built the shelf, she drilled holes to fit the eight LDRs that would guard her favourite books, and separated them with corner brackets so that each book sits over its own sensor.

Shelfchecker smart shelf annelynn Raspberry Pi

Corner brackets keep the books in place without confusion between their respective light sensors

Due to the eight-channel limit of the MCP3008 analogue-to-digital converter, the smart shelf can only keep track of eight of your favourite books. But this limitation won’t stop you from cataloguing your entire home library; it simply means you get to pick your ultimate favourites that will occupy the prime real estate on your wall.

Obviously, the light sensors sense light. So when you remove or insert a book, light floods or is blocked from that book’s sensor. The sensor sends this information to the Raspberry Pi. In response, an Arduino controls the NeoPixel strip along the ‘favourites’ shelf to indicate the book’s status.

Shelfchecker smart shelf annelynn Raspberry Pi

The book you are looking for is temporarily unavailable
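As a rough illustration of that polling loop, here is a minimal sketch using GPIO Zero’s MCP3008 class. The 0.5 threshold and the print() stand-in for the NeoPixel update are invented; Annelynn’s actual scripts (mentioned below) will differ.

    from time import sleep
    from gpiozero import MCP3008

    ldrs = [MCP3008(channel=ch) for ch in range(8)]  # one LDR per favourite book
    on_shelf = [True] * 8

    while True:
        for i, ldr in enumerate(ldrs):
            present = ldr.value < 0.5   # assumed: a book blocking light reads low
            if present != on_shelf[i]:
                on_shelf[i] = present
                state = "returned" if present else "removed"
                print(f"book {i} {state}")  # here you'd update the NeoPixels
        sleep(0.5)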

Code your own library

While keeping a close eye on your favourite books, the build also lets you create a complete library catalogue with the help of a MySQL database. Library users can log in with the barcode scanner and, guided by an LCD screen attached to the Pi, take out or return books recorded in the database.

Shelfchecker smart shelf annelynn Raspberry Pi

Beep!
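To give a flavour of the checkout flow, here is a bare-bones sketch. The table and column names are hypothetical (see Annelynn’s scripts for the real thing), and because a USB barcode scanner behaves like a keyboard, a plain input() call is enough to read a scan.

    import mysql.connector

    # connect to a local MySQL database (credentials are placeholders)
    db = mysql.connector.connect(user="librarian", password="secret",
                                 host="localhost", database="home_library")
    cursor = db.cursor()

    user_barcode = input("Scan your user card: ")
    book_barcode = input("Scan the book: ")

    # record the loan against the user (hypothetical schema)
    cursor.execute(
        "INSERT INTO loans (user_barcode, book_barcode, taken_out) "
        "VALUES (%s, %s, NOW())",
        (user_barcode, book_barcode),
    )
    db.commit()
    print("Beep! Book checked out.")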

I won’t go into an extensive how-to on creating MySQL databases here on the blog, because my glamorous assistant Janina has pulled up these MySQL tutorials to help you get started. Annelynn’s GitHub scripts are also packed with useful comments to keep you on track.

Raspberry Pi and books

We love books and libraries. And considering the growing number of Code Clubs and makerspaces moving into libraries across the world, and the host of book-based Pi builds we’ve come across, the love seems to be mutual.

We’ve seen the Raspberry Pi introduced into the Wordery bookseller warehouse, a Pi-powered page-by-page book scanner by Jonathon Duerig, and these brilliant text-to-speech and page turner projects that use our Pis!

Did I say we love books? In fact we love them so much that members of our team have even written a few.*

If you’ve set up any sort of digital making event in a library, have in some way incorporated Raspberry Pi into your own personal book collection, or even managed to recreate the events of your favourite story using digital making, make sure to let us know in the comments below.

* Shameless plug**

Fancy adding some Pi to your home library? Check out these publications from the Raspberry Pi staff:

A Beginner’s Guide to Coding by Marc Scott

Adventures in Raspberry Pi by Carrie Anne Philbin

Getting Started with Raspberry Pi by Matt Richardson

Raspberry Pi User Guide by Eben Upton

The MagPi Magazine, Essentials Guides and Project Books

Make Your Own Game and Build Your Own Website by CoderDojo

** Shameless Pug

 

The post Shelfchecker Smart Shelf: build a home library system appeared first on Raspberry Pi.